Tuesday, January 17, 2012

Forever young?

I was asked by the Guardian to write an online story about the new ‘youth cream’ from L’Oreal. I think they were anticipating a debunking job, but I guess I learnt here the difference between skepticism and cynicism. I’m not really interested in whether these things work or not (whatever ‘work’ can mean in this instance), but I had to admit that there was some kind of science behind this stuff, even if I see no proof yet that it has any lasting effect on wrinkles. So I was overcome by an attack of fairness (who said "gullibility"?). This is what resulted.
___________________________________________________

I don’t suppose I’m in the target group for Yves Saint Laurent’s new skin cream Forever Youth Liberator – but what if I did want to know whether it’s worth shelling out sixty quid for a 50 ml tub? I could be wowed by the (strangely similar) media reports. “It is likely to be one of the most sought after face creams ever”, says the Telegraph, “5,000 women have already pre-ordered a face cream using ingredients which scientists claimed would change the world.” Or as the Daily Mail puts it, the cream is “hailed as the ‘holy grail’ of anti-ageing.” (You have to read on to discover that it’s Amandine Ohayon, general manager of Yves Saint Laurent, who is doing the hailing here.)

But I’m hard to please. I want to know about the science supporting these claims. After all, cosmetics companies have been trying to blind us with science for years – perhaps ever since the white coats began to appear in the DuPont chemical company’s ads (“Better living through chemistry”) in the 1930s. Recently we’ve had skin creams loaded with nano-capsules, vitamins A, C and E, antioxidants and things with even longer names.

“The science behind the brand lies in the groundbreaking technology of Glycobiology”, one puff tells us. “It’s been noted as the future in the medical field, the fruit of more than 100 years of research and recognized by seven Nobel Prizes.” The Telegraph, meanwhile, parrots the PR that, “the cream has been 20 years in development, and has the backing of the Max Planck Institute in Germany.”

I rather wish that, as a chemist, I could say this is all tripe. But it’s not as simple as, say, claims by bottled-water companies to have a secret process that alters the molecular structure of water to assist hydration. For example, it’s true that glycobiology is a big deal. This field studies an undervalued and once unfashionable ingredient of living cells: sugars. Glycans are complicated sugar molecules that play many important biological roles. Attached to proteins at the surfaces of our cells, such sugars act as labels that distinguish different cell types – for example, they determine your blood group. Glycans and related biochemicals are an essential component of the way our cells recognise and communicate with one another.

Skin cells – essentially, tissue-generating cells called fibroblasts – produce glycans and other substances that form a surrounding extracellular matrix. Some of these glycans attract water and keep the skin plump and soft. But their production declines as fibroblasts age, and so the skin becomes dry and wrinkled. Skin creams routinely contain glycoproteins and glycans to redress this deficit.

Fine – but what’s so different about the new cream? It’s based on a combination of artificial glycans trademarked Glycanactif. Selfridges tells us that they “unlock the cells to reactivate their vital functions and liberate the youth potential at all levels of the skin”. Well, it would be nice if cells really were little boxes brimming with ‘youth potential’, just waiting to be ‘unlocked’, but this statement is basically voodoo.

So I contacted YSL. And – what do you know? – they sent me some useful science. It’s surrounded by gloss and puff (“Youth is a state of mind that cannot live without science” – meaning what, exactly?), and exposed as the source of that garbled soundbite from Selfridges. But it also shows that YSL has enlisted some serious scientists, most notably Peter Seeberger, a specialist in glycan chemistry at the Max Planck Institute of Colloids and Interfaces in Berlin. And it explains that, instead of just supplying a source of glycans in the extracellular matrix to make up for their reduced production in ageing cells, Glycanactif apparently binds to glycan receptors on the cell surface and stimulates them to start making the molecules (including other glycans and related compounds) needed for healthy skin.

Tough-skinned cynic that I am about the claims of cosmetics manufacturers, I am nonetheless emolliated, if not exactly rejuvenated. True, there’s nothing in the leaflet which proves that FYL does a better job than other skin creams. The science remains very sketchy in places. And, as with any claims for cosmetics, if this were a drug we’d reserve judgement until the long-term clinical trials. But I’m offered a troupe of serious scientists ready to talk about the work. I’m open to persuasion.

Still, it puzzles me. How many of the thousands of advance orders, or no doubt the millions to come, will have been based on examination of the technical data? I know we lack the time, and usually the expertise, for such rigour. So what instead informs our decision to shell out sixty quid on a tiny tub of youthfulness? And if the science was all nonsense, would it make a difference?

Monday, January 16, 2012

The truth about Einstein's wife

Some weeks back I mentioned in passing in my Guardian column the far-fetched claim that Einstein’s first wife Mileva Marić was partly or even primarily responsible for the ideas behind his theory of relativity. Allen Esterson has written to me to point out that this claim is still widely circulated and accepted as established fact by some people. Indeed, he says that “the 2008-2009 EU Europa Diary for secondary school children (print run 3 million) had the following: ‘Did you know? Mileva Marić, Einstein's first wife, confidant and colleague – and co-developer of his Theory of Relativity – was born in what is now Serbia’”. Seems to me that this sort of thing (and the concomitant notion that this ‘truth’ has been long suppressed) ultimately doesn’t do the feminist cause any good. Allen has also posted on the web site Butterflies and Wheels a critique of an independent short film that tries to promote the myth – you can find it here.

Wednesday, January 11, 2012

How big is yours?

Here, then, is my column from last Saturday’s Guardian.

While writing this, I discovered that Google Scholar has an add-on that will tot up your citations to establish an h-index. From that, I gather that mine is around 29. One of the comments on the Guardian thread points out that Richard Feynman has an h of 23. As Nigel Tufnel famously said apropos Jimmy Page, “I think that says quite a lot.”

_________________________________________________________________

Many scientists worry that theirs isn’t big enough. Even those who sniff that size isn’t everything probably can’t resist taking a peek to see how they compare with their rivals. The truly desperate can google for dodgy techniques to make theirs bigger.

I’m talking about the h-index, a number that supposedly measures the quality of a researcher’s output. And if the schoolboy double entendres seem puerile, there does seem to be something decidedly male about the notion of a number that rates your prowess and ranks you in a league table. Given that, say, the 100 chemists with the highest h-index are all male, whereas 1 in 4 postdoctoral chemists is female, the h-index does seem to be the academic equivalent of a stag’s antlers.

Few topics excite more controversy among scientists. When I spoke about the h-index to the German Physical Society a few years back, I was astonished to find the huge auditorium packed. Some deplore it; some find it useful. Some welcome it as a defence against the subjective capriciousness of review and tenure boards.

The h-index is named after its inventor, physicist Jorge Hirsch, who proposed it in 2005 precisely as a means of bringing some rigour to the slippery question of who is most deserving of a grant or a post. The index measures how many highly cited papers a scientist has written: your value of h is the number of your papers that have each been cited by (included in the reference lists of) at least h other papers. So a researcher with an h of 10 has written 10 papers that have received at least 10 citations each.
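The definition above is simple enough to compute in a few lines. Here is a sketch of my own (the function name and the sample citation counts are invented for illustration, not taken from Hirsch’s paper):

```python
def h_index(citations):
    """Return the h-index for a list of per-paper citation counts:
    the largest h such that h papers each have at least h citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still has at least its rank in citations
        else:
            break
    return h

# An author whose papers are cited [10, 8, 5, 4, 3] times has h = 4:
# four papers with at least 4 citations each, but not five with 5.
print(h_index([10, 8, 5, 4, 3]))  # prints 4
```

Note how one blockbuster paper on its own still leaves h at 1 – the Kroto effect discussed below.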

The idea is that citations are a measure of quality: if a paper reports something important, other scientists will refer to it. That’s broadly a reasonable assumption, but not airtight. There’s evidence that some papers get highly cited by chance, because of a runaway copycat effect: people cite them just because others have, in the same way that some mediocre books and songs become unaccountably popular.

But to get a big h-index, it’s not enough to write a few influential papers. You have to write a lot of them. A single paper could transform a field of science and win its author a Nobel prize, while doing little for the author’s h-index if he or she doesn’t write anything else of note. Nobel laureate chemist Harry Kroto is ranked an apparently undistinguished 264th in the h-index list of chemists because his (deserved) fame rests largely on a single breakthrough paper in 1985.

That’s one of the criticisms of the h-index – it imposes a one-size-fits-all view of scientific impact. There are many other potential faults. Young scientists with few publications score lower, however brilliant they are. The value of h can be artificially boosted – slightly but significantly – by scientists repeatedly citing their own papers. It fails to distinguish the relative contributions to the work in many-author papers. The numbers can’t be compared across disciplines, because citation habits differ.

Many variants of the h-index have been proposed to get round these problems, but there’s no perfect answer, and one great virtue of the h-index is its simplicity, which means that its pros and cons are relatively transparent. In any case, it’s here to stay. No one officially endorses the h-index for evaluation, but scientists confess that they use it all the time as an informal way of, say, assessing applicants for a job. The trouble is that it’s precisely for average scientists that the index works rather poorly: small differences in small h-indices don’t tell you very much.

The h-index is part of a wider trend in science to rely on metrics – numbers rather than opinions – for assessment. For some, that’s like assuming that book sales measure literary merit. It can distort priorities, encouraging researchers to publish all they can and follow fads (it would have served Darwin poorly). But numbers aren’t hostage to fickle whim, discrimination or favouritism. So there’s a place for the h-index, as long as we can keep it there.

Monday, January 09, 2012

No secret

Before I post my last Guardian column, here’s one that got away: I’d planned to write about a paper in PNAS (not yet online) on blind testing of new and old violins, until – as I was half-expecting – Ian Sample wrote a regular story on it. So this had to be scrapped.

Radio 4's PM programme covered the story too, but in a somewhat silly way. They got a sceptical professor from the Royal College of Music to come on and play some Bach on a new and an old instrument, and asked listeners to see if they could identify which was which. A good demonstration, I suppose, of exactly why double-blind tests were invented.
___________________________________________________________

At last we now know Antonio Stradivari’s secret. Violinists and craftsmen have long speculated about what makes the legendary Italian luthier’s instruments sound so special. Does the magic lie in the forgotten recipe for the varnish, or in a chemical pre-treatment of the wood? Or perhaps it’s the sheer passage of time that mellows the tone into such richness?

Alas, none of these. A new study by French and US researchers suggests that the reason the sound of a Stradivari is so venerated is that it has never before been properly put to the test.

Twenty-one experienced violinists were asked to blind-test six violins – three new, two Stradivaris and one made by the equally esteemed eighteenth-century instrument-maker Guarneri del Gesù. Most of the players were unable to tell if an instrument was new or old, and their preferences bore no relation to cost or age. Although their opinions varied, the favourite choice was a modern instrument, and the least favourite, by a clear margin, was a Stradivari.

OK, it’s just a small-scale test – getting hold of even three old violins (combined value $10m) was no mean feat. And you’ll have to trust me that the researchers took all the right precautions. The tests were, for example, literally double-blind – both the researchers and the players wore welders’ goggles in dim lighting to make sure they couldn’t identify the type of instrument by eye. And in case you’re thinking they just hit on a dud Stradivari (which do exist), the one with the worst rating had been owned by several well-known violinists.

This is embarrassing for the experts, both scientists and musicians. In judging quality, “the opinions of different violinists would coincide absolutely”, one acoustics expert has previously said. “Any musician will tell you immediately whether an instrument he is playing on is an antique instrument or a modern one”, claimed another. And a distinguished violinist once insisted to me that the superior sound of the most expensive old instruments is “very real”.

But acoustic scientists have struggled to identify any clear differences between the tone of antique and (good) new instruments. And as for putting belief to the test, an acoustic scientist once told me that he doubted any musicians would risk exposing themselves to a blind test, preferring the safety of the myth.

That’s why the participants in the latest study deserve credit. They’re anonymous, but they must know how much fury they could bring down on their heads. If you’ve paid $3m for one of the 500 or so remaining Strads, you don’t want to be told that a modern instrument would sound as good at a hundredth of the price.

But that’s perhaps the problem in the first place. In a recent blind wine-tasting study, the ‘quality’ was deemed greater when the subjects were told that the bottle cost more.

Is there a killjoy aspect to this demonstration that the mystique of the Strad evaporates under scientific scrutiny? Is it fair to tell violinists that their rapture at these instruments’ irreplaceable tone is a neural illusion? Is this an example of Keats’ famous criticism that science will “clip an Angel’s wings/Conquer all mysteries by rule and line”?

I suspect that depends on whether you want to patronize musicians or treat them as grown-ups – as well as whether you wish to deny modern luthiers the credit they are evidently due. In fact, musicians themselves sometimes chafe at the way their instruments are revered over their own skill. The famous violinist Jascha Heifetz, who played a Guarneri del Gesù, pointedly implied that it’s the player, not the instrument, who makes the difference between the sublime and the mediocre. A female fan once breathlessly complimented him after a performance on the “beautiful tone” of his violin. Heifetz turned around and bent to put his ear close to the violin lying in its case. “I don’t hear anything”, he said.

Wednesday, January 04, 2012

Science is a joke

Belatedly, here is last Saturday’s Critical Scientist column for the Guardian.
_____________________________________________________________________

Is there something funny about science? Audiences at Robin Ince’s seasonal slice of rationalist revelry, Nine Lessons and Carols for Godless People, just before Christmas seemed to think so. This annual event at the Bloomsbury Theatre in London is far more a celebration of the wonders of science than an exercise in atheistic God-baiting. In fact God gets a rather easy ride: the bad science of tabloids, fundamentalists, quacks and climate-change sceptics provides richer comic fodder.

Time was when London theatre audiences preferred to laugh at science rather than with it, most famously with Thomas Shadwell’s satire on the Royal Society, The Virtuoso, in 1676. Samuel Butler and Jonathan Swift followed suit in showering the Enlightenment rationalists with ridicule. In modern times, scientists (usually mad) remained the butt of such jokes as came their way.

They haven’t helped matters with a formerly rather feeble line in laughs. Even now there are popularizing scientists who imagine that another repetition of the ‘joke’ about spherical cows will prove them all to be jolly japers. And while allowing that much humour lies in the delivery, there are scant laughs still to be wrung from formulaic juxtapositions of the exotic with the mundane (“imagine looking for the yoghurt in an eleven-dimensional supermarket!”), or anthropomorphising the sexual habits of other animals.

Meanwhile, science has its in-jokes like any other profession. A typical example: A neutron goes into a bar and orders a drink. “How much?”, he asks the bartender, who replies: “For you, no charge”. Look, I’m just telling you. Occasionally the humour is so rarefied that its solipsism becomes virtually a part of the joke itself. Thomas Pynchon, for instance, provides a rare example of an equation gag, which I risk straining the Guardian’s typography to repeat: ∫1/cabin d(cabin) = log cabin + c = houseboat. This was the only calculus joke I’d ever seen until Matt Parker produced a better one at Nine Carols. Speaking of rates of flow (OK, it was flow of poo, d(poo)/dt – some things never fail), he admitted that this part of his material was a little derivative.

The rise of stand-up has changed everything. Not only do we now have stand-ups who specialize in science, but several, such as Timandra Harkness and Helen Keen, are women, diluting the relentless blokeishness of much science humour. Some aim to be informative as well as funny. At the Bloomsbury you could watch Dr Hula (Richard Vranch) and his assistant demonstrate atomic theory and chemical bonding with hula hoops (more fun than perhaps it sounds).

As Ben Goldacre’s readers know, good jokes often have serious intent. Perhaps the most notorious scientific example was not exactly a joke at all. Certainly, when in 1996 the physicist Alan Sokal got a completely spurious paper on ‘quantum hermeneutics’ published in the journal of postmodern criticism Social Text, the postmodernists weren’t laughing. And Sokal himself was more intent on proving a point than making us giggle. Arguably funnier was the epilogue: in the early 2000s, a group of papers on quantum cosmology published in physics journals by the French brothers Igor and Grichka Bogdanov was so incomprehensible that this was rumoured to be the postmodernists’ revenge – until the indignant Bogdanovs protested that they were perfectly serious.

But my favourite example of this sort of prank was a paper submitted by computer scientists David Mazières and Eddie Kohler to one of the ‘junk science’ conferences that plague their field with spammed solicitations. The paper had a title, abstract, text, figures and captions that all consisted solely of the phrase “Get me off your fucking email list”. Mazières was keen to present the paper at the conference but was never told if it was accepted or not. Reporting the incident made me probably the first and only person to say ‘fucking’ in the august pages of Nature* – not, I admit, the most distinguished achievement, but we must take our glory where we can find it.

*Apparently not, according to Adam Rutherford on the Guardian site...

Monday, January 02, 2012

The new history

Here is the original draft of the end-of-year essay I published in the last 2011 issue of Nature.
___________________________________________________

2011 shows that our highly networked society is ever more prone to abrupt change. The future of our complex world depends on building resilience to shocks.

In the 1990s, American political scientist Francis Fukuyama, now at Stanford, predicted that the world was approaching the ‘end of history’ [1]. Like most smart ideas that prove to be wrong, Fukuyama’s was illuminating precisely for its errors. Events this year have helped to reveal why.

Fukuyama argued that after the collapse of the Soviet Union, liberal democracy could be seen as the logical and stable end point of civilization. Yet the prospect that the world will gradually replicate the US model of liberal democracy, as Fukuyama hoped, looks more remote today than it did at the end of the twentieth century.

This year we have seen proliferating protest movements in the fallout from the financial crisis – not just the cries of the marginalized and disaffected, but genuine challenges to the legitimacy of the economic system on which recent liberal democracies have been based. In the face of the grave debt crisis in Greece, the wisdom of deploying democracy’s ultimate tool – the national referendum – to solve it was questioned. The political situation in Russia and Turkey suggests that there is nothing inexorable or irreversible about a process of democratization, while North Africa and the Middle East demonstrate to politicians what political scientists could already have told them: that democratization can itself inflame conflict, especially when it is imposed in the absence of a strong pre-existing state [2,3]. Meanwhile, China continues to show that aggressive capitalism depends on neither liberalism nor democracy. As a recent report of the US National Intelligence Council admits, in the coming years “the Western model of economic liberalism, democracy, and secularism, which many assumed to be inevitable, may lose its luster” [4].

The real shortcoming behind Fukuyama’s thesis, however, was not his faith in democracy but that he considered history to be gradualist: tomorrow’s history is more (or less) of the same. The common talk among political analysts now is of ‘discontinuous change’, a notion raised by Irish philosopher Charles Handy 20 years ago [5], and alluded to by President Obama in his speech at the West Point Military Academy last year, when he spoke of ‘moments of change’. Sudden disruptive events, particularly wars, have of course always been a part of history. But they would come and go against a slowly evolving social, cultural and political backdrop. Now the potential for discontinuous social and political change is woven into the very fabric of global affairs.

Take the terrorist attack on the World Trade Centre’s twin towers in 2001. This was said by many to have proved Fukuyama wrong – but on this tenth anniversary of that event we can now see more clearly in what sense that was so. It was not simply that this was a significant historical event – Fukuyama was never claiming that those would cease. Rather, it was a harbinger of the new world order, which the subsequent ‘war on terror’ failed catastrophically to acknowledge. That was a war waged in the old way, by sending armies to battlegrounds (in Afghanistan and Iraq) according to Carl von Clausewitz’s old definition, in his classic 1832 work On War, of a continuation of international politics by other means. But not only were those wars in no sense ‘won’, they were barely wars at all – illustrating the remark of American strategic analyst Anthony Cordesman that “one of the lessons of modern war is that war can no longer be called war” [6]. Rather, armed conflict is a diffuse, nebulous affair, no longer corralled from peacetime by declarations and treaties, no longer recognizing generals or even statehood. In its place is a network of insurgents, militias, terrorist cells, suicide bombers, overlapping and sometimes competing ‘enemy’ organizations [7]. Somewhere in this web we have had to say farewell to war and peace.

Network revolutions

The nature of discontinuous change is often misunderstood. It is sometimes said – this is literally the defence of traditional economists in their failure to predict the ongoing financial and national-debt crises – that no one can be expected to foresee such radical departures from the previous quotidian. They come, like a hijacked aircraft, out of a clear blue sky. Yet social and political discontinuities are rarely if ever random in that sense, even if there is a certain arbitrary character to their immediate triggers. Rather, they are abrupt in the same way, and for the same reasons, that phase transitions are abrupt in physics. In complex systems, including social ones, discontinuities don’t reflect profound changes in the governing forces but instead derive from the interactions and feedbacks between the component parts. Thus, discontinuities in history are precisely what you'd expect if you start considering social phenomena from a complex-systems perspective.

Experience with natural and technological complex systems teaches us, for example, that highly connected networks of strong interactions create a propensity for avalanches, catastrophic failures, and systemic ruptures [8,9]: in short, for discontinuous change.
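The avalanche-prone character of densely coupled networks can be seen even in a toy model – this sketch is entirely my own illustration, not drawn from the essay or its references, and the model, thresholds and function name are invented. Each node fails once a given fraction of its neighbours has failed, and a single random failure can then sweep the whole system:

```python
import random

def cascade_size(n, neighbours_per_node, threshold, seed=0):
    """Simulate a threshold cascade on a random undirected network.
    Each intact node fails once at least `threshold` (a fraction) of
    its neighbours have failed. Returns the number of failed nodes
    after one randomly chosen node is knocked out."""
    rng = random.Random(seed)
    edges = {i: set() for i in range(n)}
    for i in range(n):
        # Keep adding random partners until node i has enough links.
        while len(edges[i]) < neighbours_per_node:
            j = rng.randrange(n)
            if j != i:
                edges[i].add(j)
                edges[j].add(i)
    failed = {rng.randrange(n)}  # the single initial failure
    changed = True
    while changed:
        changed = False
        for node in range(n):
            if node in failed:
                continue
            down = sum(1 for nb in edges[node] if nb in failed)
            if down / len(edges[node]) >= threshold:
                failed.add(node)
                changed = True
    return len(failed)
```

With a low threshold almost any initial failure tends to cascade system-wide, while a high threshold tends to confine it locally – the abrupt, all-or-nothing behaviour that invites the comparison with phase transitions.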

So it should come as no surprise that today’s highly networked, interconnected world, replete with cell phones, iPads and social media, is prone to abrupt changes in course. It is much more than idle analogy that connects the cascade of minor failures leading to the 2003 power blackout of eastern North America with the freezing of liquidity in the global banking network in 2007-8.

Some see the revolts in Tunisia and Egypt in this way too, dubbing them ‘Twitter revolutions’ because of the way unrest and news of demonstrations were spread on social networks. Although this is an over-simplification, it is abundantly clear that networking supplied the possibility for a random local event to trigger a major one. The Tunisian revolt was set in motion by the self-immolation of a street vendor, Mohammed Bouazizi, in Sidi Bouzid, in protest at harsh treatment by officials. Three months earlier there was a similar case in the city of Monastir – but no one knew about it because the news was not spread on Facebook.

It was surely not without reason that Twitter and Facebook were shut down by both the Tunisian and Egyptian authorities. The issue is not so much whether they ‘caused’ the revolutions, but that their existence – and the concomitant potential for mobilizing the young, educated populations of these countries – can alter the way things happen in the Middle East and beyond. These same tools are now vital to the Occupy protests disrupting complacent financial districts worldwide, from New York to Taipei, drawing attention to issues of social and economic inequality.

Social media seem also to have the potential to facilitate qualitatively new collective behaviours, such as the riots during the summer in the UK. These brief, destructive paroxysms are still an enigma. Unlike previous riots, they were not confined either to particular demographic subsets of the population or to areas of serious social deprivation. They had no obvious agenda, not even a release of suppressed communal fury – although there was surely a link to post-financial-crash austerity policies. One might almost call them events that grew simply because they could. Some British politicians suggested that Twitter should be disabled in such circumstances, displaying not only a loss of perspective (some of the same people celebrated the power of networking in the Arab Spring) but also a failure to understand the new order. After all, police monitoring of Twitter in some UK cities provided information that helped suppress rioting.

What all these events really point towards is the profound impact of globalization. They show how deep and dense the interdependence of economies, cultures and institutions has become, in large part thanks to the pervasive nature of information and communication technologies. And with this transformation come new, spontaneous modes of social and political organization, from terrorist and protest networks to online consumerism – modes that are especially prone to discontinuous change. Nothing will work that fails to take this new interconnectedness into account: not the economy, not policing, not democracy.

The path forwards

Such extreme interdependence makes it hard to find, or even to meaningfully define, the causes of major events. The US subprime mortgage problem caused the financial collapse only in the way Bouazizi’s immolation caused the Arab Spring – it could equally have been something else that set events in motion. The real vulnerabilities were systemic: webs of dependence that became destabilized by, say, runaway profits in the US banking industry, or rising food prices in North Africa. This means that potential solutions must lie there too.

Complex systems can rarely if ever be controlled by top-down measures. Instead, they must be managed by guiding the trajectories from the bottom up [10]. In a much simpler but instructive example, traffic lights may direct flows more efficiently if they are given adaptive autonomy and allowed to self-organize their switching, rather than imposing a rigid, supposedly optimal sequence [11]. The robustness of the Internet to random server failures is precisely due to the fact that no one designed it – it grew its ‘small-world’ topology spontaneously.

This does not imply that political interventions are doomed to fail, but just that they must take other forms from those often advanced today. “Complex systems cannot be steered like a bus”, says Dirk Helbing of the Swiss Federal Institute of Technology (ETH) in Zurich, a specialist on the understanding and management of complex social systems. “Attempts to control the systems from the top down may be strong enough to disturb its intrinsic self-organization but not strong enough to re-establish order. The result would be chaos and inefficiency. Modern governance typically changes the institutional framework too quickly to allow individuals and companies to adapt. This destroys the hierarchy of time scales needed to establish stable order.”

But these systems are nevertheless manageable, Helbing insists – not by imposing structures but by creating the rules needed to allow the system to find its own stable organization. “This can’t be ensured by a regulatory authority that monitors the system and tries to enforce specific individual action”, he says.

That’s why theories or ideologies are likely to be less effective at predicting or averting crises than scenario modelling. It’s why problems need to be considered at several hierarchical levels, probably with multiple, overlapping models, and why solutions must have scope for adaptation and flexibility. And although cascading crises and discontinuous changes may be unpredictable, the connections and vulnerabilities that permit them are not. Planning for the future, then, might not be so much a matter of foreseeing what could go wrong as of making our systems and institutions robust enough to withstand a variety of shocks. This is how the new history will work.

References
1. Fukuyama, F. The End of History and the Last Man (Penguin, London, 1992).
2. Mansfield, E. D. & Snyder, J. Int. Secur. 20, 5–38 (1995).
3. Cederman, L.-E., Hug, S. & Wenger, A. in Democratization (eds Grimm, S. & Merkel, W.) 15, 509–524 (Routledge, London, 2008).
4. National Intelligence Council, Global Trends 2025: A Transformed World (US Government Printing Office, Washington DC, 2008).
5. Handy, C., The Age of Unreason (Harvard Business School Press, Boston, 1990).
6. Quoted in Strachan, H. Europaeum Lecture, Geneva, 9 November 2006, p. 12.
7. Bohorquez, J. C., Gourley, S., Dixon, A. R., Spagat, M. & Johnson, N. F. Nature 462, 911–914 (2009).
8. Barabási, A.-L. IEEE Control Syst. Mag. 27(4), 33–42 (2007).
9. Vespignani, A. Nature 464, 984–985 (2010).
10. Helbing, D. (ed.) Managing Complexity: Insights, Concepts, Applications (Springer, Berlin, 2008).
11. Lämmer, S. & Helbing, D. J. Stat. Mech. P04019 (2008).

Thursday, December 22, 2011

400 years of snowflakes


Here is the pre-edited version of my In Retrospect piece for Nature celebrating the 400th anniversary of Kepler’s seminal little treatise on snowflakes.
_________________________________________________________________

Did anyone ever receive a more exquisite New Year’s gift than the German scholar Johannes Matthäus Wackher von Wackenfels, four hundred years ago? It was a booklet of just 24 pages, written by his friend Johannes Kepler, court mathematician to the Holy Roman Emperor Rudolf II in Prague. The title was De nive sexangula (On the Six-Cornered Snowflake), and herein Kepler attempted to explain why snowflakes have this striking hexagonal symmetry. Not only is the booklet charming and witty, but it seeded the notion from which all of crystallography blossomed: that the geometric shapes of crystals can be explained in terms of the packing of their constituent particles.

Like Kepler, Wackher was a self-made man of humble origins whose brilliance earned him a position in the imperial court. By 1611 he had risen to the position of privy councillor, and was a man of sufficient means to act as Kepler’s sometime patron. Sharing an interest in science, he was also godfather to Kepler’s son and in fact a distant relative of Kepler himself. It is sometimes said that Kepler’s booklet was in lieu of a regular gift which the straitened author, who frequently had to petition Rudolf’s treasury for his salary, could not afford. In his introduction, Kepler says he had recently noticed a snowflake on the lapel of his coat as he crossed the Charles Bridge in Prague, and had been moved to ponder on its remarkable geometry.

Kepler came to the imperial court in 1600 as an assistant to the Danish astronomer Tycho Brahe. When Tycho died the following year, Kepler became his successor, eagerly seizing the opportunity to use Tycho’s incomparable observational data to deduce the laws of planetary motion that Isaac Newton’s gravitational theory later explained.

Kepler’s analysis of the snowflake comes at an interesting juncture. It unites the older, Neoplatonic idea of a geometrically ordered universe that reflects God’s wisdom and design with the emerging mechanistic philosophy, in which natural phenomena are explained by proximate causes that, while they may be hidden or ‘occult’ (like gravity), are not mystical. In Mysterium Cosmographicum (1596) Kepler famously concocted a model of the cosmos with the planetary orbits arranged on the surfaces of nested polyhedra, which looks now like sheer numerology. But unlike Tycho, he was a Copernican and came close to formulating the mechanistic gravitational model that Newton later developed.

Kepler was not by any means the first to notice that the snowflake is six-sided. This is recorded in Chinese documents dating back to the second century BCE, and in the Western world the snowflake’s ‘star-like’ forms were noted by Albertus Magnus in the thirteenth century. Later, René Descartes included drawings of sixfold stars and ice ‘flowers’ in his meteorological book Les Météores (1637), while Robert Hooke’s microscopic studies recorded in Micrographia (1665) revealed the elaborate, hierarchical branching patterns.

“There must be a cause why snow has the shape of a six-cornered starlet”, Kepler wrote. “It cannot be chance. Why always six? The cause is not to be looked for in the material, for vapour is formless and flows, but in an agent.” This ‘agent’, he suspected, might be mechanical, namely the orderly stacking of frozen ‘globules’ that represent “the smallest natural unit of a liquid like water” – not explicitly atoms, but as good as. Here he was indebted to the English mathematician Thomas Harriot, who acted as navigator for Walter Raleigh’s voyages to the New World in 1584-5. Raleigh sought Harriot’s expert advice on the most efficient way to stack cannonballs on the ship’s deck, prompting the ingenious Harriot to theorize about the close-packing of spheres. Around 1606-8 he communicated his thoughts to Kepler, who returned to the issue in De nive sexangula. Kepler asserted that hexagonal packing “will be the tightest possible, so that in no other arrangement could more pellets be stuffed into the same container.” This assertion about maximal close-packing became known as Kepler’s conjecture, which was proved using computational methods only in 1998 (published in 2005) [1].
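The density in question can be stated precisely (this equation is a modern gloss, not something Kepler wrote). The fraction of space filled by the hexagonal close packing, equivalently the face-centred cubic packing, which Hales proved optimal, is:

```latex
\phi = \frac{\pi}{3\sqrt{2}} \approx 0.74048
```

In other words, no arrangement of identical spheres can fill more than about 74 per cent of space.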

Less commonly acknowledged as a source of inspiration is the seventeenth-century enthusiasm for cabinets of curiosities (Wunderkammern), collections of rare and marvelous objects from nature and art that were presented as microcosms of the entire universe. Rudolf II had one of the most extensive cabinets, to which Kepler would have had privileged access. The forerunners of museum collections, the cabinets have rarely been recognized as having any real influence on the nascent experimental science of the age. But Kepler mentions in his booklet having seen in the palace of the Elector of Saxony in Dresden “a panel inlaid with silver ore, from which a dodecahedron, like a small hazelnut in size, projected to half its depth, as if in flower” – a showy example of the metalsmith’s craft which may have stimulated his thinking about how an emergent order gives crystals their facets.

Yet despite his innovative ideas, in the end Kepler is defeated by the snowflake’s ornate form and its flat, plate-like shape. He realizes that although the packing of spheres creates regular patterns, they are not necessarily hexagonal, let alone as ramified and ornamented as that of the snowflake. He is forced to fall back on Neoplatonic occult forces: God, he suggests, has imbued the water vapour with a “formative faculty” that guides its form. There is no apparent purpose to the flake’s shape, he observes: the “formative reason” must be purely aesthetic or frivolous, nature being “in the habit of playing with the passing moment.” That delightful image, which touches on the late Renaissance debate about nature’s autonomy, remains resonant today in questions about the adaptive value (or not) of some complex patterns and forms in biological growth [2]. Towards the end of his inconclusive tract Kepler offers an incomparably beautiful variant of ‘more research is needed’: “As I write it has again begun to snow, and more thickly than a moment ago. I have been busily examining the little flakes.”

Kepler’s failure to explain the baroque regularity of the snowflake is no disgrace, for not until the 1980s was this understood as a consequence of branching growth instabilities biased by the hexagonal crystal symmetry of ice [3]. In the meantime, Kepler’s vision of crystals as stackings of particles informed the eighteenth-century mineralogical theory of René Just Haüy, the basis of all crystallographic understanding today.

But the influence of Kepler’s booklet goes further. It was in homage that crystallographer Alan Mackay called his seminal 1981 paper on quasicrystals ‘De nive quinquangula’ [4]. Here, three years before the experimental work that won Dan Shechtman this year’s Nobel prize in chemistry, Mackay showed that a Penrose tiling could, if considered the basis of an atomic ‘quasi-lattice’, produce fivefold diffraction patterns. Quasicrystals showed up in metal alloys, not snow. But Mackay has indicated privately that it might indeed be possible to induce water molecules to pack this way, and quasicrystalline ice was recently reported in computer simulations of water confined between plates [5]. Whether it can furnish five-cornered snowflakes remains to be seen.

References
1. Hales, T. C. Ann. Math. 2nd ser. 162, 1065-1185 (2005).
2. Rothenberg, D. Survival of the Beautiful (Bloomsbury, New York, 2011).
3. Ben-Jacob, E., Goldenfeld, N., Langer, J. S. & Schön, G. Phys. Rev. Lett. 51, 1930-1932 (1983).
4. Mackay, A. L. Kristallografiya 26, 910-919 (1981); in English, Sov. Phys. Crystallogr. 26, 517-522 (1981).
5. Johnston, J. C., Kastelowitz, N. & Molinero, V. J. Chem. Phys. 133, 154516 (2010).

Reputations matter

Rather a lot of posts all at once, I fear. Here is the first, which I meant to put up earlier – last Saturday’s column in the Guardian.
_______________________________________________________________
Johannes Stark was a German physicist whose Nobel prize-winning discovery in 1913, the Stark effect (don’t ask), is still useful today. Just the sort of person, then, who you might expect to have scientific institutes or awards named after him.

The fact that there aren’t any is probably because Stark was a Nazi – a bitter and twisted anti-Semite who rejected relativity because Einstein was Jewish.

Scientists concur that, while your discovery should bear your name no matter how despicable (or just plain crazy) you are, you need a little virtue to be commemorated in other ways.

But how little? Everyone knows Isaac Newton was a grumpy and vindictive old sod, but that hardly seems reason to begrudge the naming of the Isaac Newton Institute for Mathematical Sciences in Cambridge. Yet when the Dutch Nobel laureate Peter Debye was accused in a 2006 book of collusion with the Nazis during his career in pre-war Germany, the Dutch government insisted that the Debye Institute at the University of Utrecht be renamed, and an annual Debye Prize awarded in his hometown of Maastricht was suspended.

Reputations matter, then. Two researchers have claimed this week to lay to rest the suggestion that Charles Darwin stole some of his ideas on natural selection from Alfred Russel Wallace, who sent Darwin a letter explaining his own theory in 1858. Darwin passed it on to other scientific authorities as Wallace requested, but it has been suggested that he first sat on it for weeks and revised his theory in the light of it.

No proper Darwin historian ever took that accusation seriously, not least because everything we know about Darwin’s character makes it highly implausible. But Wallace has admirers on the fringe who identify with his image of the wronged outsider and will stop at nothing to see him given priority. And knocking Darwin’s character is a favourite tactic of creationists for discrediting his science.

This isn’t the last word on that matter, not least because the dates of Wallace’s letter still aren’t airtight. Evolutionary geneticist Steve Jones has rightly said that “The real issue is the science and not who did it.” Oh, but we do care who did it. We do care if Einstein nicked his ideas from his first wife Mileva Maric (another silly notion), or if Gottfried Leibniz pilfered the calculus from Newton.

Partly we like the whiff of scandal. Partly we love seeing giants knocked off their pedestals. But in cases like Debye’s there are more profound questions. Debye finally left his physics institute in Berlin and moved to the US in 1940 because he refused to give up his Dutch citizenship and become German, as the Nazis demanded when they commandeered his institute for war research. Into the breach stepped Werner Heisenberg, among others, whose work on the nuclear programme still excites debate about whether or not he tried to make an atom bomb for Hitler.

After the war, Heisenberg encouraged the myth that he and his colleagues purposely delayed their research to deny Hitler such power. It’s more likely that they never in fact had to make the choice, since they weren’t given the resources of the Manhattan Project. In any event, Heisenberg began the war patriotically anticipating a quick victory. Yet he was never a Nazi, and today we have the Werner Heisenberg Institute and Prize.

Unlike Stark, Heisenberg and Debye weren’t terrible people – they behaved in the compromised, perhaps naïve way that most of us would in such circumstances. But engraving their names in stone and bronze creates difficulties. It forces us to make them unblemished icons, or conversely tempts us to demonize them. This rush to beatify brings down a weight of moral expectation that few of us could shoulder – even the deeply humane Einstein was no saint towards Maric. Why not give time more chance to weather and blur the images of great scientists, to produce enough distance for us to celebrate their achievements while overlooking their all-too-human foibles?

Wednesday, December 21, 2011

Happy Christmas to the Godless


This week I had the pleasure of taking part in one of Robin Ince’s Nine Lessons and Carols for Godless People at the Bloomsbury Theatre in London. Fending off the “I am not worthy” feeling amidst the likes of Simon Singh, Alexei Sayle and Mark Thomas, and knowing what a terrible idea it would be to try to make people laugh, I plucked a few things from my forthcoming book on curiosity, in particular Kepler’s treatise on snowflakes (on which, more shortly). But I couldn’t resist poking some fun at a few of the scientifically illiterate snowflakes we always get at Christmas, including the one above from dear Ed Miliband. I wanted to offer Ed a little get-out clause for his pentagonal snowflakes on the basis of quasicrystalline ice, but time did not permit.

Anyway, it’s a great show if you still have time to catch the last ones. I did a little interview for a podcast by New Humanist, which I mention mostly so that you can get a flavour of the other folk in the show.

Friday, December 16, 2011

Unweaving tangled relationships

Here’s the original text of my latest news story for Nature.
___________________________________________
A new statistical method discovers hidden correlations in complex data.

The American humorist Evan Esar once called statistics the science of producing unreliable facts from reliable figures. A new technique now promises to make those facts a whole lot more dependable.

Brothers David Reshef of the Broad Institute of MIT and Harvard in Cambridge, Massachusetts, Yakir Reshef of the Weizmann Institute of Science in Rehovot, Israel, and their coworkers have devised a method to extract, from complex data sets, relationships and trends that are invisible to other types of statistical analysis. They describe their approach in a paper in Science today [1].

“This appears to be an outstanding achievement”, says statistician Douglas Simpson of the University of Illinois at Urbana-Champaign. “It opens up whole new avenues of inquiry.”

Here’s the basic problem. You’ve collected lots of data on some property of a system that could depend on many governing factors. To figure out what depends on what, you plot them on a graph.

If you’re lucky, you might find that this property changes in some simple way as a function of some other factor: for example, people’s health gets steadily better as their wealth increases. There are well known statistical methods for assessing how reliable such correlations are.
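For a simple pairwise trend like that, Pearson’s correlation coefficient is the standard first check. Here is a minimal sketch with invented health-and-wealth numbers (the data and variable names are illustrative, not taken from any study):

```python
import numpy as np

# Illustrative data: health improves roughly linearly with wealth,
# plus noise. These numbers are made up for the example.
rng = np.random.default_rng(0)
wealth = rng.uniform(0, 100, size=500)
health = 0.5 * wealth + rng.normal(0, 10, size=500)

# Pearson's r gauges the strength of a linear association:
# +1 is a perfect rising line, -1 a perfect falling one, 0 no linear trend.
r = np.corrcoef(wealth, health)[0, 1]
```

A t-test or permutation test on r then says how reliable the correlation is, and it is exactly this machinery that struggles once many relationships are superimposed.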

But what if there are many simultaneous dependencies in the data? If, say, people are also healthier if they drive less, which might not bear any obvious relation to their wealth (or might even be more prevalent among the less wealthy)? The conflict might leave both relationships hidden from traditional searches for correlations.

The problems can be far worse. Suppose you’re looking at how genes interact in an organism. The activity of one gene could be correlated with that of another, but there could be hundreds of such relationships all mixed together. To a cursory ‘eyeball’ inspection, the data might then just look like random noise.

“If you have a data set with 22 million relationships, the 500 relationships in there that you care about are effectively invisible to a human”, says Yakir Reshef.

And the relationships are all the harder to tease out if you don’t know what you’re looking for in the first place – if you have no a priori reason to suspect that this depends on that.

The new statistical method that Reshef and his colleagues have devised aims to crack precisely those problems. It can spot many superimposed correlations between variables and measure exactly how tight each relationship is, according to a quantity they call the maximal information coefficient (MIC).

A MIC of 1 implies that two variables are perfectly correlated, but possibly according to two or more simultaneous and perhaps opposing relationships: a straight line and a parabola, say. A MIC of zero indicates that there is no relationship between the variables.
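The published algorithm searches over optimized grid partitions, with a cap on the grid size, and normalizes the mutual information of each grid. The sketch below captures that idea using plain equal-width bins; it is a simplification for illustration, not the authors’ method, and the function names are mine:

```python
import numpy as np

def grid_mutual_info(x, y, nx, ny):
    # Mutual information (in bits) of the joint distribution induced
    # by binning the points on an nx-by-ny equal-width grid.
    counts, _, _ = np.histogram2d(x, y, bins=(nx, ny))
    pxy = counts / counts.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal distribution of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal distribution of y
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def mic_sketch(x, y, max_bins=8):
    # Take the best normalized mutual information over a range of grid
    # resolutions. Dividing by log2(min(nx, ny)) bounds each score by 1,
    # so 1 signals a noiseless relationship and 0 signals independence.
    best = 0.0
    for nx in range(2, max_bins + 1):
        for ny in range(2, max_bins + 1):
            score = grid_mutual_info(x, y, nx, ny) / np.log2(min(nx, ny))
            best = max(best, score)
    return best
```

On a noiseless relationship such as y = x this sketch scores 1, while for independent variables it stays near 0. The real MIC additionally optimizes where the grid lines fall, which is what lets it score superimposed relationships highly.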

To demonstrate the power of their technique, the researchers applied it to a diverse range of problems. In one case they looked at factors that influence people’s health globally in data collected by the World Health Organization. Here they were able to tease out superimposed trends – for example, how female obesity increases with income in the Pacific Islands, where it is considered a sign of status, while in the rest of the world there is no such link.

In another example, the researchers identified genes that were expressed periodically, but with differing cycle times, during the cell cycle of yeast. And they uncovered groups of human gut bacteria that proliferate or decline when diet is altered, finding that some bacteria are abundant precisely when others are not. Finally, they identified which performance factors for baseball players are most strongly correlated to their salaries.

Reshef cautions that finding statistical correlations is only the start of understanding. “At the end of the day you'll need an expert to tell you what your data mean”, he says. “But filtering out the junk in a data set in order to allow someone to explore it is often a task that doesn't require much context or specialized knowledge.”

He adds that “our hope is that this tool will be useful in just about any field that is amassing large amounts of data.” He points to genomics, proteomics, epidemiology, particle physics, sociology, neuroscience, earth and atmospheric science as just some of the scientific fields that are “saturated with data”.

Beyond this, the method should be valuable for ‘data mining’ in sports statistics, social media and economics. “I could imagine financial companies using tools like this to mine the vast amounts of data that they surely keep, or their being used to track patterns in news, societal memes, or cultural trends”, says Reshef.

One of the big remaining questions is about what causes what: the familiar mantra of statisticians is that “correlation does not imply causality”. People who floss their teeth live longer, but that doesn’t mean that flossing increases your lifespan.

“We see the issue of causality as a potential follow-up”, says Reshef. “Inferring causality is an immensely complicated problem, but has been well studied previously.”

Biostatistician Raya Khanin of the Memorial Sloan-Kettering Cancer Center in New York acknowledges the need for a technique like this but reserves judgement about whether we yet have the measure of MIC. “I’m not sure whether its performance is as good as and different from other measures”, she says.

For example, she questions the findings about the mutual exclusivity of some gut bacteria. “Having worked with this type of data, and judging from the figures, I'm quite certain that some basic correlation measures would have uncovered the same type of non-coexistence behavior,” she says.

Another bioinformatics specialist, Simon Rogers of the University of Glasgow in Scotland, also welcomes the method but cautions that the illustrative examples are preliminary at this stage. Of the yeast gene linkages, he says “one would have to do more evaluation to see if they are biologically significant.”


References
1. Reshef, D. N. et al. Science 334, 1518–1524 (2011).

Monday, December 12, 2011

Darwin not guilty: shock verdict

Here’s the pre-edited version of my latest news story for Nature. There’s somewhat more to it than can all be fitted in here, or indeed that I am at liberty to say. It seems that some may still find the authors’ reconstruction of the shipping route of Wallace’s letter open to question, even if they accept (as it seems all serious historians do) that the ‘conspiracy theory’ is bunk.

There was also more to Wallace’s letter to Hooker in September 1858 than I’ve quoted here. He said:
“I cannot but consider myself a favoured party in this matter, because it has hitherto been too much the practice in cases of this sort to impute all the merit to the first discoverer of a new fact or a new theory, & little or none to any other party who may, quite independently, have arrived at the same result a few years or a few hours later.
I also look upon it as a most fortunate circumstance that I had a short time ago commenced a correspondence with Mr. Darwin on the subject of “Varieties,” since it has led to the earlier publication of a portion of his researches & has secured to him a claim of priority which an independent publication either by myself or some other party might have injuriously affected, — for it is evident that the time has now arrived when these & similar views will be promulgated & must be fairly discussed.”

So whatever one thinks of the evidence put forward here, the notion that Darwin pilfered from Wallace really is a non-starter. Not that its advocates will take the slightest notice.
_____________________________________________________
Charles Darwin was not a plagiarist, according to two researchers who claim to have refuted the idea that he revised his own theory of evolution to fit in with that proposed in a letter Darwin received from the naturalist Alfred Russel Wallace.

This accusation has received little support from serious historians of Darwin’s life and work, who concur that Darwin and Wallace came up with the theory of evolution by natural selection independently at more or less the same time. But it has proved hard to dispel, thanks to some vociferous advocates of Wallace’s claim to primacy of the theory of evolution by natural selection.

The charge rests largely on a suggestion that in 1858 Darwin sat on a letter sent from Indonesia by Wallace, including an essay in which he described his ideas, for about two weeks before passing it on to the geologist Charles Lyell as Wallace requested.

After inspecting historical shipping records, John van Wyhe and Kees Rookmaaker, curators of the archives Darwin Online and Wallace Online and historians of science at the National University of Singapore, claim that Wallace’s letter and essay could not in fact have arrived sooner than 18 June, the very day that Darwin told Lyell he had received it [1].

Darwin had begun work on the text that became On the Origin of Species, published in 1859, as early as the 1840s, but had dallied over it. In his letter to Lyell he admitted rueing his own dilatoriness. “I never saw a more striking coincidence”, he said. “If Wallace has my M.S. sketch written out in 1842 he could not have made a better abstract.”

In the event – but not without misgivings about whether it was the honourable thing – Darwin followed the suggestion of Lyell and his friend Joseph Hooker that he write up his own views on evolution so that the papers could be presented side by side to the Linnean Society in London. This took place on 1 July, but Darwin wasn’t present, for he was still devastated by the death of his youngest son from scarlet fever three days earlier.

The controversy about attribution would probably have mystified both Darwin and Wallace, who remained mutually respectful throughout their lives. Darwin was even ready to relinquish all priority to the idea of natural selection after seeing Wallace’s essay, until Lyell and Hooker persuaded him otherwise. And in September 1858 Wallace wrote to Hooker that “It would have caused me such pain & regret had Mr. Darwin’s excess of generosity led him to make public my paper unaccompanied by his own much earlier & I doubt not much more complete views on the same subject.”

Although most historians have accepted that Darwin’s account of the events was honest, others have argued that Wallace’s letter, sent from the island of Ternate in the Moluccas, arrived at Darwin’s house in Down, in southern England, several weeks earlier than 18 June. They suggest that Darwin lied about the date of receipt because he used the intervening time to revise his own ideas in the light of Wallace’s.

The most extreme accusation came in a 2008 book The Darwin Conspiracy: Origins of a Scientific Crime by the former BBC documentary-maker Roy Davies. “Ideas contained in Wallace’s Ternate paper were plagiarised by Charles Darwin”, wrote Davies, who called this “a deliberate and iniquitous case of intellectual theft, deceit and lies.” Others have claimed that Darwin wrote to Hooker on 8 June saying that he had found a ‘missing keystone’ to his theory, and allege that he took this from Wallace’s essay.

“Many conspiracy theorists have made hay because of this unexplained date mystery”, says van Wyhe. He and Rookmaaker have now painstakingly retraced the tracks of the letter. They have discovered the sailing schedules of mail boats operated by Dutch firms in what was then the Dutch East Indies, and claim that these indicate the letter could not have left Ternate sooner than about 5 April. It was carried via Jakarta, Singapore and Sri Lanka, and then overland from Suez to Alexandria. “We found that Wallace’s essay travelled across Egypt on camels”, says van Wyhe. “That was not known before, and it’s a rather charming image to think of this essay that will change the world swaying on the back of a camel for two days.”

The researchers say that the letter was then carried by boat to Gibraltar and Southampton in England, arriving on 16 June. It was taken by train to London and then on to Down, arriving on the morning of the 18th.

“I'm not sure there really ever has been a controversy over this within the history of science community”, says evolutionary biologist John Lynch of Arizona State University, who has written extensively on cultural responses to evolutionary theory. He says that the claims of plagiarism “have had marginal, if any, influence - the evidence has failed to convince most readers.”

The story “has always seemed unlikely to me given what we know about Darwin’s generally kind and tolerant personality”, agrees geneticist Steve Jones of University College, London, whose 1999 book Almost Like a Whale was an updated version of the Origin of Species.

But van Wyhe says that “these conspiracy stories are very widely believed. Thousands of people have heard that something fishy happened between Darwin and Wallace. I hear these stories very often when I give popular lectures.”

Historian of science James Lennox of the University of Pittsburgh says that “this is an important piece of evidence for Davies’ claim of deceit on Darwin’s part. I think that claim has been undermined.”

But Lennox adds that he doesn’t think it will close the ‘controversy’. “For a variety of different motives, there will, I fear, always be people who see it as their mission to attack Darwin's character as a way of undermining his remarkable scientific achievements.”

References
1. van Wyhe, J. & Rookmaaker, K., Biol. J. Linn. Soc. 105, 249-252 (2012).

Saturday, December 10, 2011

Creativ thinking

Here’s my latest Critical Scientist column in the Guardian, published today. It now seems that this back page of the Saturday issue is going to be reshuffled for various reasons, so it isn’t clear what the column’s fate will be in the New Year. Enjoy/criticize/excoriate it while you can.
_______________________________________________________________________
The kind of idle pastime that might amuse physicists is to imagine drafting Einstein’s grant applications in 1905. “I propose to investigate the idea that light travels in little bits”, one might say. “I will explore the possibility that time slows down as things speed up” goes another. Imagine what comments those would have elicited from reviewers for the German Science Funding Agency, had such a thing existed. Instead, Einstein just did the work anyway while drawing his wages as a Technical Expert Third Class at the Bern Patent Office. And that’s how he invented quantum physics and relativity.

The moral seems to be that really innovative ideas don’t get funded – indeed, that the system is set up to exclude them. To wring research money from government agencies, you have to write a proposal that gets assessed by anonymous experts (“peer reviewers”). If its ambitions are too grand or its ideas too unconventional, there’s a strong chance it’ll be trashed. So does the money go only to ‘safe’ proposals that plod down well-trodden avenues, timidly advancing the frontiers of knowledge a few nanometres?

There’s some truth in the accusation that grant mechanisms favour mediocrity. After all, your proposal has to specify exactly what you’re going to achieve. But how can you know the results before you’ve done the experiments, unless you’re aiming to prove the bleeding obvious?

To address this complaint, the US National Science Foundation has recently announced a new scheme for awarding grants. From next year – if Congress approves – the Creative Research Awards for Transformative Interdisciplinary Ventures (CREATIV – oh, I get it) will have $24 million to give to “unusually creative high-risk/high-reward interdisciplinary proposals.” In other words, it’s looking for really new ideas that might not work, but which would be massive if they do.

As science funding goes, $24m is peanuts – the total NSF pot is $5.5 bn. And each application is limited to $1m. But this is just a pilot project; more might follow. The real point is that CREATIV has been created at all: it could be interpreted as an admission that the NSF has previously failed to support innovation. Needless to say, that’s not how the NSF would see it. They would argue that the usual funding mechanisms have blind spots, especially when it comes to supporting research that crosses disciplinary boundaries.

This is a notorious problem. Talking up the importance of “interdisciplinarity” is all the rage, but most funds are still marshalled within conventional boundaries – medicine, say, or particle physics – so that if you have an idea for how to apply particle physics to medicine, each agency directs your grant request to the other one.

The problem is all the worse if you want to tackle a really big problem. To make a new drug you need chemists; to tackle Africa’s AIDS epidemic you will require not only drugs but the expertise of epidemiologists, sociologists, virologists and much else. The buzzword for really big solutions and technologies is “transformative” – the Internet is transformative, Viagra is not. This big-picture thinking is in vogue; the European Commission’s Future Emerging Technologies programme is promising to award €1 bn (now you’re talking) next year for transformational projects under the so-called Flagship Initiative.

Are schemes like CREATIV the way forward? Because the funding will be allocated by individual project managers rather than risking the conservatism of review panels, it could fall prey to cronyism. And who’s to say that those project managers will be any more broad-minded or perceptive? In the end, it’s a Gordian knot: only experts can properly assess proposals, but by definition their vision tends to be narrow. It’s good that CREATIV acknowledges the problem, but it remains to be seen if it’s a solution. Like movie-making or publishing, it’ll need to accept that there will be some duds. It’s a shame there aren’t more scientific problems that can be solved with pen, paper, and a patent clerk’s pay packet.

Saturday, December 03, 2011

Science criticism

My first of an undisclosed number of columns in the Saturday Guardian has appeared today. And got a shedload of online feedback.

I’m grateful for all these comments, good and bad (and indifferent), for giving me some sense of how the aims of this column are being perceived. It would be as premature for me to tell you what it is going to do at this point, as it is for anyone else to judge it. This is an experiment. We don’t know yet quite where it will go (that’s how it is with experiments, right?). No doubt feedback will have an influence on that. But I think I’d better make a few things more clear than I could in the piece itself:

1. This isn’t going to be a science-knocking column. Wouldn’t that be bizarre? Like appointing a theatre critic who hates theatre. (Someone, I am sure, will now come up with a few candidates for that description.) Theatre, art and literary critics almost inevitably think that theatre, art and literature are the most wonderful things: essential, inspiring, and deeply life-affirming. It is precisely caring strongly about their subject that constitutes a necessary (if not sufficient) qualification for the job. Well, ditto here.

2. I’m not going to be peer-reviewing anyone’s work. It’s interesting that some of the comments still seem to evince a notion that this is the full extent of the meaningful evaluation of a piece of scientific work. Look at what Dorothy Nelkin brought to the discussion about DNA and genetics – in my view, important questions that were pretty much off the radar screen of most scientists working on those things. Sadly, the Guardian hasn’t got Dorothy Nelkin, though – it’s got me. She would never have done it for this kind of money.

3. But it’s not necessarily about taking scientists to task for what they do or don’t do or say – at least, not uniquely. I like the three definitions of “critic” in the Free Dictionary:
i. One who forms and expresses judgments of the merits, faults, value, or truth of a matter. [Mostly what peer reviewers are supposed to do, yes?]
ii. One who specializes especially professionally in the evaluation and appreciation of literary or artistic works: a film critic; a dance critic.
iii. One who tends to make harsh or carping judgments; a faultfinder. [Mostly bores and climate sceptics, yes?]

So (ii) then: I don’t see why it’s just ‘literary or artistic works’ that deserve ‘evaluation and appreciation’. Remember that critics praise as well as pillory (and in my view, the best ones always make an effort to find what is valuable in a work). The critic is also there to offer context, draw analogies and comparisons, point to predecessors. (The sceptic might here scoff “Oh yeah, very valuable in science – the predecessors of E=mc2?” To which my answer is here). I also feel that the best critics don’t try to tell you what to think, but just suggest things it might be worth thinking about.

4. Some of these folks will be disappointed – in particular, those who seem to think that the column is going to be concerned mainly with highlighting why science has lost its way, or ignores deep philosophical conundrums, or fails in its social duty. I really hope to be able to touch on some of those issues (that is, to consider whether they’re really true), and I have much sympathy with some of what Nicholas Maxwell has written. But my themes will generally be considerably less grand and more specific, perhaps even parochial. Weekly critics tend to review what’s just opened at the Royal Court, not the state of British theatre, right? Besides, it’s important that I’m realistic about what can be attempted (let alone achieved) in this format. Remember that this is a weekly column in a newspaper, not an academic thesis. I have 600 words, and then you get Lucy Mangan.

All we want to try for, really, is a somewhat different way of writing about science: not merely explaining who did what and why it will transform our lives (which of course it mostly doesn’t), but writing about science as something with its own internal social dynamics, methodological dilemmas, cultural pressures and drivers, and as something that reflects and is reflected by the broader culture. That’s what I have generally attempted to do in my books already. And I want to make it very clear that I don’t claim any great originality in taking this perspective. Many writers have done it before, and doubtless better. It’s just that there is rarely a chance to discuss science in this way in newspapers, where it is all too often given its own little geeks’ ghetto. Indeed, Ben Goldacre’s Bad Science was one of the first efforts that successfully broke that mould. What’s new(ish) is not the idea but the opportunity.

Friday, December 02, 2011

Diamond vibrations neither here nor there

Here’s the pre-edited version of my latest news story for Nature online.
_________________________________________________

Two objects big enough for the eye to see have been placed in a weirdly connected quantum state.

A pair of diamond crystals has been spookily linked by quantum entanglement by researchers working in Oxford, Canada and Singapore.

This means that vibrations detected in the crystals could not be meaningfully assigned to one or other of them: both crystals were simultaneously vibrating and not vibrating.

Quantum entanglement is well established between quantum particles such as atoms at ultra-cold temperatures. But like most quantum effects, it doesn’t usually survive either at room temperature or in objects large enough to see with the naked eye.

The team, led by Ian Walmsley of Oxford University, found a way to overcome both those limitations – demonstrating that the weird consequences of quantum theory don’t just apply at very small scales.

The result is “clever and convincing” according to Andrew Cleland, a specialist in the quantum behaviour of nanometre-scale objects at the University of California at Santa Barbara.

Entanglement was first mooted by Albert Einstein and two of his coworkers in 1935, ironically as an illustration of why quantum theory could not tell the whole story about the microscopic world.

Einstein considered two quantum particles that interact with each other so that their quantum states become interdependent. If the first particle is in state A, say, then the other must be in state B, and vice versa. The particles are then said to be entangled.

Until a measurement is made on one of the particles, its state is undetermined: it can be regarded as being in both states A and B simultaneously, known as a superposition. But a measurement ‘collapses’ this superposition into just one state or the other.

The trouble is, Einstein said, that if the particles are entangled then this measurement determines which state the other particle is in too – even if they have become separated by a vast distance. The effect of the measurement is transmitted instantaneously to the other particle, via what Einstein called ‘spooky action at a distance’. That can’t be right, he argued.
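The anticorrelation that troubled Einstein is easy to see numerically. Here is a minimal sketch of my own (an illustration, not part of the experiment): sample Born-rule measurement outcomes of the entangled state (|AB> + |BA>)/√2 and note that the two particles never turn up in the same state.

```python
import numpy as np

# Two-particle basis ordering: |AA>, |AB>, |BA>, |BB>.
# The entangled state (|AB> + |BA>)/sqrt(2):
psi = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2)

probs = np.abs(psi) ** 2                     # Born rule: outcome probabilities
rng = np.random.default_rng(0)
samples = rng.choice(4, size=1000, p=probs)  # simulated joint measurements

labels = ["AA", "AB", "BA", "BB"]
for k in samples:
    particle1, particle2 = labels[k]
    # Measuring particle 1 'collapses' particle 2 into the other state:
    assert particle1 != particle2
```

Of course, a classical pair of gloves posted to two addresses would show the same anticorrelation; it takes measurements in more than one basis (Bell tests) to rule out Einstein’s preferred explanation of pre-determined ‘hidden’ states.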

But it is, as countless experiments have since shown. Quantum entanglement is not only real but could be useful. Entangled photons of light have been used to transmit information in a way that cannot be intercepted and read without that being detectable – a technique called quantum cryptography.

And entangled quantum states of atoms or light can be used in quantum computing, where the superposition states allow much more information to be encoded in them than in conventional two-state bits.

But superpositions and entanglement are usually seen as delicate states, easily disrupted by random atomic jostling in a warm environment. This scrambling also tends to happen very quickly if the quantum states contain many interacting particles – in other words, for larger objects.

Walmsley and colleagues got round this by entangling synchronized atomic vibrations called phonons in diamond. Phonons – wavelike motions of many atoms, rather like sound waves in air – occur in all solids. But in diamond, the stiffness of the atomic lattice means that the phonons have very high frequencies and energies, and so are not usually excited even at room temperature.

The researchers used a laser pulse to stimulate phonon vibrations in two crystals 3 mm across and 15 cm apart. They say that each phonon involves the coherent vibration of about 10^16 atoms, corresponding to a region of the crystal about 0.05 mm wide and 0.25 mm long – large enough to see with the naked eye.

There are three crucial conditions for getting entangled phonons in the two diamonds. First, a phonon must be excited with just one photon from the laser’s stream of photons. Second, this photon must be sent through a ‘beam splitter’ which directs it into one crystal or the other. If the path isn’t detected, then the photon can be considered to go both ways at once: to be in a superposition of trajectories. The resulting phonon is then in an entangled superposition too.

“If we can’t tell from which diamond the photon came, then we can’t determine in which diamond the phonon resides”, Walmsley explains. “Hence the phonon is ‘shared’ between the two diamonds.”

The third condition is that the photon must not only excite a phonon; part of its energy must also be converted into a lower-energy photon, called a Stokes photon, that signals the phonon’s presence.

“When we detect the Stokes photon we know we have created a phonon, but we can’t know even in principle in which diamond it now resides”, says Walmsley. “This is the entangled state, for which neither the statement ‘this diamond is vibrating’ nor ‘this diamond is not vibrating’ is true.”

To verify that the entangled state has been made, the researchers fire a second laser pulse into the two crystals to ‘read out’ the phonon, from which the pulse draws extra energy. All the necessary conditions are satisfied only very rarely during the experiment. “They have to perform an astronomical number of attempts to get a very finite number of desired outcomes”, says Cleland.

He doubts that there will be any immediate applications, partly because the entanglement is so short-lived. “I am not sure where this particular work will go from here”, he says. “I can’t think of a particular use for entanglement that lasts for only a few picoseconds [10^-12 s].”

But Walmsley is more optimistic. “Diamond could form the basis of a powerful technology for practical quantum information processing”, he says. “The optical properties of diamond make it ideal for producing tiny optical circuits on chips.”

1. K. C. Lee et al., Science 334, 1253-1256 (2011).

Thursday, December 01, 2011

Beautiful labs


Here is my latest Crucible column for the December issue of Chemistry World.
_________________________________________________________________

Fresh from visiting some science departments in China, I figure that, in appearance, these places don’t vary much the world over. They have the same pale corridors lined with boxy offices or neutral-hued, cluttered lab spaces; the same wood-clad lecture theatres with their raked seating and projection screens (few sliding blackboards now survive); the same posters of recent research lining the walls. They are unambiguously work places: functional, undemonstrative, bland.

Yet people spend their working lives here, day after day and sometimes night after night. Doesn’t all this functionalist severity and gloom stifle creativity? Clearly it needn’t, but increasingly we seem to suspect that conducive surroundings can offer stimulus to the advancement of knowledge. When the Wellcome Wing of the biochemistry department at Cambridge was designed and built in the early 1960s, its rectilinear modernist simplicity realised in concrete and glass was merely the order of the day, and celebrated by some (notably the influential architectural critic Nikolaus Pevsner) for its precision [1]. Today, stained and weathered, it fares less well, engendering that feeling I get from my old copy of Cotton & Wilkinson that learning chemistry is a dour affair.

Yet no longer are labs and scientific institutions built just to place walls around the benches and fume cupboards. Increasingly, for example, their design takes account of how best to encourage researchers to engage in informal discussions over coffee: comfy seating, daylight and blackboards are supplied to lubricate the exchanges. The notion that all serious work has to take place out of sight behind closed doors has yielded to the advent of open atria and glass walls, exemplified by the new biochemistry laboratory at Oxford, designed by Hawkins/Brown, which opened three years ago at a cost of nearly £50m. Not only does this space take its cue from the open-plan office, but it also follows the corporate habit of adorning the interior with expensive artworks, such as the flock of resin birds that hang suspended in the atrium. Some might grumble that the likes of Hans Krebs and Dorothy Hodgkin did not seem to need art around them to think big thoughts – but the department’s Mark Sansom has eloquently defended the value of the project’s artistic component thus: “if you have a greater degree of visual literacy, you reflect more on both the way you represent things, and also the way that may limit the way you think about them” [2]. Besides, where would you rather work?

The watchword for this new approach to laboratory design is accessibility: physically, visually, intellectually. Jonathan Hodgkin in Oxford’s biochemistry department explains that, in making art a part of the new building’s design, “part of our aim is to humanize the image of science for the public" [2]. Similarly, Terry Farrell, who was behind the dramatic (and controversial) redesign of the Royal Institution in London, says that his aim was to reconfigure the place “not as a museum but as a living, working, lively and engaging institution, which will inspire an enthusiasm for science in future generations” [3]. Even someone like me who loved the dusty, crammed warren that was the old RI has to admire the result, although the compromises to the research lab space contributed to the internal tensions of the project.

Or take the striking glass facades of the new Frick Chemistry Laboratory at Princeton, whose chief architect Michael Hopkins says that "We wanted to inspire the younger students by letting them see the workings of the department.” A common theme is to use the design to echo the science, as for example in the double-helical staircase of the European Molecular Biology Laboratory’s Advanced Training Centre in Heidelberg.

However, not all beautiful labs are new ones, a point illustrated in a recent list of “the 10 most beautiful college science labs” compiled by the US-based OnlineColleges.net. While some of these have been selected for their sleek contemporary feel – the Frick building is one, and the stunning new Physical Sciences Building that abuts Cornell’s neoclassical Baker Laboratory is another – others are more venerable. Who could quibble, for example, with the inclusion of Harvard’s Mallinckrodt Laboratory, an imposing neoclassical edifice built in the 1920s and home to the chemistry department? And then there is the Victorian gothic of the ‘Abbot’s kitchen’ in the Oxford inorganic chemistry labs, a delightful feature that I shamefully overlooked on many a tramp along South Parks Road to coax more crystals out of solution. Or the ivy-coated mock-gothic of Chicago’s Ryerson Physical Laboratory, where Robert Millikan deduced the electron’s charge.

Among these ‘beautiful labs’, chemistry seems to be represented disproportionately. Have we chemists perhaps a stronger aesthetic sensibility?

References
1. M. Kemp, Nature 395, 849 (1998).
2. G. Ferry, Nature 457, 541 (2009).
3. T. Farrell, Interiors and the Legacy of Postmodernism (Laurence King, London, 2011).

Monday, November 28, 2011

Building a better foam




I have a story on Nature’s blog about a nice new paper on ‘minimal foams’, which finally reports experimental evidence for the Weaire-Phelan foam, proposed several years ago as a more energetically favourable structure than the one Kelvin postulated in 1887. Denis Weaire has written a nice (but goodness me, pricey) account of this so-called Kelvin Problem. And I get to show last year’s holiday snaps in Beijing…
_________________________________________________________________
Physicists working at Trinity College in Dublin, Ireland, have finally made the perfect foam. Whereas most Dubliners might consider that to be the head of a pint of Guinness, Denis Weaire and his coworkers have a more sophisticated answer.

‘Perfect’ here means the lowest-energy configuration of packed bubbles of equal size. This is a compromise. Making a soap film costs energy proportional to the film’s surface area. But the many interlocking faces of an array of polyhedral bubbles in a foam also have to be mechanically stable. The Belgian scientist J. A. P. Plateau calculated in the nineteenth century that three soap films are mechanically stable when they meet at angles of 120°, whereas four films meet at the tetrahedral angle of about 109.5°.
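Plateau’s tetrahedral angle is simply the angle between two lines drawn from the centre of a regular tetrahedron to its corners; their dot product is -1/3, so the angle is arccos(-1/3). A quick check (my own sketch, not from the paper):

```python
import math

# Angle between two centre-to-vertex directions of a regular tetrahedron:
# the unit vectors have dot product -1/3, so the angle is arccos(-1/3).
tetrahedral = math.degrees(math.acos(-1 / 3))
print(round(tetrahedral, 2))  # 109.47
```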

So what bubble shape minimizes the total surface area while (more or less) satisfying Plateau’s rules? That’s essentially the same as asking what shape balloons, or any squashy spheres, will adopt when squeezed together. Scientists, including the French naturalist Georges Buffon, have pondered that for centuries, using lead shot and garden peas. The Irish scientist Lord Kelvin thought he had the answer in 1887: the ‘perfect foam’ is one in which the cells are truncated octahedra, with eight hexagonal faces and six square ones – provided that the faces are a little curved to better fit Plateau’s rules.

Kelvin’s solution was long thought to be optimal, but there was no formal proof. Then in 1994 Weaire and his colleague Robert Phelan found a better one. It wasn’t so elegant – the structure has a repeating unit of eight polyhedra, six of them with 14 faces and two with 12, their faces irregular pentagons plus a few hexagons, and again slightly curved (see first pic above). This has 0.3 percent less surface area than Kelvin’s foam.
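For a rough sense of the numbers, the standard formulas for a flat-faced truncated octahedron of edge a (surface area (6 + 12√3)a², volume 8√2 a³) give its dimensionless surface cost A/V^(2/3). This sketch of mine compares that with a single spherical bubble, the unattainable ideal for a space-filling cell; note that the 0.3 percent Kelvin-versus-WP gap refers to the numerically relaxed, slightly curved structures, which a flat-faced formula can’t capture.

```python
import math

# Flat-faced truncated octahedron (Kelvin's cell before curving),
# taking edge length a = 1 and using the standard area/volume formulas.
area = 6 + 12 * math.sqrt(3)        # ≈ 26.78
volume = 8 * math.sqrt(2)           # ≈ 11.31

kelvin_cost = area / volume ** (2 / 3)     # ≈ 5.31, dimensionless A/V^(2/3)
sphere_cost = (36 * math.pi) ** (1 / 3)    # ≈ 4.84, a lone spherical bubble
print(round(kelvin_cost, 3), round(sphere_cost, 3))
```

The gap between the two figures is the price of filling space without leaving holes; the Weaire-Phelan structure shaves a sliver more off the Kelvin value.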

But does it really exist? The duo found no definitive evidence of their ideal foam in experiments (conducted with washing-up liquid). Now there is. The key was to find the right container. Normal containers have flat walls, which the Weaire-Phelan (WP) foam won’t sit comfortably against. But physicist Ruggero Gabbrielli from the University of Trento figured that a container with walls shaped to fit the WP foam might encourage it to form. He has collaborated with Weaire and his colleagues, along with mathematician Kenneth Brakke at Susquehanna University in Pennsylvania, to design and make one out of plastic.

When the researchers filled this container with equal-sized bubbles, they found that the six layers of about 1,500 bubbles ordered themselves into the WP structure (see second pic above). They describe their results in a paper to be published in Philosophical Magazine Letters.

This isn’t actually the first time that the WP foam has been made. But the previous example was built by hand, one cell at a time, from girders and plastic sheets, to comprise the walls of the iconic Olympic Swimming Stadium in Beijing (see third pic above).

Christmas reading

John Whitfield’s new book about reputation, People Will Talk, is out, and I am, to be honest, envious – and I haven’t even read it yet. First, John has picked such a great and timely topic. And second, I know that he will have covered it brilliantly. Yes, this is a shameless plug for my pals, but I really want John’s book to get the attention it deserves, and not get lost among all the pre-Christmas celebrity memoirs.

And while I’m plugging, look out for the debut novel Random Walk by Alexandra Claire, published by Gomer. I’m only part of the way through, but enjoying it for much more profound reasons than the fact that it quotes from my Critical Mass at the beginning (and not just because I’m a Cymruphile either).

Thursday, November 24, 2011

The sun and moon in Italy


Now that it’s happened, I can say that there’s definitely something uniquely challenging about having your fiction translated. The Sun and Moon Corrupted has become, in Italian, La Città del sole e della Luna (The City of the Sun and the Moon). I can live with that change, not least because I like the resonance with Tommaso Campanella’s visionary early seventeenth-century work The City of the Sun, which becomes somewhat apt. But how have the voices translated? In particular (this Italian illiterate wonders), how have they dealt with the eccentric English of Karl Neder and his fellow Eastern Europeans? How does one translate the Brixton riots and the Wapping news era – the whole oppressive gloom of the middle Thatcher years – to Italians?

Whatever the case, Edizioni Dedalo have done a nice job on the superficial level to which I am constrained: I like the faux-naif cover. I just hope there’s still enough disposable income in Italy for people to read it.

Friday, November 18, 2011

Surprise prize

The Royal Society Winton Prize for science books was awarded last night. I have written a piece on it for Prospect’s blog. Here it is for convenience.
_________________________________________________________________
Part of the pleasure of the presentation of the Royal Society Winton Prize for Science Books on Thursday night was that it was happening at all. Having lost its corporate sponsor (Rhône-Poulenc, subsequently merged into Aventis) after 2006, the prize was nobly supported by the Royal Society alone for the past four years, but looked increasingly in danger of folding. Now it has been rescued by the British investment firm Winton Capital Management, which has agreed to back it for five years. So popular science still has its Man Booker.

The winning title, Gavin Pretor-Pinney’s The Wavewatcher’s Companion (Bloomsbury), was a surprise. In both cover and content, it looks like a sequel to Pretor-Pinney’s previously successful The Cloudspotter’s Guide, but it won over the judges with what Richard Holmes, chair of the judging panel, called “old-fashioned charm and wit”. Like many of the best science books, it doesn’t at first seem to be about science at all, but is a celebration of the ubiquity of waves of all sorts, from sonar to football crowds.

‘Wit’ seems to have been a valuable feature. Holmes commented on how often humour was employed in the submitted books. That’s encouraging – not because science books have previously been dour, but because they have often had a tendency towards leaden adolescent humour of the “imagine finding that in your sandwich!” variety. This sort of thing wouldn’t have passed muster with the erudite Holmes, whose The Age of Wonder (2009 winner of the prize) was, among many other praiseworthy things, a model of the wry footnote.

But another issue bothered some of the attendees. As the six white male shortlisted authors sat on the stage, broadcaster Vivienne Parry asked “Where are all the girls?” (Tucked up in bed, one was tempted to reply, but you could see her point.) The (typically gender-balanced) judges confessed that this had been a serious concern, but one that they could do nothing about. It’s even worse when you look at the prize’s history: only one woman has ever won it (anthropologist Pat Shipman in 1997), and then as a co-author. Parry is herself one of the very few women to have been shortlisted.

A glib answer is that this just reflects the lack of women in science. But that isn’t the case for science journalism and publishing. It is mercifully free of the male-domination still evident in the lab: at least half of the editorial staff of Nature are women, and this is fairly representative. Plenty of female science writers and scientists have authored books. And the imbalance is all the more troubling when compared to the strong female showing in other non-fiction literary awards such as the Samuel Johnson. So “what’s that about?”, asked science journalist Ian Sample, also on the science book prize shortlist, in response to Parry’s question. No one seemed to know.

Saturday, November 05, 2011

Identity crisis

This is not me, though I kind of wish it was.

Everyone (except, I am willing to bet, my daughters) has namesakes, but there’s genuine scope for confusion here, as I’ve just discovered. There’s also a young medical writer called Philip Ball. I think someone should book us all to talk on the same platform.

PS Funnily enough, I've just discovered that it goes further. This nice discussion of my appearances at the Edinburgh Book Festival claims to direct the reader to my "surprising" web site - which is probably even more of a surprise when, with a double "l" in my name, it in fact takes you here.

Tuesday, November 01, 2011

Who is human?

Back from two weeks in China, with some things to catch up on. I have a feature article on graphene in the latest issue of the BBC's science magazine Focus - not available online, sadly. And an article on pattern and shape in the glossy lifestyle & sports magazine POC - also not (yet) available on the web, but I'll put the piece on my website shortly. And here is a book review I did for the Observer on 7 October.
__________________________________________________________

What It Means to be Human:
Reflections from 1791 to the Present

Joanna Bourke
Virago, 2011
ISBN 978-1-84408-644-3

“Are women animals?”, asked a correspondent to the Times in 1872 who described herself only as “An Earnest Englishwoman.” Her point was not that women should be regarded as less than fully human, but that they already were – to such a degree that they would have more rights if they could at least be granted the same status as cats, dogs and horses. The law could be more punitive to a man who ill-treated his horse than to one who murdered his wife.

Inmates at Guantánamo Bay made precisely the same case. Noticing a dog in an air-conditioned kennel next to Camp X-Ray, a British detainee said to the guards “I want his rights” – only to be told “That dog is a member of the US army.” Clive Stafford Smith, representing the inmates, declared that “it would be a huge step for mankind if the judges gave our clients the same rights as the animals.”

As these cases illustrate, historian Joanna Bourke’s survey is not so much about the boundaries of humankind as about the way in which some humans have systematically denied full personhood to others, particularly women, children and other (generally non-European) races and cultures. She would have helped her argument by keeping that distinction clearer. When, for example, she remarks apropos of slavery that it questions “who is truly human and who is merely ‘property’”, only to follow with the suggestion that “the claim that some humans are property rather than true ‘persons’ is still rampant”, the confusion muddies the point.

Although the forms of denigration that Bourke considers are certainly ‘dehumanizing’, they don’t usually challenge biological or species identity. Rather, they erect hierarchies of human worth, development, and supposed intellectual and spiritual capacity. All the same, her well-made thesis is that this tendency has commonly pushed the oppressed group towards the realm of beasts, whether via the bird-like ‘twittering’ of women or the ‘simian’ countenance of African slaves.

It is an ugly spectacle to see with what insufferable smugness and pseudoscientific justification these judgements have been repeatedly made by white Western males. And of course it would be nonsense to pretend that we all know better now. Yet there is something a little paralysing about this detailed exposé of the obviously pernicious. It is not to belittle the evils of slavery, racism, female oppression and the Holocaust to say that they are, in themselves, scarcely news.

There is also a strong risk of presentism in all this: judging the past as if it were the present. While it is no response to protest that no one knew any better in those days (not least because plenty of women and slaves certainly did), one is left wondering how to contextualize Darwin’s references to “savages… on [a] par with Monkeys” and his chauvinistic hierarchy of races relative to, say, Thackeray’s or Carlyle’s hysterical aversion to African-Americans. It is surely an oversight that nothing is made of Darwin’s anti-slavery motivation in showing that humankind is truly one species, given how thoroughly this was recently documented by Adrian Desmond and James Moore.

The kind of exclusivity that Bourke explores is at least as old as slavery itself, which occasionally means that one feels the absence of the long view. The nastiness and bigotry on display here would be found in spades in the Middle Ages or ancient Greece, albeit differently nuanced. Bourke shows how fears of animalization in the use of animal tissue in medicine have remained more or less unchanged from Jenner’s cowpox vaccinations in 1796 to xenografts of animal organs today. But it seems a shame not to consider the same themes in Thomas Shadwell’s play The Virtuoso (1676), where he satirized the animal-to-human transfusion experiments of the Royal Society. And when one critic of vaccination worried that it might induce ladies to “receive the embraces of the bull”, there are significant echoes of the legendary coupling of Pasiphae and Minos’s beautiful bull to produce the monstrous Minotaur.

But within the scope that Bourke has set herself, she has found some extraordinary material, such as the rejuvenation experiments of Serge Voronoff in the 1920s. These involved placing slices of simian testicle inside a man’s scrotum under local anaesthetic. An analogous anti-aging procedure for women was harder to arrange, but in any event deemed less important (not everything stays the same, then). Women did, however, worry about receiving the advances of septuagenarians whose renewed sexual vigour was said to be “abnormal both in degree and character”.

No wonder it is an embarrassment to endocrinologists that this is how their field began, although I didn’t need to be told that twice in the same chapter. Such repetition is not the only evidence of some loose editing. Lapses into the gnomic wink-wink traits of literary theory are mercifully rare, but to describe molecular biologist James Watson as a “leading Darwin scholar” is eccentric at best. Perhaps that’s part and parcel of the neglect of modern genomics, the most egregious omission in the book.

Yet if the narrative is patchy, it is more than a collection of historical curiosities. Bourke’s critique of the concept of human rights opens an important debate on a complacent ideal, while her cross-examination of animal welfare should give all parties pause for thought. And she is quite right to say that modern biomedical science genuinely does now complicate the definition of humanity in ways that we are ill equipped, ethically and philosophically, to confront.