Tuesday, July 31, 2012

Climate conversion

I have a piece in the Guardian online about this paper from Richard Muller that is causing so much fuss, though it says nothing new and hasn’t even passed peer review yet (and might not). Actually my piece is not really about the paper itself, which is discussed elsewhere, but the question of scientists revising their views (or not).

I suspect one could publish a piece on the Guardian’s Comment is Free that read simply “climate change”, and then let them get on with it. There are a few comments below my piece that relate to the article, but they quickly settle down into yet another debate among themselves about whether climate change is real. Hadn’t they all exhausted themselves in the 948 comments following Leo Hickman’s other piece on this issue? But there’s some value in it, not least in sampling the range of views that non-scientist climate sceptics hold. I don’t mean that sarcastically – it seems important to know how all the scepticism justifies itself. Disheartening, sure, but useful.

________________________________________________________________

It’s tempting to infer from the reports of University of California physicist Richard Muller’s conversion that climate sceptics really can change their spots. Analyses by Muller’s Berkeley Earth Surface Temperature project, which have been made publicly available, reveal that the Earth’s land surface is on average 1.5 °C warmer than it was when Mozart was born, and that, as Muller puts it, “humans are almost entirely the cause”. He says that his findings are even stronger than those of the Intergovernmental Panel on Climate Change, which presents the consensus of the climate-science community that most of the warming in the past half century is almost certainly due to human activities. “Call me a converted skeptic”, says Muller in the New York Times.

Full marks for the professor’s scientific integrity, then. But those of us who agree with the conclusions of nearly every serious climate scientist on the planet shouldn’t be too triumphant. Muller was never your usual sceptic, picking and choosing his data to shore up an ideological position. He was sceptical only in the proper scientific sense of withholding judgement until he felt persuaded by the evidence.

Besides, Muller already stated four years ago that he accepted the consensus view – not because everyone else said so, but because he’d conducted his own research. That didn’t stop him from pointing out the (real) flaws with the infamous ‘hockey stick’ graph of temperature change over the past millennium, nor from accusing Al Gore of cherry-picking facts in An Inconvenient Truth.

In one sense, Muller is here acting as a model scientist: demanding strong evidence, damning distortions in any direction, and most of all, exemplifying the Royal Society’s motto Nullius in verba, ‘take no one’s word for it.’ But that’s not necessarily as virtuous as it seems. For one thing, as the Royal Society’s founders discovered, you have to take someone’s word for some things, since you lack the time and knowledge to verify everything yourself. And as one climatologist said, Muller’s findings only “demonstrate once again what scientists have known with some degree of certainty for nearly two decades”. Wasn’t it verging on arrogant to have so doubted his peers’ abilities? There’s a fine line between trusting your own judgement and assuming everyone else is a blinkered incompetent.

All the same, Muller’s self-confessed volte-face is commendably frank. It’s also unusual. In another rare instance, James Lovelock was refreshingly insouciant when he recently admitted that climate change, while serious, might not be quite as apocalyptic as he had previously forecast – precisely the kind of doom-mongering view that fuelled Muller’s scepticism. There’s surely something in Lovelock’s suggestion that being an independent scientist makes it easier to change your mind – the academic system still struggles to accept that getting things wrong occasionally is part of being a scientist.

But the problem is as much constitutional as institutional. Despite their claim that evidence is the arbiter, scientists rarely alter their views in major ways. Sure, they are often surprised by their discoveries, but on fundamental questions they are typically trenchant. The great astronomer Tycho Brahe never accepted the Copernican cosmos, Joseph Priestley never renounced phlogiston, Einstein never fully accepted quantum theory. Most great scientists have carried some obsolete convictions to the grave, which is why Max Planck claimed that science advances one funeral at a time.

This sounds scandalous, but actually it’s useful. Big questions in science are rarely resolved at a stroke by transparent experimental results. So they require vigorous debate, and the opposing views need resolute champions. Richard Dawkins and E. O. Wilson are currently locking horns about the existence of group selection in Darwinian evolution precisely because the answer is so far from obvious. I’d place money on neither of them ever recanting.

The fact is that most scientists seek not to convert themselves but to convert others. That’s fair enough, for it’s those others who can most objectively judge who has the best case.

Could this mean we actually need climate sceptics? Better to say that we need to subject both sides of the debate to rigorous scientific testing. Just as Muller has done.

Friday, July 27, 2012

Political interference

I’ve a mountain of stuff to put up here after a holiday. For starters, here’s the pre-edited version of an editorial for last week’s issue of Nature. I mention here in passing an opinion piece by Charles Lane of the Washington Post, but couldn’t sink my teeth into it as much as I’d have liked. It is breathtaking what passes as political commentary in the right-wing US media. Lane is worried that US social scientists have an unduly high proportion of Democrats. As I say below, that’s true for US academia generally. To Lane, this means there is a risk of political bias (so that social science is dangerous). Needless to say, there is quite a different interpretation that one might place on the fact that a majority of intelligent, educated Americans are liberals.

But the truly stupid part of his argument is that “Politicization was a risk political scientists accepted when they took government funding in the first place.” No one, Lane trumpets, has offered any counter-argument to that, “so I’ll consider that point conceded.” He’d do better to interpret the lack of response as an indication of the asinine nature of the assertion. Basically he is saying that all governments may reserve the right to employ the methods of dictatorship, imposing censorship and restricting academic freedoms. So if Congress acts like the Turkish government, what are those damned academics whining about? This is the thing about US right-wingers that just leaves us Europeans scratching our heads: they seem to believe that government is a necessary evil that should interfere as little as possible, unless that interference is based on right-wing ideology (for example, by tampering with climate research). Perhaps there’s nothing surprising about that view in itself, though; what’s weird is how blind those who hold it are to its inconsistency.

______________________________________________________________

A fundamental question for democracy is what to submit to the democratic process. The laws of physics should presumably be immune. But should public opinion decide which science gets studied, or at least funded? That’s the implication of an amendment to the US National Science Foundation’s 2013 spending bill approved by the House of Representatives in May. Proposed by Republican Jeff Flake, it would prevent the NSF from funding political science, for which it awarded about $11m in grants this year. The Senate may well squash the amendment, but it’s deeply concerning that it got so far. Flake was hoping for bigger cuts to the NSF’s overall budget, but had to settle for an easier target. He indulged in the familiar trick in the US Congress of finding research with apparently obscure or trivial titles and parading it as a waste of taxpayers’ money.

One can do this in any area of science. The particular vulnerability of the social sciences is that, being less cluttered with technical terminology, they seem superficially easier for the lay person to assess. As social scientist Duncan Watts of Microsoft Research in New York has pointed out, “everyone has experience being human, and so the vast majority of findings in social science coincide with something that we have either experienced or can imagine experiencing”. This means the Flakes of this world have little trouble proclaiming such findings obvious or insignificant.

Part of the blame must lie with the practice of labelling the social sciences ‘soft’, which too readily translates as woolly or soft-headed. Because they deal with systems that are highly complex, adaptive and not rigorously rule-bound, the social sciences are among the hardest of disciplines, both methodologically and intellectually. What is more, they suffer because their findings do sometimes seem obvious. Yet equally, the “obvious”, common-sense answer may prove quite false when subjected to scrutiny. There are countless examples, from economics to traffic planning, which is one reason why the social sciences probably unnerve some politicians used to making decisions based not on evidence but on intuition, wishful thinking and an eye on the polls.

What of the critics’ other arguments against publicly funded political science? They say it is more susceptible to political bias; in particular, more social scientists have Democratic leanings than Republican. The latter is true, but equally so for US academics generally. We can argue about why, but why single out political science? The charge of bias, meanwhile, is asserted rather than demonstrated.

And what has political science ever done for us? We don’t know why crime rates rise and fall or the effect of deterrents, we can’t solve the financial crisis or stop civil wars, we can’t agree on the state’s role in systems of justice or taxation. As Washington Post columnist Charles Lane argues, “the larger the social or political issue, the more difficult it is to illuminate definitively through the methods of ‘hard science’.” In part this just restates the fact that political science is among the most difficult of the sciences. To conclude that hard problems are better solved by not studying them is ludicrous. Should we slash the physics budget unless the dark-matter and dark-energy problems are solved? Lane’s statement falls for the very myth it wants to attack: that political science is ruled, like physics, by precise, unique, universal rules. In any case, we have little idea how successful political science has been, for politicians rarely pay much heed to evidence-based advice from the social sciences, unless of course the evidence suits them. And to constrain political scientists with utilitarian bean-counting is to miss most of the point anyway. As the likes of John Rawls, Herbert Simon, Robert Axelrod, Kenneth Waltz and Karl Popper have shown, political science has enriched political debate beyond measure.

The general notion that politicians should decide what is or is not worthy of research is perilous. Here, the proper function of democracy is to establish impartial bodies of experts and leave it to them. But Flake’s amendment does more than just traduce a culture of expertise. Among the research he selected for ridicule were studies of gender disparity in politics and models for international analysis of climate change: issues unpopular with right-wingers. In other words, his interference is not just about cost-cutting but has a political agenda. That he and his political allies feel threatened by evidence-based study of politics and society does not speak highly of their confidence in the objective case for their policies. Flake’s amendment is no different in principle to the ideological infringements of academic freedom in Turkey or Iran. It has nothing to do with democracy.

Thursday, July 12, 2012

Name that colour

I don’t read much popular science. That’s not a boast, as if to say that I’m above such things, but a guilty confession – I ought to read more, but am too slow a reader. That I’m missing out is being confirmed for me now as I finally get round to reading Guy Deutscher’s Through the Language Glass, which was shortlisted for the Royal Society Winton Prize last year. I knew this was a book I wanted to read, because it deals in some detail with the linguistics of colour terminology, which I looked into while writing Bright Earth. I was finally moved to get it after writing the piece below for the BBC Future site a month or so ago, and wanting to do more with this very interesting work. Whether I will be able to do that or not remains to be seen, but I’m glad it motivated me to get Deutscher’s book, because it is absolutely splendid. I remember Richard Holmes, chairing the book prize panel, questioning how helpful it really was for a book to advertise itself with Stephen Fry’s quote “Jaw-droppingly wonderful”, but His Fryness is quite correct. There’s another chapter – well, perhaps another section – that I would have added to Bright Earth, had I known some of this stuff: I wasn’t aware that Gladstone (that Gladstone) had postulated that the invention of new dyes and pigments actually stimulated the development of colour terminology itself, since it was only (he said) when people could abstract colours from their manifestations in natural objects that they figured they needed words for them. It’s not at all clear if this is true, but it is an intriguing idea, and not obviously nonsense.

____________________________________________________
The artist Derek Jarman once met a friend on London’s Oxford Street and complimented him on his beautiful yellow coat. His friend replied that he’d bought it in Tokyo, where it wasn’t considered yellow at all, but green.

We don’t always agree about colour. Your red might be my pink or orange. Vietnamese and Korean don’t differentiate blue from green – leaves and sky are both coloured xanh in Vietnam. These overlaps and omissions can seem bizarre if they’re not part of your culture, but aren’t even visible if they are.

But we shouldn’t be too surprised by them. The visible spectrum isn’t like a paint colour chart, neatly separated into blocks of distinct hue, but is a continuum in which each colour blends into the next. Why should we expect to agree on where to set the boundaries, or on which colours are the most fundamental? The yellow band, say, is as wide as the cyan band, so why is yellow considered any more ‘basic’ than cyan?

A new study by physicist Vittorio Loreto at the University of Rome ‘La Sapienza’ and his colleagues argues that this naming and hierarchical ranking of colours isn’t, after all, arbitrary. The researchers say that there is a natural hierarchy of colour terms that arises from the interplay between our innate ability to distinguish one hue from another and the complex cultural negotiation out of which language itself appears.

In essence, their argument pertains to the entire edifice of language: how it is that we come to divide the world into specific categories of object or concept that we can all, within a given culture, agree on. Somehow we arrive at a language that distinguishes ‘cup’, ‘mug’, ‘glass’, ‘bowl’ and so on, without there being well-defined and mutually exclusive ‘natural’ criteria for these terms.

But the researchers have not chosen arbitrarily to focus on colour words. These have long preoccupied linguists, since they offer an ideal multicultural example of how we construct discrete categories from a world that lacks such inherent distinctions. Why don’t we have a hundred basic colour terms like ‘red’, ‘blue’ and so on, given that we can in principle tell apart at least this many hues (think back to those paint charts)? Or why not get by with just four or five colours?

In fact, some cultures do. The Dugum Dani people of New Guinea, for example, have only two colour words, which can best be translated as ‘black’ and ‘white’, or light and dark. A few other pre-literate cultures recognize only three colours: black, white and red. Others have only a handful more.

The curious thing is that these simplified colour schemes are not capricious. For one thing, the named colours tend to match the ‘basic’ colours of more complex chromatic lexicons: red, yellow, blue and so on. What’s more, the colours seem to ‘arrive’ in a culture’s evolving vocabulary in a universal order: first black and white, then red, then green or yellow (followed by the other of this pair), then blue... So there is no known culture that recognizes, say, just red and blue: you don’t tend to ‘get’ blue unless you already have black, white, red, yellow and (perhaps) green.

This universal hierarchy of colour names was first observed [actually Deutscher shows that this wasn’t the first observation, but a rediscovery of an idea proposed in the nineteenth century by the German philologist Lazarus Geiger] by anthropologists Brent Berlin and Paul Kay in 1969, but there has been no explanation for it. This is what Loreto and colleagues now purport to offer. They use a computer model of language evolution in which new words arise as if through a kind of ‘game’ played repeatedly between pairs of individuals in a population: one the speaker, the other the hearer. The speaker might talk about a particular object – a colour say – using a word that the hearer doesn’t already possess. Will the hearer figure out what the speaker is referring to, and if so, will she then adopt the same word herself, jettisoning her own word for that object or recognizing a new sub-category of such objects? It is out of many interactions of this sort, which may or may not act to spread a word, that the population’s shared language arises.

For colour words, this negotiation is biased by our visual perception. We don’t see all parts of the visible spectrum equally: it is easier for us to see small changes in hue (that is, in the wavelength of the light entering our eyes) in some parts than in others. Loreto and colleagues impose this so-called “just noticeable difference function” of colour perception on the inter-agent interactions in their model. That’s what makes it more likely that some bands of the spectrum will begin to emerge as more ‘deserving’ than others of their own colour word. In other words, the population of agents will agree faster on a word associated with some hues than others.

The speed at which a consensus arises about a colour word with an agreed meaning specifies the resulting hierarchy of such words. And the order in which this happens in the computer experiments – red first, then violet, green/yellow, blue, orange and then cyan – is very close to that identified by Berlin and Kay. (Black and white, which aren’t themselves spectral colours, must be assumed at the outset as the crude distinction between dark and light.) Crucially, this sequence can’t be predicted purely from the “just noticeable difference function” – that is, from the physiology of colour vision – but arises only when it is fed into the ‘naming game’.
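For the curious, the flavour of such a model is easy to capture in a few lines of code. The sketch below (in Python) is emphatically not Loreto’s model – the hue bins, the discriminability numbers and the population size are all invented for illustration – but it shows the basic machinery: pairwise naming games whose success rate is biased by how easily a hue can be discriminated, so that easily discriminated hues tend to acquire a shared name sooner.

import random
from itertools import count

# Toy 'naming game' over coarse hue bins. NOT the published model: all the
# numbers below are made up, purely to illustrate how a perceptual bias can
# make consensus on a name arrive faster for some hues than for others.

HUES = 10                      # crude hue bins standing in for the spectrum
AGENTS = 50
# stand-in for the 'just noticeable difference' curve: higher means this hue
# is easier to tell apart from its neighbours, so games about it succeed more often
discriminability = [0.95, 0.85, 0.6, 0.75, 0.9, 0.65, 0.5, 0.6, 0.5, 0.45]

population = [dict() for _ in range(AGENTS)]   # each agent: hue bin -> set of words
fresh = count()

def play(hue):
    speaker, hearer = random.sample(population, 2)
    words = speaker.setdefault(hue, set())
    if not words:
        words.add("w%d" % next(fresh))             # speaker invents a new word
    word = random.choice(sorted(words))
    known = word in hearer.get(hue, set())
    if known and random.random() < discriminability[hue]:
        speaker[hue] = {word}                      # success: both keep only this word
        hearer[hue] = {word}
    else:
        hearer.setdefault(hue, set()).add(word)    # failure: hearer learns the word

def consensus(hue):
    inventories = {frozenset(a.get(hue, set())) for a in population}
    return len(inventories) == 1 and len(next(iter(inventories))) == 1

first_agreed = {}                 # hue bin -> round at which everyone agreed
for t in range(1, 500001):
    play(random.randrange(HUES))
    if t % 1000 == 0:
        for h in range(HUES):
            if h not in first_agreed and consensus(h):
                first_agreed[h] = t
        if len(first_agreed) == HUES:
            break

# hues with high discriminability should, on average, appear first
print(sorted(first_agreed.items(), key=lambda item: item[1]))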

The match isn’t perfect, however. For one thing, violet doesn’t appear in Berlin and Kay’s hierarchy. Loreto and colleagues explain its emergence in their sequence as an artificial consequence of the way reddish hues crop up at both ends of the visible spectrum. And Berlin and Kay listed brown after blue. But brown isn’t a spectral colour – it’s a kind of dark yellow/orange, and so can be considered a variant shade of orange. Whether or not you accept those explanations for the discrepancies, this model of language evolution looks set to offer a good basis for exploring factors such as cultural differences and contingencies, like those Jarman discovered, and how language gets transmitted between cultures, often mutating in the process.

Paper: V. Loreto, A. Mukherjee & F. Tria, Proc. Natl Acad. Sci. USA doi: 10.1073/pnas.1113347109.

Monday, July 09, 2012

Who's in charge?

When I was asked to write a piece for the Guardian about the GSK scandal, my first thought was that it would be nice to know Richard Sykes’ response to the court decision, given that at least some of what GSK is being punished for happened under his watch. Lacking the time to hunt him down, I hoped someone else might do that subsequently. They have. The result is quite astonishing. As the Observer also reports this weekend, he tells us that “I have not had a chance to read the newspapers and have not a clue as to what is going on.”

Is this a joke? Sykes is a busy man, but we are being asked to believe that a law case that has been dragging on for years, involving extremely serious malpractice and resulting in a $3 bn settlement, against the company of which he was chairman during at least part of the relevant period, has somehow passed him by, so that he’s now in the same position as the rest of us in having to read all about it in the papers – and that he hasn’t quite got round to that yet. If there is any truth in this – and what do I know about how these things work? – that is all the more shocking. I really do struggle to imagine a situation in which Sykes has managed to shut out all knowledge of this case, has not been called upon during its course, and now lacks the motivation or the sense of obligation to get up to speed on it. And even if all this were somehow plausible, could he not even at least come up with the kind of blandishments offered by the current GSK CEO about having now put things right? The company has pleaded guilty, for goodness’ sake, it is not even as though he can refuse to comment on the question of guilt and culpability. So Murdoch knew nothing, Diamond knew nothing, now Sykes knew nothing. Is there actually anyone in charge of the world?

Wednesday, July 04, 2012

The drugs aren't working

The Guardian seems to be keeping me busy at the moment. Here’s a piece published today about the GlaxoSmithKline scandal. It was apparently lightly edited for ‘legal’ reasons, but I’m not sure that Coca-Cola is really libelled here. Mind you, in Britain it’s hard to tell. Perhaps I’m safe so long as I don’t mention chiropractors.

______________________________________________________________

Perhaps the most shocking thing about the latest GlaxoSmithKline drug scandal is that malpractice among our overlords still has the ability to shock at all. Yet despite popular cynicism about doctors being in the pockets of the drug companies, there remains a sense that the people responsible for our healthcare are more principled and less corruptible than expenses-fiddling politicians, predatory bankers, amoral media magnates and venal police.

If this were a junk-food company lying about its noxious products, or a tobacco company pushing ciggies on schoolkids, we’d be outraged but hardly surprised. When a major pharmaceutical company is found to have been up to comparable misdemeanours – bad enough to warrant an astonishing $3 bn fine – it seems more of a betrayal of trust.

This is absurd, of course, but it shows how the healthcare industry benefits from its apparent proximity to the Hippocratic Oath. “Do more, feel better, live longer”, GSK purrs. How can we doubt a company that announces as its priorities “Improving the health and well-being of people around the world”, and “Being open and honest in everything we do”?

Now GSK admits that, in effect, it knowingly risked damaging the health of people around the world, and was secretive and fraudulent in some of what it did. Among other things, it promoted the antidepressant drug Paxil, approved only for adults, to people under 18; it marketed other drugs for non-approved uses; it suppressed scientific studies that didn’t suit (for example over the heart-attack risks of its diabetes drug Avandia), and over-hyped others that did; and it hosted outings for doctors in exotic locations and showered them with perks, knowing that this would boost prescriptions of its drugs.

I’m incensed not because this vindicates a conviction that pharmaceutical companies are staffed by profit-hungry liars and cheats, but precisely because I know that they are not: that so many of their scientists, and doubtless executives and marketers too, are decent folk motivated by the wish to benefit the world. We were bamboozled, but they have been degraded.

And it is precisely because Big Pharma really has benefitted the world, making life a great deal more tolerable and advancing scientific understanding, that the industry has acquired the social capital of public trust that GSK has been busy squandering.

But it’s time we accepted that it is a business like any other, and does not operate on a higher, more altruistic plane than Coca-Cola. It will do whatever it can get away with, whether that means redacting scientific reports, bribing academics and physicians, or pushing into ‘grey’ markets without proper consent or precaution. After all, this has happened countless times before. All the giants – AstraZeneca, Bristol-Myers Squibb, Merck, Eli Lilly – have been investigated for bribery. One of the most notorious episodes of misconduct involved Merck’s anti-inflammatory drug Vioxx, withdrawn in 2004 after the company persistently played down its risk of causing cardiovascular problems. History suggests that GSK CEO Andrew Witty’s assurances that lessons have been learnt are meaningless.

As with the banking scandals, GSK’s downfall is partly a failure of management – those at the top (some of the malpractice predates Witty’s incumbency) weren’t watching. It’s partly a failure of culture: the jollies and bribes came to seem normal, ethically unproblematic, even an entitlement, to both the donors and recipients.

And it’s partly a failure of regulation. The US Food and Drug Administration has seemed at times not just toothless but actually collusive. Meanwhile, some American academics, having enjoyed Big Pharma’s kickbacks for decades, are now shrieking about the Physician Payments Sunshine Act, a part of the ObamaCare package which would make it mandatory for physicians to declare any perks or payments received from drug companies greater than $10, whether as speaker fees, theatre tickets or Hawaiian holidays. The protestors claim they will drown in bureaucracy. In reality they will be forced to reveal how much these things supplement their already healthy income. Harvard physician Thomas Stossel claimed in the Wall Street Journal that the backhanders don’t harm patients. The GSK ruling shows otherwise.

But the problems are still deeper. You don’t have to be an anti-capitalist to admit the inadequacies of relying solely on market forces for our drugs – not least for those that, being urgently needed mostly by poor countries, will never turn a profit. Incentives for Global Health, a non-profit organization at Yale University, have argued the case for a global, public-sector drug development agency, funded for example by a Tobin tax. In the unlikely event that our leaders should dare to demand such genuine recompense for the moral bankruptcy of the financial world, there would be few better uses for it – and freedom from the corrupting influence of the profit margin adds another argument to this already compelling case.

One way or another, some rethinking of how drugs are discovered, developed, sold and used is needed, before the noble art of medicine comes to look more like Mr Wormwood selling a dodgy motor for whatever he can get away with.

Tuesday, July 03, 2012

Introducing Iamus

This story was in yesterday’s Guardian in slightly edited form. It was accompanied by a review of some of Iamus’s music by Tom Service, who was not terribly impressed. It’s a shame that Tom had only Hello World! to review, since that was an early piece by Iamus and so very much a prototype – things have moved on since then. I think his review was quite fair, but I had a sense that, knowing it was made by computer, he was looking out for the “computer-ness” in it. This bears on the final part of my story below, for which there was no room in the Guardian. I think one can detect a certain amount of ‘anti-computer prejudice’ in the Guardian comments thread too, though that is perhaps no stronger than the general ‘anti-modernist’ bias. I’d be interested to see what Tom Service makes of the CD when it appears later this year. I carry no torch for Iamus as a composer, but I must admit that I’m growing fond of it and certainly feel it is a significant achievement. Anyway, there will be more on this soon – I’m writing a different piece on the work for Nature, to appear in August.

_______________________________________________________

As soon as you see the title of Iamus’s composition “Transits – Into an Abyss”, you know it’s going to be challenging, modernist stuff. The strings pile up discords, now spooky, now ominous. But if your tastes run to Bartók, Ligeti and Penderecki, you might like it. At least you have to admit that this bloke knows what he’s doing.

But this bloke doesn’t know anything at all. Iamus is a computer programme. Until the London Symphony Orchestra was handed the score, no human had intervened in preparing this music.

“When we tell people that, they think it’s a trick”, says Francisco Vico, leader of the team at the University of Malaga who devised Iamus. “Some say they simply don’t believe us, others say it’s just creepy.” He anticipates that when Iamus’s debut CD is released in September, performed by top-shelf musicians including the LSO, it is going to disturb a lot of folk.

You can get a taste of Iamus’s oeuvre before then, because on 2 July some of Iamus’s compositions will be performed and streamed live from Malaga. The event is being staged to mark the 100th anniversary of the birth of Alan Turing, the man credited with more or less inventing the concept of the computer. It was Turing who devised the test to distinguish humans from artificial intelligence made famous by the opening sequence of Ridley Scott’s Blade Runner. And the performance will itself be a kind of Turing test: you can ask yourself whether you could tell, if you didn’t know, that this music was made by machine.

Iamus composes by mutating very simple starting material in a manner analogous to biological evolution. The evolving compositions each have a kind of musical core, a ‘genome’, which gradually becomes more complex. “Iamus generates an initial population of compositions automatically”, Vico explains, “but their genomes are so simple that they barely develop into a handful of notes, lasting just a few seconds. As evolution proceeds, mutations alter the content and size of this primordial genetic material, and we get longer and more elaborated pieces.” All the researchers specify at the outset is the rough length of the piece and the instruments it will use.
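For what it’s worth, the flavour of this ‘evo-devo’ approach can be caricatured in a few lines of code. The sketch below (Python) is my own toy, not Iamus’s algorithm – the scale, the mutation operators and the ‘development’ rule are all invented – but it shows the principle of a small genome that mutates, grows and then unfolds into a longer melody.

import random

# Toy 'evolutionary development' of a melody: a cartoon of the idea described
# above, nothing more. Iamus's actual representation and rules are far richer.

SCALE = [60, 62, 64, 65, 67, 69, 71, 72]   # C major scale, as MIDI note numbers

def mutate(genome):
    g = list(genome)
    op = random.choice(["change", "insert", "duplicate"])
    if op == "change":
        g[random.randrange(len(g))] = random.choice(SCALE)
    elif op == "insert":
        g.insert(random.randrange(len(g) + 1), random.choice(SCALE))
    else:                                   # duplicate: repeat a fragment, like a motif
        i = random.randrange(len(g))
        j = random.randrange(i, len(g))
        g[j:j] = g[i:j + 1]
    return g

def develop(genome):
    # 'Development': each gene unfolds into a little figure around itself,
    # so a short genome encodes a much longer melody.
    melody = []
    for pitch in genome:
        melody += [pitch, pitch + 4, pitch + 7, pitch]
    return melody

genome = [random.choice(SCALE) for _ in range(3)]   # primordial material
for generation in range(10):
    genome = mutate(genome)
print("genome :", genome)
print("melody :", develop(genome))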

“A single genome can encode many melodies”, explains composer Gustavo Díaz-Jerez of the Conservatory of the Basque Country in San Sebastian, who has collaborated with the Malaga team since the outset and is the pianist on the new recordings. “You find this same idea of a genome in the Western musical canon – that’s why the music makes sense.”

The computer doesn’t impose any particular aesthetic. Although most of its serious pieces are in a modern classical style, it can compose in other genres too, and for any set of instruments. The ‘darwinian’ composition process also lends itself to producing new variations of well-known pieces [PB: I’ve been sent some great variants of the Nokia ringtone] or merging two or more existing compositions to produce offspring – musical sex, you might say.

Using computers and algorithms – automated systems of rules – to make music has a long history. The Greek composer Iannis Xenakis did it in the 1960s, and in the following decade two Swedish composers devised an algorithm for creating nursery-rhyme melodies in the style of Swedish composer Alice Tegnér. In the 1980s computer scientist Kemal Ebcioglu created a program that harmonised chorales in the style of Bach.

As artificial intelligence and machine learning became more sophisticated, so did the possibilities for machine music: now computers could infer rules and guidelines from real musical examples, rather than being fed them to begin with. Computer scientist John ‘Al’ Biles devised an algorithm called GenJam that learns to improvise jazz. A trumpeter himself, Biles performs live alongside GenJam under the name the Al Biles Virtual Quintet, but admits that the algorithm is a rather indifferent player. The same is true of GenBebop, devised by cognitive scientists Lee Spector and Adam Alpern, which improvises solos in the style of Charlie Parker by ‘listening’ to him and iterating its own efforts under the ultimately less-than-discerning eye of an automated internal critic.

One of the most persuasive systems was the Continuator, devised by François Pachet at Sony’s Computer Science Laboratory in Paris. In a Turing test where the Continuator traded licks with an improvising human pianist, expert listeners were mostly unable to guess whether it was the human or the computer playing.

But these efforts still haven’t shown that a computer can make tolerable music from scratch. One of the best known attempts is ‘Emily Howell’, a programme created by music professor David Cope. Yet Howell’s bland, arpeggiated compositions sound like a technically skilled child trying to ape Beethoven or Bach, or like Michael Nyman on a bad day: fine for elevators but not for the concert hall.

This is why Iamus – named after the mythical son of Apollo who could understand the language of birds – is different. This seems to be the first time that music composed purely by computer has been deemed good enough for top-class performers to play it. Díaz-Jerez admits that the LSO were “a little bit sceptical at the beginning, but were very surprised” by the quality of what they were being asked to play. The soprano Celia Alcedo, he says, “couldn’t believe the expressiveness of some of the lines” she was given to sing.

Lennox Mackenzie, the LSO’s chairman, had mixed feelings about the orchestral pieces. “I felt it was like a wall of sound”, he says. “If you put a colour to it, this music was grey. It went nowhere. It was too dense and massive, no instrument stuck out at any point. But at the end of it, I thought it was quite epic.”

“The other thing that struck me”, Mackenzie adds, “was that it was festooned with expression marks, which just seemed arbitrary and meaningless. My normal inclination is to delve into music and find out what it’s all about. But here I don’t think I’d find anything.” But he’s far from discouraging. “I didn’t feel antipathy towards it. It does have something. They should keep trying, I’d say.”

What is slightly disconcerting is that Iamus can produce this stuff endlessly: thousands of pieces, all valid and musically plausible, all fully notated and ready to play, faultless from a technical point of view, and “many of them great”, according to Díaz-Jerez. Such profligacy feels improper: if it’s that easy, can the music really be any good? Yet Díaz-Jerez thinks that the pieces are often better in some respects than those produced by some avant-garde composers, which might revel in their own internal logic but are virtually impossible to play. And crucially, different people have different favourites – it’s not as though the programme just occasionally gets lucky and turns out something good.

How does a performer interpret these pieces, given that there’s no “intention” of the composer to look for? “Suppose I found a score in a library without knowing who wrote it”, says Díaz-Jerez. “I approach these pieces as I would that one – by analysing the score to see how it works.” In that respect, he sees no difference from deducing the structure of an intricate Bach fugue.

You can compare it with computer chess, says philosopher of music Stephen Davies of the University of Auckland in New Zealand. “People said computers wouldn't be able to show the same original thinking, as opposed to crunching random calculations. But now it’s hard to see the difference between people and computers with respect to creativity in chess. Music too is rule-governed in a way that should make it easily simulated.”

However, Iamus might face deeply ingrained prejudice. Brain-scanning studies by neuroscientists Stefan Koelsch and Nikolaus Steinbeis have shown that the same piece of music played to listeners elicits activity in the parts of the brain responsible for ascribing intentions to other humans if they are told that it was composed by a human but not if they are told it was computer-generated. In other words, it matters to our perceptions of expressiveness how we think the music was made. Perhaps Iamus really will need to be marketed as a pseudo-human to be taken seriously.

Thursday, June 28, 2012

Want to win £1000?

I have a piece in the Guardian online on the Mpemba effect and the RSC’s £1000 prize for explaining it. The article is largely unchanged from what I wrote, but here it is anyway.

The aim here was to stimulate suggestions from readers of how this thing can be explained – or even if there’s a real effect to be explained, though few seem to question that. (One does so with amusing literalism, thinking it implies that hot water will always freeze first whatever the temperature difference. All the same, this reinforces Charles Knight’s point that the phenomenon is too ill defined.) I like too the cute popular notion of “heat loss momentum” – check out Newton’s cooling law, please.

But I’m certainly not going to mock the many confused or just plain wrong suggestions put forward, since the whole point of the exercise is to get people engaged, not to laugh at their errors. However, I can’t help being struck by the inevitable one or two who say, apparently in all seriousness, that the answer is just obvious and everyone but them has been too stupid so far to see it. One chap dispenses with all of the additional ‘mysteries’ in the article this way too. Why can all arms of a snowflake sometimes be identical? “The symmetry comes from the initial nucleation of the crystal. It starts symmetrically and keeps growing symmetrically. And computer simulations have shown this.” I can only assume he/she (probably he) has seen some simulated flakes and failed to read the warning that symmetry was imposed on all six arms. He certainly didn’t think it worth bothering to check out the link to Ken Libbrecht’s page, which makes it clear that (as I said in the piece) the side-branches are, according to the standard theory of dendritic growth, amplified randomness. So the entire form of any given flake is somehow inherent in its initial nucleus? Please. I couldn’t help smiling too at the apparent belief of some readers that the Brazil-nut effect was actually discovered in muesli (leading to a discussion on how muesli gets packaged). Anyway, the comments thread provides a nice little cross-section of how folk think about science. And, I think, somewhat encouraging at that, despite the misconceptions.

________________________________________________________________

If you can explain, before the end of July, why hot water freezes faster than cold, you could bag £1000. That’s what the Royal Society of Chemistry (RSC) is offering for “the most creative explanation” of this phenomenon, known as the Mpemba effect. They say that submissions should be “eye-catching, arresting and scientifically sound”, and may use any media, including film.

At the end of the month the problem will also be put to an international summer school for postgraduate science students called Hermes 2012, convened at Cumberland Lodge in Windsor Great Park to present research in materials science and imbue the participants with skills in science communication. The event, organized by Imperial College and sponsored by the RSC, is timed to coincide with the opening of the Olympic Games as a kind of scientific Olympiad. A presentation of the top entries to the RSC’s competition, alongside the efforts of the meeting attendees, will form a highlight of the event on 30 July.

All good fun – except that the Mpemba effect seems at first encounter to be scientific nonsense. Let’s have that again: “why hot water freezes faster than cold”. How can that be? In order to freeze, hot water has to lose more heat than cold, so why would that happen faster? Even if the cooling of hot water somehow catches up with that of the colder water, why should it then overtake, if the two have at that point the same temperature?
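The paradox is easiest to see with Newton’s law of cooling, in which a body’s temperature relaxes exponentially towards that of its surroundings. If that idealization were the whole story, the initially hotter sample could never overtake the cooler one: it simply passes through every temperature the cooler one has already had, only later. The little sketch below (Python, with made-up values for the freezer temperature and the cooling constant) shows the two curves never crossing – which is precisely why any genuine Mpemba effect must involve some extra physics.

import math

# Newton's law of cooling: dT/dt = -k (T - T_env), so
# T(t) = T_env + (T0 - T_env) * exp(-k t).
# With identical k and T_env, the hotter sample lags the cooler one at every
# moment; the illustrative numbers below are guesses, not measurements.

T_ENV = -5.0      # freezer temperature, deg C
K = 0.05          # cooling constant, per minute

def temperature(T0, t):
    return T_ENV + (T0 - T_ENV) * math.exp(-K * t)

for t in range(0, 121, 20):
    print(f"t={t:3d} min   cold start: {temperature(20.0, t):6.2f} C"
          f"   hot start: {temperature(70.0, t):6.2f} C")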

Yet this effect has been attested since antiquity. Aristotle mentions it, as do two of the fathers of modern science, Francis Bacon and René Descartes in the seventeenth century. The effect is today named after a Tanzanian schoolboy, Erasto Mpemba, who was set the project of making ice cream from milk in the 1960s. The pupils were supposed to boil their milk, let it cool, then put it in the fridge to freeze. But Mpemba worried about losing his space in the fridge, and so put in the milk while it was still hot. It froze faster than the others.

When Mpemba learnt a few years later that this seemed to contradict the theory of heat transfer devised by Isaac Newton, he recalled his experiment and asked his teacher to explain it – only to receive a mocking reply. Undeterred, he carried out his own experiments, and asked a visiting university professor from Dar es Salaam, D. G. Osborne, what was going on. Osborne was more open-minded – he asked his technician to repeat the experiment, and found the same result. In 1969 Osborne published the result in a physics education journal. Coincidentally, that same year a physicist in Canada described the same result, saying that it was already folk wisdom in Canada that a car should be washed with cold water in winter, because hot water froze more quickly.

Yet no one really knows if the Mpemba effect is real. You’d think it should be easy to check, but it isn’t. Ice specialist Charles Knight of the National Center for Atmospheric Research in Boulder, Colorado, says that the claim that “hot water freezes faster than cold” is so ill-defined that it’s virtually meaningless. Does it mean when ice first starts to appear, or when the last bit of water is frozen? Both are hard to observe in any case. And there are so many things you could vary: the amount of water, the shape of the containers, the initial temperature difference, the rate of cooling… Do you use tap water, distilled water, de-aerated water, filtered water? Freezing is notoriously capricious: it can be triggered by tiny scratches on the sides of the flask or suspended dust in the liquid, so it’s almost impossible to make truly identical samples differing only in their starting temperature. For this reason, even two samples starting at the same temperature typically freeze at different times. If such ‘seeding’ sites are excluded, water can be ‘supercooled’ well below freezing point without turning to ice – but here experiments are conflicting. Some find that initially hotter water can be supercooled further, others that it can be supercooled less before it freezes.

There is one trivial explanation for Mpemba’s observations. Hot water would evaporate faster, so if there was no top on the flasks then there could have been less liquid left to freeze – so it would happen faster. Tiny gas bubbles in solution could also act as seeds for ice crystals to form – and hot water holds less dissolved gas than cold.

All this means that a single experiment won’t tell you much – you’ll probably have to do lots, with many different conditions, to figure out what’s important and what isn’t. And you’ve only got a month, so get cracking.

Other mysteries to solve at home:

1. Why do the Brazil nuts gather at the top of the muesli? There’s no complete consensus on the cause of the so-called Brazil nut effect, but current explanations include:
- shaken grains in a tall box circulate like convection currents while the big bits get trapped at the top, excluded from the narrow descending current at the sides
- little landslides in the void that opens up temporarily under a big grain as it is shaken upwards ratchet it ever higher
- it's all to do with the effect of air between the grains

The problem is made harder by the fact that, under some conditions, the big grains can sink to the bottom instead – the ‘reverse Brazil nut effect’.

2. Does the water in a bathtub spiral down the plughole in opposite directions in the Northern and Southern Hemisphere? Cyclones rotate counterclockwise in the north and clockwise in the south, a consequence of the Earth’s rotation called the Coriolis effect. But is the effect too weak to govern a plughole vortex? In 1962 an American engineer named Ascher Shapiro claimed that he consistently observed counterclockwise plughole vortices in his lab, but this result has never been verified. The problem is that it’s really hard to rid a bathtub of water of any residual currents that could bias the outcome. (A rough comparison of the accelerations involved is sketched after this list.)

3. Why are all six arms of a snowflake sometimes (but not always) identical? How does one arm know what the other is doing? The standard theory of snowflake formation explains the ornate branching patterns as amplifications of random bumps on the sides of needle-like ice crystals. But if they’re random, how can one arm look like another? One suggestion is that they listen to one another: acoustic vibrations in the ice crystal set up standing-wave patterns that dictate the shape. But this doesn’t seem to work. Most snowflakes aren’t actually as symmetrical as is often supposed – but the fact that some are is still unexplained.
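Back to the plughole question (no. 2 above): a back-of-the-envelope comparison gives a feel for why the Coriolis effect is so easily swamped. The numbers below are illustrative guesses for the drain speed and the leftover swirl, not measurements.

import math

# Rough comparison of the Coriolis acceleration on draining bathwater with the
# acceleration associated with a barely perceptible residual swirl.

OMEGA = 7.29e-5                 # Earth's rotation rate, rad/s
latitude = math.radians(51)     # London, say

f = 2 * OMEGA * math.sin(latitude)   # Coriolis parameter, 1/s
v_drain = 0.01                       # guessed water speed near the plughole, m/s
coriolis_acc = f * v_drain           # roughly 1e-6 m/s^2

v_residual = 0.001                   # leftover swirl of 1 mm/s...
r = 0.1                              # ...at a 10 cm radius
residual_acc = v_residual**2 / r     # centripetal acceleration, roughly 1e-5 m/s^2

print(f"Coriolis acceleration      : {coriolis_acc:.1e} m/s^2")
print(f"Residual-swirl acceleration: {residual_acc:.1e} m/s^2")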

Thursday, June 21, 2012

Society is a complex matter


Is it a book? A booklet? A brochure? Search me, but my, er, tract on social complexity is now published by Springer. This 70-page item was commissioned to explain the case for considering society as a complex system, along the lines envisaged in the FuturICT project (currently one of the contenders for the $1 bn pot offered for the EU’s Flagship initiative), in a way that offers an introduction and primer to folks such as policy-makers. I’d like to think that it amounts to somewhat more than an extended puff piece for FuturICT, although it provides an unabashed summary of that project at the end (written by the project leader Dirk Helbing of ETH) – an initiative whose time has surely come, regardless of whether it will achieve its grand goals. I dearly hope that it is selected for the full Flagship funding some time this year.

In any event, this book(let) could also be seen as a brief, non-comprehensive progress report on the subject broached in my book Critical Mass.

I also have an editorial in a special issue of ChemPhysChem on the subject of nanobubbles. It’s available for free online. I am preparing a feature on this controversial topic for Chemistry World.

Friday, June 15, 2012

Silly names

Here’s my Crucible column for the June issue of Chemistry World. Incidentally, since the magazine chose to illustrate its regular contributors using artwork rather than photos, I have become the man with the narrowest shoulders in chemistry.

This is because I provided them with a photo without shoulders showing, so they drew them in. Andrea Sella, meanwhile, has become an uber-geek – which I suspect he would merrily concede anyway. I hope to see him making some bangs and flashes at Cheltenham tomorrow - after this.
___________________________________________________________
In his new novel The Chemistry of Tears, author Peter Carey displays his deep interest in science. The book alludes to the mechanical inventiveness that sustained the Industrial Revolution, and makes much reference to Charles Babbage’s proto-computer the Difference Engine and to modern scientific analysis of historical artifacts. (The chemistry of tears is given a fleeting nod too.) Having met Carey recently, I know how keen he is to get the science correct. So I hope he won’t be too mortified to discover that he has used ‘silicon’ when he clearly meant ‘silicone’.

Did I feel the thrill of superiority at noticing this? On the contrary, I felt cross on Carey’s behalf, for what could be more idiotic and confusing than giving a chemical compound a name that is all but indistinguishable in spelling and sound from a chemical element? This has long topped my list of Bad Chemical Names. It’s not even as though there is much excuse. When Frederick Kipping, an excellent synthetic chemist at the University of Nottingham, began making organosilicon compounds, including polymeric siloxanes, using Grignard reagents around the turn of the century, he was keen to develop an analogy between carbon compounds and what seemed to be their silicon analogues. For that reason he called the long-chain diphenylsiloxane Ph2SiO that he made in 1901 a ‘silicone’ because its molecular formula was analogous to that of the ketone benzophenone Ph2CO. Yet it was already clear that the silicon compound was polymeric and that there was no chemical analogy to ketones [1]. Now we’re lumbered with Kipping’s confounding name.

It’s not hard to find other examples, and drugs are some of the worst offenders. This, however, is a tough one. There are so many to give names to, and you don’t want them to be totally arbitrary. Yet do they have to be quite so un-euphonious? I don’t hold out any prospect that my wife will ever pronounce ibuprofen properly, and the only reason I do so is that I have acquired the chemist’s pedantry for complex nomenclature. I know that the chemical reasoning here is sounder: it is a contraction of isobutyl propanoic phenolic acid. But it makes you wonder whether some people still get their analgesia from aspirin simply because they can pronounce it at the pharmacist’s.

The same goes for countless other generic drug names: clopidogrel, lansoprazole, bevacizumab, venlafaxine. The brand names are hardly masterful neologisms, sounding more like characters in a bad science fantasy novel, but at least they tend to roll off the tongue: Crestor, Effexor, Valtrex. Of course, the IUPAC names of these compounds are far worse for the lay person, but they are never intended for lay use, and they are tightly constrained by their information content. For generic names, we do have some choice.

These aren’t totally arbitrary, however – there is some method in the madness. A drug’s generic (or ‘international nonproprietary’) name is ultimately fixed by the World Health Organization, but many are first proposed by the US Adopted Names (USAN) Council, which has its own guidelines. In particular, the stem of the name denotes the effect or mode of action: -axine drugs are antidepressants that inhibit uptake of particular neurotransmitters, -vir means an antiviral, -mab is a monoclonal antibody, and so on. Some prefixes and other syllables have particular meanings – lev- and dex- are clearly stereoisomers, for example – but most don’t. USAN aims to avoid names that non-English speakers will find hard to pronounce, as well as ones that turn out to have obscene connotations in other languages, so there are no equivalents of the wonderful arsole.

Despite this logic, is it really just familiarity that separates codeine and paracetamol from atorvastatin and montelukast? Faced with these agglomerations of semi-arbitrary syllables, one has to wonder if half a system is worse than no system at all. The proliferation of brand names only makes matters worse; I had no idea until I looked that taxol is also known as onxol, abraxane and cryoxet. I shouldn’t even call it taxol, of course, since Bristol-Myers Squibb got very upset when its brand name was used instead of the generic paclitaxel. While plenty of brand names have become common names through familiarity (think of Kevlar), this was the reverse: taxol was widely used by chemists before Bristol-Myers Squibb registered it as a trademark, provoking dismay at their appropriation of established usage [2].

Whatever the legal issues, the fact is that taxol works as a name: anyone can say it without effort and not confuse it with something else. With graphene, fullerenes and dendrimers, scientists have shown that they can sometimes master the trick of balancing euphony, descriptiveness and specificity in chemical naming. But there’s still cause to entreat: before you christen your compound, think how it will sound in Boots.

1. K. L. Mittal & A. Pizzi (eds). Handbook of Sealant Technology, p. 13. CRC Press, Boca Raton, 2009.
2. N. White & S. Cohen, Nature 375, 432 (1995).

Wednesday, June 13, 2012

New website articles

I've just put two new pdfs on my website: extended versions of a piece on Turing patterns published in the June issue of Chemistry World, and of a piece on animal photonics and physical coloration published in the May issue of Scientific American. Both have considerably more information than is in the published versions, because you're worth it. You'll find them in the Selection of Articles, under the sections "Patterns" and "Other". And I have an article on "radiofrequency biology" in the 9 June issue of New Scientist, which I fear is not available online without subscription. What, you want that too? OK, give me a moment.

Tuesday, June 12, 2012

Green fireworks

An edited version of this piece has just gone up on the BBC Future site.

____________________________________________________

Some environmental activists have been called killjoys for seeking to ban firework displays. They are concerned that, as one campaigner put it, “fireworks shows spray out a toxic concoction that rains down quietly into lakes, rivers and bays.” But there may be a solution that doesn’t spoil the fun: green fireworks. A team of scientists at the US Army’s Pyrotechnics Technology and Prototyping Division at Picatinny Arsenal in New Jersey, USA, has found more eco-friendly replacements for one of the troublesome chemical components of fireworks, the so-called oxidizer that sets off the explosion.

As you might imagine, the researchers, led by Jared Moretti and Jesse Sabatini, are concerned less with the civilian pyrotechnics unleashed on 4 July in the US, 5 November in the UK, or at every conceivable opportunity in China, and more with military applications such as battlefield flares, which tend to use similar chemical formulations. But Moretti says that their new formulations also “have tremendous potential for civilian fireworks applications.”

Oxidizers are chemical compounds rich in oxygen, which they can relinquish to set the mixture burning. The most common types are nitrates and chlorates or perchlorates. Potassium nitrate is the ‘saltpetre’ used in old recipes for gunpowder, while sodium chlorate is a herbicide notorious for its use in homemade ‘sugar/weed-killer’ bombs. Many civilian and military pyrotechnic devices now use either potassium perchlorate or barium nitrate as the oxidizer. Both of these have environmental drawbacks. The US Environmental Protection Agency (EPA) is scrutinizing the use of perchlorate because it can substitute for iodide in the thyroid gland, disrupting the production of hormones. It can also cause growth abnormalities in embryos. The strict limits placed on perchlorate levels in drinking water by the EPA have hampered military training in the US and threaten to cause problems for civilian firework displays too.

Barium is a health hazard too: it can interfere with heart function and cause constriction of the air passages in breathing. Aside from flares, both potassium perchlorate and barium nitrate are currently used by the US Army in an incendiary mixture called IM-28, which is added to armour-piercing bullets so that the impact creates a bright flash that marks the impact point. Finding a replacement incendiary oxidizer for this application was the immediate motivation for the research by Moretti and colleagues.

Among the alternatives considered already are nitrates that don’t contain barium, in particular sodium nitrate. However, that – as well as another candidate, strontium nitrate – has a different problem: it readily absorbs water vapour from the atmosphere (that is, it is hygroscopic), because the compound is quite soluble in water. This means that the substance is liable to become damp if the pyrotechnic device is stored for a long time, and so it won’t ignite.

Moretti and colleagues have now identified alternatives that don’t seem to have any of the health risks of current oxidizers, nor to suffer from moisture sensitivity. This isn’t just a question of finding another compound that will cause ignition. It also has to produce a bright flash, ideally of white light (different metals, in particular, tend to generate different colours), and should not be so exotic as to be unaffordable. It had also better not be set off too easily: one doesn’t want flares and fireworks detonating in the box if they get too warm.

The researchers find that sodium and potassium periodate (pronounced “per-eye-oh-date”) seem to fulfil all these requirements. These are analogous to perchlorates, with the chlorine atoms replaced with iodine. That’s a crucial difference from the point of view of thyroid toxicity. It seems likely that perchlorate ions can nudge out iodide ions in the thyroid because they have a similar size. But periodate ions are considerably too big to substitute for iodide in the same manner.

Yet isn’t it a bit odd to talk at all of ‘green’ military technology – stuff that is used in combat, perhaps lethally, but doesn’t harm the environment? The apparent irony is not lost on the researchers engaged in such work. But it’s hardly cynical to say that, since armed conflicts do occur whether you like it or not, one would rather not pollute the environment afterwards for civilians.

With that in mind, making military armaments greener has become a significant concern. The US Department of Defense issued a ‘statement of need’ last October calling for research proposals for ‘environmentally advantaged submunitions’ – basically, ‘green’ explosives. For example, the ‘primer’ that sets off the bullet-propelling explosive in small arms typically contains lead, which lingers in firing ranges and accumulates alarmingly in the blood of trainee soldiers and police officers.

High explosives are problematic too. TNT is a carcinogen, although rarely used now in military applications, while the most common alternatives, compounds called HMX and RDX, can cause neurological and reproductive problems. In 1984 a child was hospitalized with epileptic seizures after chewing on a piece of RDX plastic explosive stuck to the clothes of its mother, a munitions worker (and you thought your parenting was irresponsible?). The army is worried about how much of this stuff is left lying around ranges and battlegrounds in unexploded dud shells, which constitute 3-4% of those supplied to troops. Hundreds of thousands of duds were dropped as cluster bombs in the 1991 Gulf War, for example.

The new green incendiary oxidizers represent another facet of this general trend – and they have the added appeal of benefitting peaceful pyrotechnics too.

Reference: J. D. Moretti, J. J. Sabatini & G. Chen, Angewandte Chemie advanced online publication, doi: 10.1002/anie.201202589.

Saturday, May 26, 2012

Some books to browse

The Browser has run an interview in which I recommend five books connected to Curiosity. (I cheat – one is a multi-volume series.) And I just discovered that my short talk on patterns at the NY ‘Wonder Cabinet’ Survival of the Beautiful is now online, along with a little post-talk interview.

Friday, May 25, 2012

Buckled up

I have written a story for Physical Review Focus, of which the pre-edited version is below. There’s more on this topic in my book Shapes, and out of sheer Friday-night generosity I reproduce some of it below too.
__________________________________________________________________

Some of nature’s most delicate forms and patterns, such as the fluted head of a daffodil and the convoluted labyrinths of fingerprints, are created by buckling and wrinkling. These deformations are a response to internal stress, for example as a sheet of soft tissue grows while being constrained at its boundaries. Old paint wrinkles as the paint film swells in some places but stays pinned to the surface below in others.

Because buckling and wrinkling patterns can be highly regular and predictable, they could provide a way of creating complex structures by spontaneous self-organization. In a paper in Physical Review Letters [1], Nicholas Fang at the Massachusetts Institute of Technology in Cambridge and coworkers describe a way of controlling the buckling shapes in small tubes of soft material, and show that they can explain theoretically how the pattern evolves as the tube dimensions are altered.

“These patterns are lovely to look at”, says Michael Marder, a specialist on nonlinear dynamics at the University of Texas at Austin, “and if the ability to control patterns is not yet at the level of control that is likely to interest engineers, it’s a promising step forward.”

“Mechanical buckling has long been suggested as a means of pattern formation in biological tissues”, says mathematician Alan Newell of the University of Arizona, who has previously advanced this as an explanation for the spiral phyllotaxis patterns of leaves and florets. “What’s good about this work is that they do a precise experiment and their results tend to agree with simple theories.”

To ‘grow’ a deformable material so as to induce buckling, Fang’s team use a polymer gel that swells when it absorbs water. They explored tubular geometries not only because these are conceptually simple but because they are relevant to some natural buckling structures, such as the bronchial passage, which may become swollen and wrinkled in asthmatics.

The researchers used a microfabrication technique to make short tubes with diameters (D) of several millimetres, varying the wall thickness (t) and the tube length (h). The tubes are fixed at one end to a solid substrate, creating the constraint that drives buckling. To induce swelling that begins at the free end of the tube, the researchers inverted the tubes in oil and let the ends poke into a layer of water below.

Swelling deformed the tubes into truncated cone shapes, which might then buckle into a many-pointed star-shaped cross-section. In general, the shorter the tubes (the smaller the ratio h/D), the more wrinkles there were around the tube circumference. Surprisingly, the wall thickness had relatively little influence: tubes with the same h/D tended to have a similar buckled shape regardless of the wall thickness.

To understand that, Fang and colleagues used a simple model to calculate the shape that minimizes the total elastic energy of a tube. Buckling costs elastic energy around the circumference, but it can also reduce the elastic energy due to outward bending of the tube as it swells. For a given set of parameters, the two contributions balance to minimize the total energy for a particular number of buckles – which turns out to depend only on h/D and not wall thickness. The experimental results mapped well onto these theoretical predictions of the most stable mode of deformation.
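To make the logic of that balance concrete, here is a minimal sketch (in Python, purely for illustration: the energy expressions and constants below are invented stand-ins, not the functional forms that Fang and colleagues actually use) of how a preferred number of buckles emerges from minimizing two competing energy terms that depend only on h/D:

# Toy illustration only: these energy expressions are invented placeholders,
# not the model in the Physical Review Letters paper.
def total_energy(n, h_over_D):
    circumferential = n**2 * h_over_D      # assumed cost of wrinkling the rim
    bending = 10.0 / (n * h_over_D)        # assumed relief of outward bending
    return circumferential + bending

def preferred_mode(h_over_D, n_max=20):
    # the integer number of buckles with the lowest total energy
    return min(range(2, n_max + 1), key=lambda n: total_energy(n, h_over_D))

for ratio in (0.2, 0.5, 1.0):
    print(ratio, preferred_mode(ratio))    # shorter tubes (smaller h/D) buckle more

With these made-up terms the trend goes the right way – smaller h/D favours more wrinkles – but of course only the real elastic energies give the quantitative agreement with experiment reported in the paper.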

Fang, who usually works on photonic structures for controlling the flow of light, hopes that these regular buckles and wrinkles might offer ways of channeling and directing light and sound waves by scattering. “The basic idea is that the wrinkled structure could absorb or scatter the light field or acoustic waves in a directional way”, he says. “We’re currently testing such structures for applications in ultrasound-mediated drug delivery.”

The results may have implications for natural systems too. Fang says it’s no coincidence that the buckled gel rings resemble slices of bell pepper, for example. “Bell peppers can be considered as a tubular structure that grow under constraints from the ends”, he says. “Often we find a slice of slender peppers display a triangle shape and that of short and squat peppers appear in square or even star-like. Our model suggests that these patterns are determined by the ratio of length to diameter.” The team also thinks that the results might elucidate the buckling patterns of corals and brain tissue.

Xi Chen of Columbia University in New York, who has studied the buckling pattern on the surfaces of fruits and vegetables, is not yet convinced by this leap. “It’s not yet clear where the rather strict constraint on swelling – the key for obtaining the shapes described in their paper – comes from in nature. It’s interesting work but there’s still a large gap before it could be applied directly to natural systems.”

Newell raises a more general note of caution: similarities of form and pattern between an experiment and a theory are just suggestive and not conclusive proof that one explains the other. “To say that the pattern you observe is indeed the one produced by the mechanism you suggest requires one to test the dependence of the pattern on the parameters relevant to the model”, he says. “In this case, the authors test the h/D ratio dependence but it would also have been good to see the dependence of the outcomes on various of the elastic parameters.”

Reference
1. Howon Lee, Jiaping Zhang, Hanqing Jiang, and Nicholas X. Fang, "Prescribed Pattern Transformation in Swelling Gel Tubes by Elastic Instability", Phys. Rev. Lett. 108, 214304 (2012).

From Shapes:

Buckling might conceivably also explain the surface patterning of some fruits and vegetables, such as pumpkins, gourds, melons and tomatoes. These have soft, pulpy flesh confined by a tougher, stiffer skin. Some fruits have smooth surfaces that simply inflate like balloons as they grow, but others are marked by ribs, ridges or bulges that divide them into segments (Fig. 1a). According to Xi Chen of Columbia University in New York, working in collaboration with Zexian Cao in Beijing and others, these shapes could be the result of buckling.


Fig. 1 Real (a) and modelled (b) buckled fruit shapes

This is a familiar process in laminates that consist of a skin and core with different stiffness: think, for example, of the wrinkling of a paint film stuck to wood that swells and shrinks. Under carefully controlled conditions, this process can generate patterns of striking regularity (Figure 2).

Fig. 2 Wrinkles in a thin metal film attached to a rubbery polymer

Chen and colleagues performed calculations to predict what will happen if the buckling occurs not on a flat surface but on spherical or ovoid ones (spheroids). They found well-defined, symmetrical patterns of creases in a thin, stiff skin covering the object’s surface, which depend on three key factors: the ratio of the skin thickness to the width of the spheroid, the difference in stiffness of the core and skin, and the shape of the spheroid – whether, say, it is elongated (like a melon or cucumber) or flattened (like a pumpkin).

The calculations indicate that, for values of these quantities comparable to those that apply to fruits, the patterns are generally either ribbed – with grooves running from top to bottom – or reticulated (divided into regular arrays of dimples), or, in rare cases, banded around the circumference (Figure 3). Ribs that separate segmented bulges are particularly common in fruit, being seen in pumpkins, some melons, and varieties of tomato such as the striped cavern or beefsteak. The calculations show that spheroids shaped like such fruits may have precisely the same number of ribs as the fruits themselves (Figure 1).

Fig. 3 Buckling shapes on spheroids as a function of geometry

For example, the 10-rib pattern of Korean melons remains the preferred state for a range of spheroids with shapes like those seen naturally. That’s why the rib pattern of a fruit may remain quite stable during its growth (as its precise spheroidal profile changes), whereas differences of, say, skin thickness would generate different features in different fruits with comparable spheroidal forms.

Chen suggests that the same principles might explain the segmented shapes of seed pods, the undulations in nuts such as almonds, wrinkles in butterfly eggs, and even the wrinkle patterns in the skin and trunk of elephants. So far, the idea remains preliminary, however. For one thing, the mechanical behaviour of fruit tissues hasn’t been measured precisely enough to make close comparisons with the calculations. And the theory makes some unrealistic assumptions about the elasticity of fruit skin. So it’s a suggestive argument, but far from proven. Besides, Chen and his colleagues admit that some of the shaping might be influenced by subtle biological factors such as different growth rates in different parts of the plant, or direction-dependent stiffness of the tissues. They argue, however, that the crude mechanical buckling patterns could supply the basic shapes that plants then modify. As such, these patterns would owe nothing to evolutionary fine-tuning, but would be as inevitable as the ripples on a desert floor.

I daresay Figure 2 may have already put you in mind of another familiar pattern too. Don’t those undulating ridges and grooves bring to mind the traceries at the tips of your fingers (Figure 4)? Yes indeed; and Alan Newell of the University of Arizona has proposed that these too might be the product of buckling as the soft tissue is compressed during our early stages of growth.

Fig. 4 A human fingerprint

About ten weeks into the development of a human foetus, a layer of skin called the basal layer starts to grow more quickly than the two layers – the outer epidermis and the inner dermis – between which it is sandwiched. This confinement gives it no option but to buckle and form ridges. Newell and his colleague Michael Kücken have calculated what stress patterns will result in a surface shaped like a fingertip, and how the basal layer may wrinkle up to offer maximum relief from this stress.

The buckling is triggered and guided by tiny bumps called volar pads that start to grow on the foetal fingertips after seven weeks or so. The shape and positions of volar pads seem to be determined in large part by genetics – they are similar in identical twins, for instance. But the buckling that they produce contains an element of chance, since it depends (among other things) on slight non-uniformities in the basal layer. The American anatomist Harold Cummins, who studied volar pads in the early twentieth century, commented presciently on how they influence the wrinkling patterns in ways that cannot be fully foreseen, and which echo universal patterns elsewhere: “The skin possesses the capacity to form ridges, but the alignments of these ridges are as responsive to stresses in growth as are the alignments of sand to sweeping by wind or wave.” Newell and Kücken found that the shape of the volar pads governs the print patterns: if they are highly rounded, the buckling generates concentric whorls, whereas if the pads are flatter, arch-shaped ridges are formed (Figure 5). Both of these are seen in real fingerprints.

Fig. 5 Whorls (top) and arches (bottom) in a model of fingerprint patterns.

Wednesday, May 23, 2012

It's not only opposites that attract

My latest news article for Nature is here – it was altered very little in the editing, so I shan’t include the original. Was this problem really not put to bed in the nineteenth century? Apparently not. But an awful lot gets forgotten in the annals of science…

Slippery slopes


I often used to get asked if the image on the cover of the UK version of Critical Mass, shown here, is real (it is). It was a great choice by Heinemann, a perfect visual metaphor for the kind of social group behaviour that the book discusses. Now the science I presented there has been extended to include the very phenomenon depicted. In a paper in Physical Review E, Thomas Holleczek and Gerhard Tröster of ETH in Zurich present an agent-based model of ‘skiing traffic’ - a modification of the pedestrian and traffic models that motivate the early part of my book, which incorporates the physics of skiing into the rules of motion of the agents. The researchers aim, among other things, to develop a model that can predict congestion on ski slopes, so that appropriate safety measures such as slope widening can be undertaken. It’s a complex problem, because the trajectories of skiers depend on a host of factors: the slope and friction of the snow, for example, may determine how many turns they make. To my eye, the work is still in the preliminary stages: the matching of model predictions with observed data for skier density, speed and number of turns still leaves something to be desired, at least in the cases studied. But already it becomes clear what the most salient factors are that govern these things, and the discrepancies will feed back into better model parameterization. There’s more on the work here.
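For anyone wondering what an agent-based model of this sort looks like at its barest, here is a minimal sketch (in Python, purely illustrative: the update rules, parameters and names are my own invention, not anything from Holleczek and Tröster’s paper) of agents accelerating down a slope, losing speed to friction and making occasional turns:

import random

# Illustrative sketch only: none of these rules or numbers come from the paper.
class Skier:
    def __init__(self, x):
        self.x, self.y, self.speed, self.heading = x, 0.0, 0.0, 0.0

    def step(self, slope=0.2, friction=0.05, dt=1.0):
        # gravity pulls the agent downhill; friction caps its speed
        self.speed += (slope - friction * self.speed) * dt
        # occasionally pick a new turning direction
        if random.random() < 0.3:
            self.heading = random.choice([-0.5, 0.5])
        self.x += self.speed * self.heading * dt   # sideways drift from turning
        self.y += self.speed * dt                  # progress down the slope

skiers = [Skier(x=float(i)) for i in range(5)]
for _ in range(50):
    for s in skiers:
        s.step()
print([round(s.x, 1) for s in skiers])             # lateral spread after 50 steps

A real model of this kind would add collision avoidance between agents and proper carving physics for the turns; the point of the sketch is only the structure – simple local rules per agent, iterated over time – from which collective patterns such as congestion emerge.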

Monday, May 21, 2012

Last word

One of the sections of New Statesman I most enjoy is This England, which supplies little snippets of the poignantly weird, stupid and ridiculous kinds of behaviour that this scepter’d isle seems to produce in such abundance. I thought I had found a candidate in this week’s New Scientist, until I saw that it comes from New South Wales. It is one of the questions for The Last Word, and it made me laugh out loud:

“Could anyone explain why I can hold an electric fence between finger and thumb and feel only a tiny pulse in the finger, yet my wife can touch the same spot and feel a much larger pulse through her whole arm? If I touch my wife with one finger on my other hand as I hold the fence, I feel a solid shock through both arms and across my chest and my wife feels a massive shock leaving her shaking and weak. Footwear type does not seem to play a role.”

That last line is a stroke of sheer genius. Well, can anyone indeed explain it? One is tempted to imagine it has something to do with living in Australia, but somehow I don’t have too much difficulty seeing this sort of thing going on in our own beloved South Wales either.

Thursday, May 17, 2012

Galileo versus Bacon?

Andrew Robinson gives me a kind review in this week’s New Scientist (not available free online, but Andrew has put it on his website here). But he’s not convinced by aspects of my thesis, specifically with the following quote:
“It is [Francis] Bacon’s picture (derived from the natural magic tradition) and not Galileo’s (which drew as much on scholastic deduction from theorem and axiom as it did on observation), that conditioned the emergence of experimental, empirical science.”
Against this, Andrew contrasts Einstein saying that Galileo was “the father of modern physics – indeed, of modern science altogether” because he was the first to insist that “pure logical thinking cannot yield us any knowledge of the empirical world.”

The first thing to notice is that these two statements of what Galileo actually did are entirely compatible, when read carefully. In that sense, Einstein’s comment does not at all disavow mine.

But more revealing is the fact that Andrew has chosen to bring Einstein’s authority to bear. Now, it happens that I am writing about Einstein at the moment, which is reaffirming my deep respect for his wisdom as well as his scientific acumen. But one thing Einstein is not is a historian of science. And that is important not just because it means Einstein made no deep, careful analysis of the evolution of science but because his position is precisely the one that scientists for the past hundred years or more have loved to assert, making Galileo their idol and the model of the “modern scientist”. Historians of science today adopt a much more nuanced position. Moreover, while it is true that in Einstein’s time there were still some science historians who pushed this Whiggish line that I criticize in my book, scholarship has moved on. In other words, while Einstein is so often considered the ultimate arbiter of all things scientific (a role that Stephen Hawking is unfortunately now often awarded instead), this is one situation in which his opinion is decidedly amateur. (I am an amateur here too, of course, but my view is one that many professionals have already laid out. I don't make any great claim to originality here.)

All the same, there is certainly a debate to be had about the relative influences of the Baconian versus the Galilean (or for that matter, Aristotelian, Cartesian, Newtonian and Boylean) approaches to science. I’d hope my book can help a little to stimulate that discussion, and I’m glad Andrew brings it to the fore.

Tuesday, May 15, 2012

Curioser...

So: reviews. I read somewhere recently that some writers still feel it the proper thing to do, if not to never read them, then at least not to respond to them. But where’s the fun in that? However, I’m fortunate in having, so far, little to respond to: the Literary Review (not online) and the Daily Telegraph both liked Curiosity. I think it is fair to say that the Scotsman on Sunday and the Sunday Telegraph liked some things about it too, but had reservations about others. I can’t quibble with Doug Johnstone in the former (not least because he was so kind about The Music Instinct), since his reservations are a matter of taste: he wanted less history and more science (which is one reason I’m pleased when I’m not described as a science writer), and found some bits boring. (If you’re looking for science, the late Renaissance court scene isn’t likely to be your thing.) Noel Malcolm in the Sunday Telegraph offers, as one would expect of him, a very well informed opinion. And he makes the good point that a key transition is that of objects from being representations of something else to being material things deserving study for their own sake. I have not particularly tackled that issue, and I’m not aware that anyone else has, at least to the point of generating a solid story about it.

Malcolm points out that science historians have been, for some time, saying much of what I’m saying. This is true, and I’m not sure I can see how I could have made it more plain in the book (it is first said in the Preface…) without sounding repetitious. My only real gripe, though, is with his suggestion that Frances Yates is my “leading authority”. The kindest word I can find for this notion is “bizarre”. It is so off the mark that I have to suspect there is some other agenda here (if I have “leading authorities”, they are the likes of Lorraine Daston, Katharine Park, Steven Shapin, Simon Schaffer, Mary Baine Campbell, Catherine Wilson, Neil Kenny, William Eamon, William Newman, Lisa Jardine…). I know that Yates is now out of favour (although “batty” is putting it a little strongly) – but in any event, she is used here in much the same way as other older historians of science such as Rosemary Syfret, Alistair Crombie, Lynn Thorndike and Margery Purver. Yes, it’s a very odd remark indeed, and I suppose a reminder of the fact that sometimes engaging an expert reviewer has its pitfalls: one can get pulled way off course by the unseen currents of academic battles, antipathies and allegiances.

Sunday, May 13, 2012

Science and wonder

This piece appeared in the 30 April issue of the New Statesman, a “science special”, for which I was asked to write about whether “science has lost its sense of wonder.”
___________________________________________________

The day I realised the potential of the internet was infused with wonder. Not wonder at the network itself, however handy it would become for shovelling bits, but at what it revealed, televised live by NASA, as I crowded round a screen with the other staff of Nature magazine on 16 July 1994. That was the day the first piece of Comet Shoemaker-Levy 9 smashed into Jupiter, turning our cynicism about previous astronomical fireworks promised but not delivered into the carping of ungrateful children. There on our cosmic doorstep bloomed a fiery apocalypse that left an Earth-sized hole in the giant planet’s baroquely swirling atmosphere. This was old-style wonder: awe tinged with horror at forces beyond our comprehension.

Aristotle and Plato didn’t agree on much, but they were united in identifying wonder as the origin of their profession: as Aristotle put it, “It was owing to their wonder that men began to philosophize”. This idea appeals to scientists, who frequently enlist wonder as a goad to inquiry. “I think everyone in every culture has felt a sense of awe and wonder looking at the sky”, wrote Carl Sagan, locating in this response the stirrings of a Copernican desire to know who and where we are.

But that’s not the only direction in which wonder may take us. To Thomas Carlyle, wonder sits at the beginning not of science but of religion. That is the central tension in forging an alliance of wonder and science: will it make us curious, or induce us to prostrate ourselves in pitiful ignorance?

We had better get to grips with this question before too hastily appropriating wonder to sell science. That’s surely what is going on when pictures from the Hubble Space Telescope are (unconsciously?) cropped and coloured to recall the sublime iconography of Romantic landscape painting, or the Human Genome Project is wrapped in Biblical rhetoric, or the Large Hadron Collider’s proton-smashing is depicted as “replaying the moment of creation”. The point is not that such things are deceitful or improper, but that if we want to take that path, we should first consider the complex evolution of science’s relation to wonder.

For Sagan, wonder is evidently not just an invitation to be curious but a delight: it is wonderful. Maybe the ancients felt this too; the Latin equivalents admiratio and mirabilia seem to have their roots in an Indo-European word for ‘smile’. But this was not the wonder enthusiastically commended by medieval theologians, which was more apt to induce fear, reverence and bewilderment. Wonder was a reminder of God’s infinite, unknowable power – and as such, it was the pious response to nature, as opposed to the sinful prying of ‘curiosity’, damned by Saint Augustine as a ‘lust of the eyes’.

In that case, wonder was a signal to cease questioning and fall to your knees. Historians Lorraine Daston and Katharine Park argue that wonder and curiosity followed mirror-image trajectories between the Middle Ages and the Enlightenment, from good to bad and vice versa, conjoining symbiotically only in the sixteenth and seventeenth centuries – not incidentally, the period in which modern science was born.

It’s no surprise, then, to find the early prophets of science uncertain how to manage this difficult emotion of wonder. Francis Bacon admitted it only as a litmus test of ignorance: wonder signified “broken knowledge”. The implicit aim of Bacon’s scientific programme was to make wonders cease by explaining them, a quest that began with medieval rationalists such as Roger Bacon and Albertus Magnus. That which was understood was no longer wonderful.

Undisciplined wonder was thought to induce stupefaction. Descartes distinguished useful wonder (admiration) from useless (astonishment, literally a ‘turning to stone’ that “makes the whole body remain immobile like a statue”). Useful wonder focused the attention: it was, said Descartes, “a sudden surprise of the soul which makes it tend to consider attentively those objects which seem to it rare and extraordinary”. If the ‘new philosophers’ of the seventeenth century admitted wonder at all, it was a source of admiration, not debilitating fear. The northern lights might seem “frightful” to the “vulgar Beholder”, said Edmond Halley, but to him they would be “a most agreeable and wish’d for Spectacle”.

Others shifted wonder to the far side of curiosity: something that emerges only after the dour slog of study. In this way, wonder could be dutifully channelled away from the phenomenon itself and turned into esteem for God’s works. “Wonder was the reward rather than the bait for curiosity”, say Daston and Park, “the fruit rather than the seed.” It is only after he has carefully studied the behaviour of ants to understand how elegantly they coordinate their affairs that Dutch naturalist Jan Swammerdam admits to his wonder at how God could have arranged things thus. “Nature is never so wondrous, nor so wondered at, as when she is known”, wrote Bernard Fontenelle, secretary of the French Academy of Sciences. This is a position that most modern scientists, even those of a robustly secular persuasion, are comfortable with: “The science only adds to the excitement and mystery and awe of a flower”, said physicist Richard Feynman.

This kind of wonder is not an essential part of scientific practice, but may constitute a form of post hoc genuflection. It is informed wonder that science generally aims to cultivate today. The medieval alternative, regarded as ignorant, gaping wonder, was and is denounced and ridiculed. That wonder, says social historian Mary Baine Campbell, “is a form of perception now mostly associated with innocence: with children, the uneducated (that is, the poor), women, lunatics, and non-Western cultures… and of course artists.” Since the Enlightenment, Daston and Park concur, uncritical wonder has become “a disreputable passion in workaday science, redolent of the popular, the amateurish, and the childish.” Understanding nature was a serious business, requiring discipline rather than pleasure, diligence rather than delight.

Descartes’ informed, sober wonder re-emerged as an aspect of Romanticism, whether in the Naturphilosophie of Schelling and Goethe or the passion of English Romantics like Coleridge, Shelley and Byron, who had a considerable interest in science. Now it was not God but nature herself who was the object of awe and veneration. While natural theologians such as William Paley discerned God’s handiwork in the minutiae of nature, the grander marvels of the Sublime – wonder’s “elite relative” as Campbell aptly calls it – exposed the puny status of humanity before the ungovernable forces of nature. The divine creator of the Sublime was no intricate craftsman who wrought exquisite marvels, but worked only on a monolithic scale, with massive and inviolable laws. He (if he existed at all) was an architect not of profusion but of a single, awesome order.

Equally vexed during science’s ascension was the question of what was an appropriate object for wonder. The cognates of the Latin mirabilia – marvels and miracles – reveal that wonder was generally reserved for the strange and rare: the glowing stone, the monstrous birth, the fabulous beast. No mere flower would elicit awe like Feynman’s – it would have to be misshapen, or to spring from a stone, or have extraordinary curative powers. This was a problem for early science, because it threatened to misdirect curiosity towards precisely those objects that are the least representative of the natural order. When the early Royal Society sought to amass specimens for its natural history collection, it was frustrated by the inclination of its well-meaning donors throughout the world to donate ‘wonderful’ oddities, thinking that only exotica were worthy gifts. If they sent an egg, it would be a ‘monstrous’ double-shelled one; if a chicken, it had four legs. What they were supposed to do with the four-foot cucumber of one benefactor was anyone’s guess.

This collision of the wondrous with the systematic was evident in botanist Nehemiah Grew’s noble efforts to catalogue the Society’s chaotic collection in the 1680s. What this “inventory of nature” needed, Grew grumbled, were “not only Things strange and rare, but the most known and common amongst us.” By fitting strange objects into his complex classification scheme, Grew was attempting to neutralize their wonder. Underlying that objective was a growing conviction that nature’s order (or was it God’s?) brooked no exceptions. In earlier times, wondrous things took their significance precisely from their departure from the quotidian: monstrous births were portents, as the term itself implied (monstrare: to show). Aristotle had no problem with such departures from regular laws – but precisely because they were exceptions, they were of little interest. Now, in contrast, these wonders became accommodated into the grand system of the world. Far from being aberrations that presaged calamity and change, comets obeyed the same gravitational laws as the planets.

There is perhaps a little irony in the fact that, while attempting to distance themselves from a love of wonders found in the tradition of collectors of curiosities, these early scientists discovered wonders lurking in the most prosaic and unlikely of places, once they were examined closely enough. Robert Hooke’s Micrographia (1665), a gorgeously illustrated book of microscopic observations, was a compendium of marvels equal to any fanciful medieval account of journeys in distant lands. Under the microscope, mould and moss became fantastic gardens, lice and fleas were intricate armoured brutes, and the multifaceted eyes of a fly reflected back ten thousand images of Hooke’s laboratory. Micrographia shows us a determined rationalist struggling to discipline his wonder into a dispassionate record.

Stern and disciplined reason triumphed: it came to seem that science would bleach the world of wonder. Thence the disillusion in Keats’ Lamia:
Do not all charms fly
At the mere touch of cold philosophy?
There was an awful rainbow once in heaven:
We know her woof, her texture; she is given
In the dull catalogue of common things.

But science today appreciates that the link between curiosity and wonder should not and probably cannot be severed, for true curiosity – as opposed, say, to obsessive pedantry, acquisitiveness or problem-solving – grinds to a halt when deprived of wonder’s fuel. You might say that we first emancipated curiosity at the expense of wonder, and then re-admitted wonder to take care of public relations. Yet in the fear of the subjective that characterizes scientific discourse, wonder is one of the casualties; excitement and fervour remain banished from the official records. This does not mean they aren’t present. Indeed, the passions involved in wonder and curiosity, as an aspect of the motivations for research, are a part of the broader moral economy of science that, as Lorraine Daston says, “cannot dictate the products of science in their details [but is] the framework that gives them coherence and value.”

Pretending that science is performed by people who have undergone a Baconian purification of the emotions only deepens the danger that it will seem alien and odd to outsiders, something carried out by people who do not think as they do. Daston believes that we have inherited a “view of intelligence as neatly detached from emotional, moral, and aesthetic impulses, and a related and coeval view of scientific objectivity that brand[s] such impulses as contaminants.” It’s easy to understand the historical motivations of this attitude: the need to distinguish science from credulous ‘enthusiasm’, to develop an authoritative voice, to strip away the pretensions of the mystical Renaissance magus acquiring knowledge by personal revelation. But we no longer need this dissimulation; worse, it becomes a defensive reflex that exposes scientists to the caricature of the emotionally constipated boffin, hiding within thickets of jargon.

They were never really like this, despite their best efforts. Reading Robert Boyle’s account of witnessing phosphorus for the first time, daubed on the finger of a German chemical showman to trace out “Domini” on his sister’s expensive carpet in Pall Mall, you can’t miss the wonder tinged with fear in his account of this “mixture of strangeness, beauty and frightfulness”.

That response to nature’s spectacle remains. It’s easy to mock Brian Cox’s spellbound admiration as he looks heavenward, but the spark in his eyes isn’t just there for the cameras. You only have to point binoculars at the crescent moon on a clear night, seeing as Galileo did the sunlit peaks and shadowed valleys where lunar day becomes night, to see why there is no need to manufacture a sense of wonder about such sights.

Through a frank acknowledgement of wonder – admitting it not just for marketing, but into the very inception of scientific inquiry – it might be possible to weave science back into ordinary experience, to unite the objective with the subjective. Sagan suggested that “By far the best way I know to engage the religious sensibility, the sense of awe, is to look up on a clear night.” Richard Holmes locates in wonder a bridge between the sentiments of the Romantic poets and those of their scientific contemporaries.

Science deserves this poetry, and needs it too. When his telescope showed the Milky Way to be not a cloudy vapour but “unfathomable… swarms of small stars placed exceedingly close together”, Galileo already did better than today’s astronomers in conveying his astonishment and wonder without compromising the clarity of his description. But look at what John Milton, who may have seen the same sight through Galileo’s own telescope when he visited the old man under house arrest in Arcetri, made of this vision in Paradise Lost:
A broad and ample road, whose dust is gold,
And pavement stars, as stars to thee appear
Seen in the galaxy, that milky way
Which nightly as a circling zone thou seest
Powdered with stars.

Not even Carl Sagan could compete with that.