Friday, June 24, 2011

Movie characters mimic each other's speech patterns


Here’s my latest news story for Nature News.
****************************************************
Script writers have internalized the unconscious social habits of everyday conversations.

Quentin Tarantino's 1994 film Pulp Fiction is packed with memorable dialogue — 'Le Big Mac', say, or Samuel L. Jackson's biblical quotations. But remember this exchange between the two hitmen, played by Jackson and John Travolta?

Vincent (Travolta): "Antwan probably didn't expect Marsellus to react like he did, but he had to expect a reaction".
Jules: "It was a foot massage, a foot massage is nothing, I give my mother a foot massage."

Computer scientists Cristian Danescu-Niculescu-Mizil and Lillian Lee of Cornell University in Ithaca, New York, see the way Jules repeats the word 'a' used by Vincent as a key example of 'convergence' in language. "Jules could have just as naturally not used an article," says Danescu-Niculescu-Mizil. "For instance, he could have said: 'He just massaged her feet, massaging someone's feet is nothing, I massage my mother's feet.'"

The duo show in a new study that such convergence, which is thought to arise from an unconscious urge to gain social approval and to negotiate status, is common in movie dialogue. It "has become so deeply embedded into our ideas of what conversations 'sound like' that the phenomenon occurs even when the person generating the dialogue [the scriptwriter] is not the recipient of the social benefits", they say.

“For the last forty years, researchers have been actively debating the mechanism behind this phenomenon”, says Danescu-Niculescu-Mizil. His study, soon to be published in workshop proceedings [1], cannot yet say whether the ‘mirroring’ tendency is hard-wired or learnt, but it shows that the tendency does not rely on the spontaneous prompting of another individual, nor on a genuine desire for his or her approval.

“This is a convincing and important piece of work, and offers valuable support for the notion of convergence”, says philologist Lukas Bleichenbacher at the University of Zurich in Switzerland, a specialist on language use in the movies.

The result is all the more surprising given that movie dialogue is generally recognized to be a stylized, over-polished version of real speech, serving needs such as character and plot development that don’t feature in everyday life. “The method is innovative, and kudos to the authors for going there”, says Howie Giles, a specialist in communication at the University of California at Santa Barbara.

"Fiction is really a treasure trove of information about perspective-taking that hasn't yet been fully explored," agrees Molly Ireland, a psychologist at the University of Texas at Austin. "I think it will play an important role in language research over the next few years."

But, Giles adds, "I see no reason to have doubted that one would find the effect here, given that screenwriters mine everyday discourse to make their dialogues appear authentic to audiences".

That socially conditioned speech becomes an automatic reflex has long been recognized. “People say ‘oops’ when they drop something”, Danescu-Niculescu-Mizil explains. “This probably arose as a way to signal to other people that you didn't do it intentionally. But people still say ‘oops’ even when they are alone! So the presence of other people is no longer necessary for the ‘oops’ behaviour to occur – it has become an embedded behaviour, a reflex.”

He and Lee wanted to see if the same was true for conversational convergence. To do that, they needed the seemingly unlikely situation in which the person generating the conversation could not expect any of the supposed social advantages of mirroring speech patterns. But that’s precisely the case for movie script-writers.

So the duo looked at the original scripts of about 250,000 conversational exchanges in movies, and analysed them to identify nine previously recognized classes of convergence.
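Their measure of convergence, roughly speaking, asks whether a reply is more likely to contain a given class of function word when the prompt that triggered it contains that class too. Here is a minimal sketch of that idea in Python; the marker lists and the two-exchange ‘corpus’ are my own illustrative stand-ins, not the study’s actual lexicons or data.

# Toy sketch of a convergence measure: for a word class C,
# Conv(C) = P(reply exhibits C | prompt exhibits C) - P(reply exhibits C).
# A positive value means characters use class C more often right after
# their interlocutor has used it - the 'mirroring' effect.

MARKERS = {
    "article": {"a", "an", "the"},            # illustrative word lists only
    "negation": {"no", "not", "never"},
    "quantifier": {"all", "some", "many", "few"},
}

def exhibits(utterance, marker_words):
    """True if the utterance contains any word of the marker class."""
    return any(w in marker_words for w in utterance.lower().split())

def convergence(exchanges, marker_words):
    """Compute Conv(C) over a list of (prompt, reply) pairs."""
    base = sum(exhibits(r, marker_words) for _, r in exchanges) / len(exchanges)
    triggered = [exhibits(r, marker_words)
                 for p, r in exchanges if exhibits(p, marker_words)]
    if not triggered:
        return 0.0
    return sum(triggered) / len(triggered) - base

exchanges = [
    ("he had to expect a reaction", "it was a foot massage"),
    ("nobody ever expects anything", "nothing happened to him"),
]
for name, words in MARKERS.items():
    print(name, round(convergence(exchanges, words), 3))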

They found that such convergence is common in the movie dialogues, although less so than in real life – or, standing proxy for that here, in actual conversational exchanges held on Twitter. In other words, the writers have internalized the notion that convergence is needed to make dialogue ‘sound real’. “The work makes a valid case for the use of ‘fictional’ data”, says Bleichenbacher.

Not all movies showed the effect to the same extent. “We find that in Woody Allen movies the characters exhibit very low convergence”, says Danescu-Niculescu-Mizil – a reminder, he adds, that “a movie does not have to be completely natural to be good.”

Giles remarks that, rather than simply showing that movies absorb the unconscious linguistic habits of real life, there is probably a two-way interaction. “Audiences use language devices seen regularly in the movies to shape their own discourse”, he points out. In particular, people are likely to see what types of speech ‘work well’ in the movies in enabling characters to gain their objectives, and copy that. “One might surmise that movies are the marketplace for seeing what’s on offer, what works, and what needs purchasing and avoiding in buyers’ own communicative lives”, Giles says.

Danescu-Niculescu-Mizil hopes to explore another aspect of this blurring of fact and fiction. “We are currently exploring using these differences to detect ‘faked’ conversations”, he says. “For example, I am curious to see whether some of the supposedly spontaneous dialogs in so-called ‘reality shows’ are in fact all that real.”

1. C. Danescu-Niculescu-Mizil & L. Lee, Proc. ACL Workshop on Cognitive Modeling and Computational Linguistics, Portland, Oregon, 76-87 (Association for Computational Linguistics, 2011). Available as a preprint here.

I received some interesting further comments on the work from Molly Ireland, which I had no space to include fully. They include some important caveats, so here they are:

I think it's important to keep in mind, as the authors point out, that fiction can't necessarily tell us much about real-life dialog. Scripts can tell us quite a bit about how people think about real-life dialog though. Fiction is really a treasure trove of information about perspective-taking that hasn't been fully explored in the past. Between Google books and other computer science advances (like the ones showcased in this paper), it's become much easier to gain access to millions of words of dialog in novels, movies, and plays. I think fiction will play an important role in language and perspective-taking research over the next few years.

Onto their findings: I'm not surprised that the authors found convergence between fictional characters, for a couple of reasons. They mention Martin Pickering and Simon Garrod's interaction alignment model in passing. Pickering and Garrod basically argue that people match a conversation partner's language use because it's easier to reuse language patterns that you've just processed than it is to generate a completely novel utterance. Their argument is partly based on syntactic priming research that shows that people match the grammatical structures of sentences they've recently been presented with – even when they're alone in a room with nothing but a computer. So first of all, we know that people match recently processed language use in the absence of the social incentives that the authors mention (e.g., affection or approval).

Second, all characters were written by the same author (or the same 2-3 authors in some scripts). People have fairly stable speaking styles. So even in the context of scriptwriting, where authors are trying to write distinct characters with different speaking styles, you would expect two characters written by one author with one relatively stable function word fingerprint to use function words similarly (although not identically, if the author is any good).

The authors argue that self-convergence would be no greater than other-convergence if these cold, cognitive features of language processing [the facts that people tend to (a) reuse function words from previous utterances and (b) consistently sound sort of like themselves, even when writing dialog for distinct characters] were driving their findings. That would only be true if authors failed to alter their writing style at all between characters. Adjusting one's own language style when imagining what another person might say probably isn't conscious. It's probably an automatic consequence of taking another person's perspective. An author would have to be a pretty poor perspective-taker for all of his characters to sound exactly like he sounds in his everyday life.

Clearly I'm skeptical about some of the paper's claims, but I would be just as skeptical about any exploration into a new area of research using an untested measure of language convergence (including my own research). I think that the paper's findings regarding sex differences in convergence and differences between contentious and neutral conversations could turn out to be very interesting and should be looked at more closely – possibly in studies involving non-experts. I would just like to look into alternate explanations for their findings before making any assumptions about their results.

Thursday, June 23, 2011

Einstein and his precursors

From time to time, Nature used to receive (and doubtless still does) crank letters claiming that Einstein was not the first to derive E=mc2, but that this equation was first written down, after a fashion, by one Friedrich Hasenöhrl, an Austrian physicist with a perfectly respectable, if unremarkable, pedigree and career who was killed in the First World War. This was a favourite ploy of those cranks whose mission in life was to discredit Einstein’s theory of relativity – so much so that I had two such folks discuss it in my novel The Sun and Moon Corrupted. But not until now, while reading Alan Beyerchen’s Scientists Under Hitler (Yale University Press, 1977), did I realise where this notion originated. The idea was put about by Philipp Lenard, the Nobel prizewinner and virulently anti-Semitic German physicist and member of the Nazi party. Lenard put forward the argument in his 1929 book Grosse Naturforscher (Great Natural Researchers), in which he sought to establish that all the great scientific discoveries had been made by people of Aryan-Germanic stock (including Galileo and Newton). Lenard was deeply jealous of Einstein’s international fame, and as a militaristic, Anglophobic nationalist he found Einstein’s pacifism and internationalism abhorrent. It’s a little comical that this nasty little man felt the need to find an alternative to Einstein at all, given that he was violently (literally) opposed to relativity and a staunch believer in the aether. In virtually all respects Lenard fits the profile of the scientific crank (bitter, jealous, socially inadequate, feeling excluded), and he offers a stark (that’s a pun) reminder that a Nobel prize is no guarantee even of scientific wisdom, let alone any other sort. So there we are: all those crank citations of the hapless Hasenöhrl – this is a popular device of the devotees of Viktor Schauberger, the Austrian forest warden whose bizarre ideas about water and vortices led him to be conscripted by the Nazis to make a ‘secret weapon’ – have their basis in Nazi ‘Aryan physics’.

Friday, June 17, 2011

Quantum life

I have a feature in this week’s Nature on quantum biology, and more specifically, on the phenomenon of quantum coherence in photosynthesis. Inevitably, lots of material from the draft had to be cut, and it was a shame not to be able to make the point (though I’m sure I won’t be the first to have made it) that ‘quantum biology’ properly begins with Schrödinger’s 1944 book What is Life? (Actually one can take it back still further, to Niels Bohr: see here.) Let me, though, just add here the full version of the box on Ian McEwan’s Solar, since I found it very interesting to hear from McEwan about the genesis of the scientific themes in the novel.
_______________________________________________________________________________

The fact is, no one understands in detail how plants work, though they pretend they do… How your average leaf transfers energy from one molecular system to another is nothing short of a miracle… Quantum coherence is key to the efficiency, you see, with the system sampling all the energy pathways at once. And the way nanotechnology is heading, we could copy this with the right materials… Quantum coherence in photosynthesis is nothing new, but now we know where to look and what to look at.

These words are lifted not from a talk by any of the leaders in this nascent field but from the pages of Solar, a 2010 novel by the British writer Ian McEwan. A keen observer of science, who has previously scattered it through his novels Enduring Love and Saturday and has spoken passionately about the dangers of global warming, McEwan likes to do his homework. Solar describes the tragicomic exploits of quantum physicist, Nobel laureate and philanderer Michael Beard as he misappropriates an idea to develop a solar-driven method to split water into its elements. The key, as the young researcher who came up with the notion explains, is quantum coherence.

“I wanted to give him a technology still on the lab bench”, says McEwan. He came across the research of Graham Fleming (whose group reported quantum coherence in photosynthesis) in Nature or Science (he forgets which, but looks regularly at both), and decided that this was what he needed. After ‘rooting around’, he felt there was justification for supposing that a bright postdoc might have had the idea in 2000. It remained to fit that in with Beard’s supposed work in quantum physics. This task was performed with the help of Cambridge physicist Graham Mitchison, who ‘reverse-engineered’ Beard’s Nobel citation, which appears in Solar’s appendix: “Beard’s theory revealed that the events that take place when radiation interacts with matter propagate coherently over a large scale compared to the size of atoms.”

Wednesday, June 15, 2011

The Anglican atheist

To be honest, I already suspected that Philip Pullman, literary darling of militant atheists (no doubt to his chagrin), is more religious than me, a feeble weak-tea religious apologist. But it is nice to have that confirmed in the New Statesman. Actually, ‘religious’ is not the right word, since Pullman is indeed (like me) an atheist. I had thought that ‘religiose’ would do it, but it does not – it means excessively and sentimentally religious, which Pullman emphatically isn’t. The word I want would mean ‘inclined to a religious sensibility’. Any candidates?

Pullman is writing in response to a request from Rowan Williams to explain what he means in calling himself a ‘Church of England atheist’. Pullman does so splendidly. Religion was clearly a formative part of his upbringing, and he considers that he cannot simply abandon that – he is attached to what Martin Rees has called the customs of his tribe, that being the C of E. But Pullman is an atheist because he sees no sign of God in the world. He admits that he can’t be sure about this, in which case he should strictly call himself an agnostic. But I’ve always been unhappy with that view of agnosticism, even though it is why Jim Lovelock considers atheism logically untenable (nobody really knows!). To me, atheism is an expression of belief, or if you like, disbelief, not a claim to have hard evidence to back it up. (I’m not sure what such evidence would even look like…)

What makes Pullman so thoughtful and unusual among atheists (and clearly this is why Rowan Williams feels an affinity with him) is that he is interested in religion: “Religion is something that human beings do and human activity is fascinating.” I agree totally, and that is one reason why I wrote Universe of Stone: I found it interesting how religious thought influenced and even motivated other modes of thought, particularly philosophical enquiry about the world. And this is what is so bleak about the view of people like Sam Harris and Harry Kroto, both of whom have essentially told me that they are utterly uninterested in why and how people are religious. They just wish people weren’t. They see religion as a collection of erroneous or unsupported beliefs about the physical world, and have no apparent interest in the human sensibilities that sometimes find expression in religious terms. This is a barren view, yes, but also a dangerous one, because it seems to instil a lack of interest in how religions arise and function in society. For Harris, it seems, there would be peace in the Middle East if there were no religion in the world. I am afraid I can find that view nothing other than childish, and it puzzles me that Richard Dawkins, who I think shares some of Pullman’s ‘in spite of himself’ attraction to religion and has a more nuanced position, is happy to keep company with such views.

Pullman is wonderfully forthright in condemning the stupidities and bigotries that exist in the Anglican Church – its sexism and no doubt (though he doesn’t mention it) its homophobia. “These demented barbarians”, he says, “driven by their single idea that God is obsessed by sex as they are themselves, are doing their best to destroy what used to be one of the great characteristics of the Church of England, namely a sort of humane liberal tolerance.” Well yes, though one might argue that this was a sadly brief phase. And of course, for the idea that God is as obsessed with sex as we are, one must ultimately go back to St Augustine, whose loathing of the body was a strong factor in his more or less single-handed erection (sorry) of original sin at the centre of the Christian faith. But according to some religious readers of Universe of Stone, I lack the religious sensibility to appreciate what Augustine and his imitators, such as Bernard of Clairvaux, were trying to express with their bigotry.

Elsewhere in the same issue of New Statesman, Terry Eagleton implies that it is wrong to harp on about such things because religion (well, Christianity) must be judged on the basis of its most sophisticated theology rather than on how it is practised. Eagleton would doubtless consider Pullman’s vision of a God who might be usurped and exiled, or gone to focus on another corner of the universe, or old and senile, theologically laughable. For God is not some bloke with a cosmic crown and a wand, wandering around the galaxies. I’m in the middle here (again?). Certainly, insisting as Harris does that you are only going to pick fights with the religious literalists who take the Bible as a set of rules and a description of cosmic history, and have never given a moment’s thought to the kind of theology Rowan Williams reads, is the easy option. But so, in a way, is insisting that religion can’t be blamed for the masses who practise a debased form of it. That would be my criticism of Karen Armstrong too, who presents a reasonable and benign, indeed even a wise view of Christianity that probably the majority of its adherents wouldn’t recognize as their own belief system. Religion must be judged by what it does, not just what it says. But the same is true, I fear, of science.

Oh dear, and you know, I was being so good in keeping silent as Sam Harris’s book was getting resoundingly trashed all over the place.

Sunday, June 12, 2011

Go with the Flow

Nicholas Lezard has always struck me as a man with the catholic but highly selective tastes (in literature if not in standards of accommodation) that distinguish the true connoisseur. Does my saying this have anything to do with the fact that he has just singled out my trilogy on pattern formation in the Guardian? How can you even think such a thing? But truly, it is gratifying to have this modest little trio of books noticed in such a manner. I can even live with the fact that Nicholas quotes a somewhat ungrammatical use of the word “prone” from Flow (he is surely literary enough to have noticed, but too gentlemanly to mention it).

Monday, June 06, 2011

Musical intelligence

In the latest issue of Nature I have interviewed the composer Eduardo Reck Miranda about his experimental soundscapes, pinned to a forthcoming performance of one of them at London’s South Bank Centre. Here’s the longer version of the exchange.
_______________________________________________

Eduardo Reck Miranda is a composer based at the University of Plymouth in England, where he heads the Interdisciplinary Centre for Computer Music Research. He studied computer science as well as music composition, and is a leading researcher in the field of artificial intelligence in music. He also worked on phonetics and phonology at the Sony Computer Science Laboratory in Paris. He is currently developing human-machine interfaces that can enable musical performance and composition for therapeutic use with people with extreme physical disability.

Miranda’s compositions combine conventional instruments with electronically manipulated sound and voice. His piece Sacra Conversazione, composed between 2000 and 2003, consists of five movements in which string ensemble pieces are combined with pre-recorded ‘artificial vocalizations’ and percussion. A newly revised version will be performed at the Queen Elizabeth Hall, London, on 9 June as part of a programme of electronic music, Electronica III. Nature spoke to him about the way his work combines music with neurology, psychology and bioacoustics.

In Sacra Conversazione you are aiming to synthesize voice-like utterances without semantic content, by using physical modelling and computer algorithms to splice sounds from different languages in physiologically plausible ways. What inspired this work?

The human voice is a wonderfully sophisticated musical instrument. But in Sacra Conversazione I focused on the non-semantic communicative power of the human voice, which is conveyed mostly by the timbre and prosody of utterances. (Prosody refers to the acoustical traits of vocal utterances characterized by their melodic contour, rhythm, speed and loudness.)

Humans seem to have evolved some sort of ‘prosodic fast lane’ for non-semantic vocal information in the auditory pathways of the brain, from the ears to regions that process emotion, such as the amygdala. There is evidence that non-semantic content of speech is processed considerably faster than semantic content. We can very often infer the emotional content and intent of utterances before we process their semantic, or linguistic, meaning. I believe that this aspect of our mind is one of the pillars of our capacity for music.

You say that some of the sounds you used would be impossible to produce physiologically, and yet retain an inherent vocal quality. Do you know why that is?

Let me begin by explaining how I began to work on this piece. I started by combining single utterances from a number of different languages – over a dozen, as diverse as Japanese, English, Spanish, Farsi, Thai and Croatian – to form hundreds of composite utterances, or ‘words’, as if I were creating the lexicon for a new artificial language. I carefully combined utterances by speakers of similar voice and gender and I used sophisticated speech-synthesis methods to synthesise these new utterances. It was a painstaking job.

I was surprised that only about 1 in 5 of these new ‘words’ sounded natural to me. The problem was in the transitions between the original utterances. For example, whereas the transition from, say, Thai utterance A to Japanese utterance B did not sound right, the transition of the former to Japanese utterance C was acceptable. I came to believe that the main reason is physiological. When we speak, our vocal mechanism needs to articulate a number of different muscles simultaneously. I suspect that even though we may be able to synthesise physiologically implausible utterances artificially, the brain would be reluctant to accept them.

Then I moved on to synthesize voice using a physical model of the vocal tract. I used a model with over 20 variables, each of which roughly represents a muscle of the vocal tract (see E. R. Miranda, Leonardo Music Journal 15, 8-16 (2005)). I found it extremely difficult to co-articulate the variables of the model to produce decent utterances, which explains why speech technology for machines is still very much reliant on splicing and smoothing methods. On the other hand, I was able to produce surreal vocalizations that, while implausible for humans to produce, retain a certain degree of coherence because of the physiological constraints embedded in the model.

Much of the research in music cognition uses the methods of neuroscience to understand the perception of music. You appear to be more or less reversing this approach, using music to try to understand processes of speech production and cognition. What makes you think this is possible?

The choice of research methodology depends on the aims of the research. The methods of cognitive neuroscience are largely aimed at proving hypotheses. One formulates a hypothesis to explain a certain aspect of cognition and then designs experiments aimed at proving it.

My research, however, is not aimed at describing how music perception works. Rather, I am interested in creating new approaches to musical composition informed by research into speech production and cognition. This requires a different methodology, which is more exploratory: do it first and reflect upon the outcomes later.

I feel that cognitive neuroscience research methods force scientists to narrow the concept of music, whereas I am looking for the opposite: my work is aimed at broadening the concept of music. I do not think the two approaches are incompatible: one could certainly inform and complement the other.

What have you learnt from your work about how we make and perceive sound?

One of the things I’ve learnt is that perception of voice – and, I suspect, auditory perception in general – seems to be very much influenced by the physiology of vocal production.

Much of your work has been concerned with the synthesis and manipulation of voice. Where does music enter into it, and why?

Metaphorically speaking, synthesis and manipulation of voice are only the cogs, nuts and bolts. Music really happens when one starts to assemble the machine. It is extremely hard to describe how I composed Sacra Conversazione, but inspiration played a big role. Creative inspiration is beyond the capability of computers, yet finding its origin is the Holy Grail of the neurosciences. How can the brain draw and execute plans on our behalf implicitly, without telling us?

What are you working on now?

Right now I am orchestrating raster plots of spiking neurons and the behaviour of artificial life models for Sound to Sea, a large-scale symphonic piece for orchestra, church organ, percussion, choir and mezzo soprano soloist. The piece was commissioned by my university, and will be premiered in 2012 at the Minster Church of St Andrew in Plymouth.

Do you feel that the evolving understanding of music cognition is opening up new possibilities in music composition?

Yes, to a limited extent. Progress will probably emerge from the reverse: new possibilities in musical composition contributing to the development of such understanding.

What do you hope audiences might feel when listening to your work? Are you trying to create an experience that is primarily aesthetic, or one that challenges listeners to think about the relationship of sound to language? Or something else?

I would say both. But my primary aim is to compose music that is interesting to listen to and catches the imagination of the audience. I would prefer my music to be appreciated as a piece of art rather than as a challenging auditory experiment. However, if the music makes people think about, say, the relationship of sound to language, I would be even happier. After all, music is not merely entertainment.

Although many would regard your work as avant-garde, do you feel part of a tradition that explores the boundaries of sound, voice and music? Arnold Schoenberg, for example, aimed to find a form of vocalization pitched between song and speech, and indeed the entire operatic form of recitative is predicated on a musical version of speech.

Absolutely. The notion of avant-garde disconnected from tradition is too naïve. If anything, to be at the forefront of something you need the stuff in the background. Interesting discoveries and innovations do not happen in a void.

Sunday, June 05, 2011

Are we all doomed?

That’s the question that New Statesman put to a range of folks, including me. My answer was truncated in the magazine, which is fair enough but somewhat gave the impression that I fully bought into Richard Gott’s Copernican principle. In fact I consider it to be an amusing as well as a thought-provoking idea, but not obviously more than what I depict it as in the second paragraph of my full answer below. So here, for what it’s worth, is the complete answer.
__________________________________________________________________________
There is a statistical answer to this. If you assume, as common sense suggests you should, that there is nothing special about us as humans, then it is unlikely we are among the first or last people ever to exist. A conservative guess at the trajectory of future population growth then implies that humanity has between 5,000 and 8 million years left. Whether that’s a sentence of doom or a reprieve is a matter of taste.

Alternatively, you might choose to say that we know absolutely nothing about our ‘specialness’ in this respect, and so this is just an argument that manufactures apparent knowledge out of ignorance. If you prefer this point of view, it forces us to confront our current apocalyptic nightmares. Will nuclear war, global warming, superbugs, or a rogue asteroid finish us off within the century? The last of these, at least, can be assigned fairly secure (and long) odds. As for the others, prediction is a mug’s game (which isn’t to say that all those who’ve played are mugs). I’d recommend enough pessimism to take seriously the tremendous challenges we face today, and enough optimism to think it’s worth the effort.
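For the record, the arithmetic behind those numbers comes from J. Richard Gott’s ‘delta-t’ argument. A minimal sketch of its simplest, time-based form, assuming (as Gott did in his original 1993 paper) that our species is roughly 200,000 years old; the population-based reasoning alluded to above lands on much the same range:

% If the present moment is randomly located within humanity's total
% lifespan T, then with 95% confidence the elapsed fraction r = t_past/T
% lies between 0.025 and 0.975. Since t_future = t_past (1 - r)/r:
\[
\frac{t_{\mathrm{past}}}{39} \;\le\; t_{\mathrm{future}} \;\le\; 39\, t_{\mathrm{past}}
\qquad \text{(95\% confidence)}
\]
% With t_past of about 2 x 10^5 years for Homo sapiens:
\[
t_{\mathrm{future}} \in \left[ \frac{2 \times 10^{5}}{39},\; 39 \times 2 \times 10^{5} \right]
\approx \left[ 5 \times 10^{3},\; 7.8 \times 10^{6} \right] \ \text{years.}
\]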

Wednesday, May 25, 2011

Steve Jones gets unnatural

I’ve just discovered a review of Unnatural in the Lancet by Steve Jones. As one might expect, he has an interesting and quite particular take on it. It’s one with which, happily, I agree.

Monday, May 23, 2011

Belated Prospect

I realise that I meant to put up earlier my May column from Prospect. Almost time for the June column now, but here goes.
________________________________________________________

The notion that God has an inordinate fondness for beetles, credited to the biologist J. B. S. Haldane, retains a whiff of solipsism. For beetles are not so unlike us: multicellular, big enough to see, and legged. But God surely favours single-celled organisms far more. Beetles and humans occupy two nearby tips on the tree of life, while single-celled life forms have two of the three fundamental branches all to themselves: bacteria and archaea, so alike that it was only in the 1970s that the latter were awarded their own branch. Archaea have a different biochemistry to bacteria – some, for instance, have a metabolism that produces methane – and they are found everywhere, including the human gut.

Our place on the ‘tree of life’ now looks like it may be even more insignificant, for a team at the University of California, Davis, working with genomics pioneer Craig Venter, claims to have found hints of a fourth major branch in the tree, again populated only by single-celled organisms. These branches, called domains, are the most basic divisions in the Linnaean system of biological classification. We share our domain, the eukaryotes (distinguished by the way their cells are structured), with plants, fungi and yet more monocellular species.

Like most things Venter is involved in, the work is controversial. But perhaps not half so controversial as Venter’s belief, expressed in a panel debate titled ‘What is life?’ in Arizona in February, that all life on Earth might not even have a common origin. “I think the tree of life is an artefact of some early scientific studies, which are not really holding up”, he said, to the alarm of fellow panellist Richard Dawkins. His suggestion that there may be merely a “bush of life” only made matters worse.

Drop in the ocean

Despite the glee of creationists, there was nothing in Venter’s speculative remark that need undermine the case for Darwinian evolution. The claim of a fourth domain is backed by a little more evidence, but remains highly tentative. The data were gathered on a now famous round-the-world cruise that Venter undertook between 2003 and 2007 on his yacht to gather genomic information about the host of unknown microorganisms in the oceans. The gene-analysing techniques that he helped to develop allow the genes of different organisms to be rapidly compared in order to identify evolutionary relationships between them. By looking at the same group of genes in two different organisms, one can deduce where in the tree of life they shared a common ancestor.

Using Venter’s data, Jonathan Eisen in California discovered that two families of genes in these marine microbes each seem to show a branch that doesn’t fit on the conventional tree of life. It’s possible that these genes might have been acquired from some unknown forms of virus (viruses are excluded from the tree altogether). The more exciting alternative is that they flag up a new domain. If so, its inhabitants would seem so far to be quite rare – a minor anomaly, like the Basque language, that has persisted quietly for billions of years. But since we are ignorant about perhaps 99 per cent of species on the planet, who knows?

Thinking big

The European Union is looking for big ideas. Really big ones. Its Flagship programme offers to fund two scientific projects to the tune of €1 bn over the next ten years. These must be “ambitious large-scale, science-driven, visionary research initiatives that aim to achieve a scientific breakthrough, provid[ing] a strong and broad basis for future technological innovation and economic exploitation in a variety of areas, as well as novel benefits for society.” In other words, they’ve got to achieve a heck of a lot, and will have truckloads of money to do so.

Six of the applications – all of them highly collaborative, international and interdisciplinary – have now been selected for a year of pilot funding, starting in May. They range from the highly technical to the borders of science fiction.

One promises to develop graphene, the carbon material that won last year’s physics Nobel prize, into a practical fabric for information technologies. Another proposes to truly figure out how the brain works; a third will integrate information technology with medicine to realise the much-advertised ‘personalized medicine’. But these things will all be pursued regardless of the Flagship scheme. More extraordinary, and therefore both more enticing and more risky, are two proposals to develop intelligent, sensitive artificial agents – characterized here as Guardian Angels or Robot Companions – that will help us individually throughout our lives. The sixth proposal (which received the highest rating) is to develop massive computer-simulation systems to model the entire ‘living Earth’, offering a ‘crisis observatory’ that will forecast global problems ranging from wars to economic meltdowns to natural disasters – the latter now all too vivid. The two initiatives to receive full funding will be selected in mid-2012 for launch in 2013.

Friday, May 20, 2011

The chief designer


I have a review of the RSC’s play Little Eagles in Nature this week. Here it is. Too late now to catch the play, I fear, but I thought it was impressive – even though Andrew Billen has some fair criticisms in the New Statesman.
____________________________________________________________________________
Little Eagles
A play by Rona Munro, directed by Roxana Silbert
Hampstead Theatre, London, until 7 May

It is a curious year of anniversaries for the former Soviet military-industrial complex. Fifty years ago the cosmonaut Yuri Gagarin became the first person in space, orbiting the world for 108 minutes in the Vostok spacecraft. And 25 years ago, Reactor 4 of the Chernobyl nuclear plant exploded and sent a cloud of radioactive debris across northern Europe.

One triumph, one failure; each has been marked independently. But while Little Eagles, Rona Munro’s play commissioned by the Royal Shakespeare Company for the Gagarin anniversary, understandably makes no mention of the disaster in Ukraine a quarter of a century later, the connections assert themselves throughout. Most obviously, both events were the fruits of the Cold War nuclear age. The rockets made by Sergei Korolyov, the chief architect of the Soviet space programme and the play’s central character, armed President Khrushchev with intercontinental ballistic missiles before they took Gagarin to the stars.

But more strikingly, we see the space programme degenerate along the same lines that have now made an exclusion zone of Chernobyl. Impossible demands from technically clueless officials and terror at the consequences of neglecting them eventually compromise the technologies fatally – most notably here in the crash of Soyuz 1 in 1967, killing cosmonaut Vladimir Komarov. Gagarin was the backup pilot for that mission, but it was clear that he was by then too valuable a trophy ever to be risked in another spaceflight. All the same, he died a year later during the routine training flight of a jet fighter.

Callous disregard for life marks Munro’s play from beginning to end. We first see Korolyov in the Siberian labour camp where he was sent during Stalin’s purge of the officer class just before the Second World War. As the Soviets developed their military rocket programme, the stupidity of condemning someone so brilliant to a virtual death sentence dawned on the regime, and he was freed to resume work several years later. During the 1950s Korolyov wrested control of the whole enterprise, becoming known as the Chief Designer.

Munro’s Korolyov seems to offer an accurate portrait of the man, if the testimony of one of his chief scientists is anything to go by: “He was a king, a strong-willed purposeful person who knew exactly what he wanted… he swore at you, but he never insulted you. The truth is, everybody loved him.” In Darrell D’Silva’s magnetic performance, you can see why: he is a swaggering, cunning, charming force of nature, playing the system only to realise his dream of reaching the stars. He clearly reciprocates the love of his ‘little eagles’, the cosmonauts chosen with an eye on the Vostok capsule’s height restrictions.

But for his leaders, rocketry was merely weaponry, or a way of demonstrating superiority over their foes in the West. Korolyov becomes a hero for beating the Americans with Sputnik, and then with Vostok. But when the thuggish, foul-mouthed Khrushchev (a terrifying Brian Doherty) is retired in 1964 in favour of the icily efficient Leonid Brezhnev, the game changes. The new leader sees no virtue in Korolyov’s dream of a Mars mission, and is worried instead that the Americans will beat them to the moon. The rushed and bungled Soyuz 1, launched after Korolyov’s death in 1966, was the result.

Out of this fascinating but chewy material, Munro has worked wonders to weave a tale that is intensely human and, aided by the impressive staging, often beautiful and moving. Gagarin’s own story is here a subplot, and not fully worked through – we start to see his sad descent into the vodka bottle, grounded as a toy of the Politburo, but not his ignominious end. There is just a little too much material here for Munro to shoehorn in. But that is the only small complaint in this satisfying and wise production.

What it becomes in the end is a grotesque inversion of The Right Stuff, Tom Wolfe’s account of the US space programme made into an exhilarating movie in 1983. Wolfe’s celebration was a fitting tribute to the courage and ingenuity that ultimately took humans to the moon, but an exposure of the other side of the coin was long overdue. There is something not just awful but also grand and awesome in the grinding resolve of the Soviets to win the space race relying on just the Chief Designer “and convicts and some university students”, as Korolyov’s doctor puts it.

Little Eagles shows us the mix of both noble and ignoble impulses in the space race that the US programme, with its Columbus rhetoric, still cannot afford to acknowledge. It recognizes the eye-watering glory of seeing the stars and the earth from beyond the atmosphere, but at the same time reveals the human spaceflight programmes as utterly a product of their tense, chest-beating times, a nationalistic black hole for dollars and roubles (and now yuan too). Crucially, it leaves the final judgement to us. “They say you changed the whole sky and everything under it”, Korolyov’s doctor (and conscience) says to him at the end. “What does that mean?”

Wednesday, May 18, 2011

The Achilles' heel of biological complexity

Here’s the pre-edited version of my latest news story for Nature. This is such an interesting issue that I plan to write a more detailed piece on it for Chemistry World soon.
_____________________________________________________________________________
The complex web of protein interactions in our cells may be masking an ever-worsening problem.

Why are we so complicated? You might imagine that we’ve evolved that way because it conveys adaptive benefits. But a new study in Nature [1] suggests that the complexity in the molecular ‘wiring’ of our genome – the way our proteins talk to each other – may be simply a side effect of a desperate attempt to stave off the destabilizing effects of random mutations on our proteins’ structure.

Ariel Fernández, who carried out the work at the University of Chicago and is now at the Mathematics Institute of Argentina in Buenos Aires, and Michael Lynch of Indiana University in Bloomington argue that complexity in the network of our protein interactions arises because our relatively small population size, compared with single-celled organisms, makes us especially vulnerable to ‘genetic drift’: changes in the gene pool due to the reproductive success of certain individuals by chance rather than by superior fitness.

Whereas natural selection tends to weed out harmful mutations in genes and their related proteins, genetic drift does not. Fernández and Lynch argue that the large number of physical interactions between our proteins – now a crucial component of how information is transmitted in our cells – compensates for the reduction in protein stability wrought by drift. But this response comes at a cost.

It might mask the accumulation of structural weaknesses in proteins to a point where the problem can no longer be contained. Then, say Fernández and Lynch, proteins might be liable to misfold spontaneously – as they do in Alzheimer’s, Parkinson’s and prion diseases, which are caused by misfolded proteins in the brain.

If so, we may be running a losing race. Genetic drift may eat away at the stability of our proteins until they are overwhelmed, leaving us a sickly species.

This would imply that Darwinian evolution isn’t necessarily benign in the long run. By finding a short-term solution to drift, it might merely be creating a time-bomb. “Species with low population are ultimately doomed by nature’s strategy of evolving complexity”, says Fernández.

The work provides “interesting and important news”, according to William Martin, a specialist in molecular evolution at the University of Düsseldorf in Germany. Martin says it shows that evolution of eukaryotes – relatively complex organisms like us, with a cellular ‘nucleus’ that houses the chromosomes – “can be substantially affected by drift.”

Drift is a bigger problem for small populations – those of multicelled eukaryotic organisms – than for large ones, because survival by chance rather than by fitness is statistically more likely for small numbers. Many random mutations in a gene, and thus in the protein made from it, will harm the protein’s resistance to unfolding: the protein’s folded-up shape becomes more apt to loosen as water molecules intrude into it. This loss of shape weakens the protein’s ability to function.

Such problems can be avoided if proteins stick loosely to one another so as to shelter the regions vulnerable to water. Fernández and Lynch say that these associations between proteins – a key feature of the cell biology of eukaryotes – may have therefore initially been a passive response to genetic drift. Over time, certain protein-protein interactions may be selected by evolution for useful functions, such as sending molecular signals across cell membranes.

Using protein structures reported in the Protein Data Bank, the two researchers verified that disruption of the interface between proteins and water, caused mostly by exposure of ‘sticky’ parts of the folded peptide chain [full disclosure: these are actually parts of the chain that hydrogen-bond to one another; exposure to water enables the water molecules to compete for the hydrogen bonding. Ariel Fernández has previously explored how such regions may be ‘wrapped’ in hydrophobic chain segments to keep water away], leads to a greater propensity for a protein to associate with others. They also showed that drift could account for this ‘poor wrapping’ of proteins.

On this view, genome complexity doesn’t offer intrinsic evolutionary advantages, but is a kind of knee-jerk response to the chance appearance of ‘needy proteins’ – which ends up exposing us to serious risks.

“I believe prions are indicators of this gambit gone too far”, says Fernández. “The proteins with the largest accumulation of structural defects are the prions, soluble proteins so poorly wrapped that they relinquish their functional fold and aggregate”. Prions cause disease by triggering the misfolding of other proteins.

“If genetic variability resulting from random drift keeps increasing, we as a species may end up facing more and more fitness catastrophes of the type that prions represent”, Fernández adds. “Perhaps the evolutionary cost of our complexity is too high a price to pay in the long run.”

However, Martin doubts that drift alone can account for the difference in complexity between prokaryotes (single-celled organisms without a cell nucleus) and eukaryotes. His previous work has indicated that bioenergetics also plays a strong role [2]. For example, says Martin, prokaryotes with small population sizes, such as symbionts, tend to degenerate rather than become complex. “Population genetics is just one aspect of the complexity issue”, he says.

References
1. Fernandez, A. & Lynch, M. Nature doi:10.1038/nature09992 (2011).
2. Lane, N. & Martin, W. Nature 467, 929-934 (2010).

Monday, May 09, 2011

Unnatural happenings

There is a smart review of Unnatural in The Age by Damon Young. I don’t just say it is smart because it is positive – he engages intelligently with the issues. This bit made me smile: “Because he's neither a religious nor scientific fundamentalist, Ball's ideas may draw flak from both.” Well, indeed.

And I recently spoke to David Lemberg about the book for a podcast on the very nice Alden Bioethics blog run out of Albany Medical Center in New York. It’s available here.

Sunday, May 08, 2011

Are scientific reputations boosted artificially?

Here’s my latest Muse for Nature News.
_________________________________________________________

Scientific reputations emerge in a collective manner. But does this guarantee that fame rests on merit?

Does everyone in science get the recognition they deserve? Well obviously, your work hasn’t been sufficiently appreciated by your peers, but what about everyone else? Yes, I know he is vastly over-rated, and it’s a mystery why she gets invited to give so many keynote lectures, but that aside – is science a meritocracy?

How would you judge? Reputation is often a word-of-mouth affair; grants, awards and prizes offer a rather more concrete measure of success. But increasingly, scientific excellence is measured by citation statistics, not least by the ubiquitous h-index [1], which seeks to quantify the impact of your total oeuvre. Do all or any of these things truly reflect the worth of one’s scientific output?
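(For the uninitiated: your h-index is the largest number h such that h of your papers have each been cited at least h times. A minimal sketch in Python, with made-up citation counts:)

def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank   # the rank-th best paper still has >= rank citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))   # 4: four papers cited at least 4 times
print(h_index([100, 2, 1]))        # 2: one blockbuster barely moves h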

Many would probably say: sort of. Most good work gets recognized eventually, and most Nobel prizes are applauded and deemed long overdue, rather than denounced as undeserved. But not always. Sometimes important work doesn’t get noticed in the author’s lifetime, and it’s a fair bet that some never comes to light at all. There’s surely an element of chance and luck in the establishment of reputations.

A new paper in PLoS ONE by Santo Fortunato of the Institute for Scientific Interchange in Turin, Italy, Dirk Helbing of ETH in Zurich, Switzerland, and coworkers aims to shed some light on the mechanism by which citations are accrued [2]. They have found that some landmark papers of Nobel laureates quite quickly give their authors a sudden boost in citation rate – and that this boost extends to the author’s earlier papers too, even if they were in unrelated areas.

For example, citations to a pivotal 1989 paper by chemistry Nobel laureate John Fenn on electrospray ionization mass spectrometry [3] took off exponentially, but also raised the citation profile of at least six of Fenn’s older papers. These peaks in citation rate stand out remarkably clearly for several laureates (some of whom have more than one peak), and might be a useful indicator both of important breakthroughs and of scientific performance.

This behaviour could seem reassuring or disturbing, depending on your inclination. On the one hand, some of these researchers were not particularly well known before they published their landmark papers – and yet the value of the work does seem to have been recognized, overcoming the rich-get-richer effect by which those already famous tend more easily to accrue more fame [4]. This boost could help innovative new ideas to take root. On the other hand, such a rise to prominence brings a new rich-get-richer effect, for it awards ‘unearned’ citations to the researcher’s other papers.

And the findings seem to imply that citations are sometimes selected not because they are necessarily the best or most appropriate but to capitalize on the prestige and presumed authority of the person cited. This further distorts a picture that already contains a rich-get-richer element among citations themselves. An earlier analysis suggested that some citations become common largely by chance, benefitting from a feedback effect in which they are chosen simply because others have chosen them before [5].

But at root, what this finding underscores is that science is a social enterprise, with all the consequent quirks and nonlinearities. That has potential advantages, but also drawbacks. In an ideal world, every researcher would reach an independent judgement about the value of a paper or a body of work, and the sum of these judgements should then reflect something fundamental about its worth.

That, however, is no longer an option, not least because there is simply too much to read – no one can hope to keep up with all that happens in their field, let alone in related ones. As a result, the scientific community must act as a collective search engine that hopefully alights on the most promising material. The question is whether this social network is harnessed efficiently, avoiding blind alleys while not overlooking gems.

No one really knows the answer to that. But some social-science studies highlight the possible consequences. For example, it seems that selections made ostensibly on merit are somewhat capricious when others’ choices are taken into account: objectively ‘good’ and ‘bad’ material still tends on average to be seen as such, but feedbacks can create a degree of randomness in what succeeds and fails [6]. Doubtless the same effects operate in the political sphere – so that democracy is a somewhat compromised meritocracy – and also in economics, which is why prices frequently deviate from their ‘fundamental’ value.

But Helbing suggests that there is probably an optimal balance between independence and group-think. A computer model of people exiting a crowded room in an emergency shows that it empties most efficiently when there is just the right amount of follow-the-crowd herding [7]. Are scientific reputations forged in this optimal regime? And if not, what would it take to engineer more wisdom into this particular crowd?
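To give a flavour of the kind of model involved, here is a toy sketch of the herding idea – my own drastic simplification, not the social-force model of ref. [7]. Agents in a smoke-filled room blend a private guess about the exit direction with the average heading of their neighbours, and a single parameter p sets the amount of herding: with p = 0 the knowledge of the informed few goes unused, while with p = 1 the crowd can lock onto a wrong consensus.

import math
import random

def escaped_fraction(p, n=60, room=20.0, informed=0.15,
                     radius=3.0, speed=0.5, steps=400, seed=1):
    """Fraction of agents reaching the exit within `steps` moves."""
    rng = random.Random(seed)
    exit_xy = (room, room / 2)
    pos = [[rng.uniform(0, room), rng.uniform(0, room)] for _ in range(n)]

    def rand_dir():
        a = rng.uniform(0, 2 * math.pi)
        return (math.cos(a), math.sin(a))

    # Private belief about the exit: correct for an informed minority
    # (flagged by None), a fixed random guess for everyone else.
    guess = [None if rng.random() < informed else rand_dir() for _ in range(n)]
    heading = [rand_dir() for _ in range(n)]
    out = [False] * n

    def unit(x, y):
        d = math.hypot(x, y) or 1.0
        return (x / d, y / d)

    for _ in range(steps):
        new_heading = []
        for i in range(n):
            if out[i]:
                new_heading.append(heading[i])
                continue
            # Individual component: head for the exit if informed,
            # otherwise follow your own (possibly wrong) fixed guess.
            if guess[i] is None:
                ind = unit(exit_xy[0] - pos[i][0], exit_xy[1] - pos[i][1])
            else:
                ind = guess[i]
            # Herding component: mean heading of agents within sight;
            # isolated agents fall back on their own guess.
            hx = hy = 0.0
            for j in range(n):
                if j != i and not out[j] and math.dist(pos[i], pos[j]) < radius:
                    hx += heading[j][0]
                    hy += heading[j][1]
            herd = unit(hx, hy) if (hx or hy) else ind
            new_heading.append(unit((1 - p) * ind[0] + p * herd[0],
                                    (1 - p) * ind[1] + p * herd[1]))
        heading = new_heading
        for i in range(n):
            if not out[i]:
                pos[i][0] = min(room, max(0.0, pos[i][0] + speed * heading[i][0]))
                pos[i][1] = min(room, max(0.0, pos[i][1] + speed * heading[i][1]))
                out[i] = math.dist(pos[i], exit_xy) < 1.0
    return sum(out) / n

for p in (0.0, 0.4, 0.8, 1.0):
    print(p, escaped_fraction(p))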

References
1. Hirsch, J. E. Proc. Natl Acad. Sci. USA 102, 16569-16572 (2005).
2. Mazloumian, A., Eom, Y.-H., Helbing, D., Lozano, S. & Fortunato, S. PLoS ONE 6(5), e18975 (2011).
3. Fenn, J. B., Mann, M., Meng, C. K., Wong, S. F. & Whitehouse, C. M., Science 246, 64-71 (1989).
4. Merton, R. K. Science 159, 56-63 (1968).
5. Simkin, M. V. & Roychowdhury, V. P. Ann. Improb. Res. 11, 24-27 (2005).
6. Salganik, M. J., Dodds, P. S. & Watts, D. J. Science 311, 854-856 (2006).
7. Helbing, D., Farkas, I. & Vicsek, T. Nature 407, 487-490 (2000).

Friday, May 06, 2011

A discourse on method

Actually (to pick up from the previous post), I’d meant to put my last Crucible column up here too. So here it is now.
__________________________________________________

What’s wrong with this claim? “Replication of results is a crucial part of the scientific method. Experimental errors come rapidly to light when researchers prove unable to reproduce the claims of others. In this way, science has a built-in mechanism for self-correction.”

The insistence on replication – as the motto of the Royal Society puts it, ‘take no one’s word for it’ (Nullius in verba) – has indeed long been one of science’s great strengths. It explains why pathological science such as cold fusion and polywater was rather quickly consigned to the dustbin while equally striking claims such as high-temperature superconductivity have entered the textbooks.

But too often this view of the ‘scientific method’ – itself a slippery concept – is regarded as a regular aspect of science in action, rather than an expression of the ideal. Rather few experiments are replicated verbatim, as it were, not least because science is too competitive and busy for anyone to spend time doing what someone else has already done. Important claims are bound to get checked as others rush to follow up on the work, but mundane stuff will probably never be tested – it will simply sink unheeded into the literature.

No one should be surprised or unduly alarmed at that – if work isn’t important enough to warrant replication, it matters little if it is flawed. And although the difficulty of publishing negative results probably hinders the correction process and favours exaggerated claims, information technologies might now offer solutions [1]. What matters more is that replication isn’t just a problem in practice; it’s a problem in theory.

The concept emerged along with experimental science itself in the late sixteenth century. Before that, experiments – when they were done at all – were typically considered not a test of your hypothesis but a demonstration that it was right. If ‘experience’ didn’t fit with theory, no one felt a compelling urge to modify the theory, not least because the world was not considered law-bound in quite the same way it is today. Even though the early experimentalists, often working outside the academic mainstream, decided they needed to filter recipes and reports by attempting to verify them before recording them as fact, the tradition of experiment-as-demonstration persisted for a long time. Many of the celebrated trials shown to the Fellows of the Royal Society were like that.

But in any case, it would be wrong to suppose that the failure of an experiment to verify a hypothesis or to replicate a prior claim should be grounds for their rejection. Robert Boyle appreciated this in his ‘Two Essays, concerning the Unsuccessfulness of Experiments’ (1661). There are many reasons, he wrote, why an experiment might not work as anticipated: the equipment might be faulty, or the reagents not fresh, for example. That was amply borne out (albeit in reverse) by the recent discovery that a crucial step (first reported in 1918) in the alleged total synthesis of quinine by Robert Woodward and William Doering in 1944 depended on a catalyst being aged [2]. The very fact that it took 90 years to test that step is itself a comment on how replication really functions in science.

The problem of replication was highlighted by Boyle’s own famous experiments with the air pump. By raising the possibility of a vacuum, these studies posed a serious challenge to the prevailing Aristotelian philosophy. So the stakes were very high. But because of the imperfections of the apparatus, it was no easy matter even for Boyle to reproduce some of his findings. And because the air pump was a hugely sophisticated piece of scientific kit – it has been dubbed the cyclotron of its age – it was very expensive, so very few others were in a position to try the experiments. Even if they did, the designs differed, so one couldn’t be sure that the same procedures were being followed.3 That essentially no replications could be attempted without first-hand experience of Boyle’s instrument reflects today’s situation, in which hardly any complicated experimental procedure can be replicated reliably without direct contact between the labs involved. Even then, the only way to calibrate your apparatus may be against that whose results you’re trying to test.

Which raises the question: if your attempted replication ‘fails’, where is the error? Have you neglected something? Or was the original claim wrong? Or was it right for the wrong reasons? The possibilities are endless. Indeed, the philosophers Pierre Duhem and Willard Van Orman Quine independently pointed out that, from a strictly logical perspective, no hypothesis can ever be conclusively tested, nor a failed replication traced to its source, because the problem is under-determined: discrepancies can never be logically localized to a particular cause. Science makes progress regardless, and what is perhaps surprising is that the ‘scientific method’ remains so effective when it is in truth ramshackle, makeshift and logically shaky.

These issues seem more pertinent than ever. Who, for example, is going to check the findings from the Large Hadron Collider?

References
1. J. Schooler, Nature 470, 437 (2011).
2. A. C. Smith & R. M. Williams, Angew. Chem. Int. Edn 47, 1736–1740 (2008).
3. S. Shapin & S. Schaffer, Leviathan and the Air-Pump (Princeton University Press, Princeton, 1985).

Thursday, May 05, 2011

Science and religion - even chemists aren't immune

Oh, it’s risky, I know. But I offer the following mild observations about the recent Templeton Prize in my Crucible column in Chemistry World. When I wrote it, on the day of the announcement, I didn’t realise quite what a lot of shrieking the award would elicit. There is, by the way, a sentence in the final para, omitted in the published version, that makes the meaning of my final sentence a little more apparent. Herein lies a tale.
_______________________________________________________________________

The astronomer Martin Rees, until recently President of the Royal Society, seems nonchalant, even bemused, about receiving this year’s Templeton Prize for work at the interface of science and religion. Not only has he seemingly little idea what to do with the £1m prize money, but he confesses to knowing little about the Templeton Foundation beyond what appeared in a recent Nature article [1], and to being unsure why he was selected.

According to the Pennsylvania-based Templeton Foundation, set up by the late billionaire John Templeton to develop links between science and spirituality, the prize is awarded to people who have “expanded our vision of human purpose and ultimate reality”. In giving it to Rees, the foundation says that his “profound insights on the cosmos have provoked vital questions that speak to humanity’s highest hopes and worst fears”.

One thing Rees must have known, however, is that his award would be controversial. Some scientists see it as an attempt to buy respectability for the Foundation through the names of illustrious scientists. In its early days the award went to religious figures such as Billy Graham and Mother Teresa. But Rees joins a list of winners that now includes cosmologists George Ellis and John Barrow, physicists Paul Davies, Freeman Dyson and Charles Townes, and biologist Francisco Ayala. This reflects the Foundation’s energetic determination over the past two decades to focus on interactions between science and religion – topics that some sceptics say have no shared ground. Chemistry Nobel laureate Harry Kroto, one of those who has condemned Rees’ acceptance of the prize, suggests that to qualify you just have to be an eminent scientist prepared to be nice – or at least not rude – about religion.

Rees is no stranger to this disputed territory. He presided over the sacking of the Royal Society’s director of education Michael Reiss, an ordained Church of England minister, after remarks that were construed as defending the teaching of creationism in schools. Rees also drew fire for the inclusion of a service at St Paul’s Cathedral, led by the Archbishop of Canterbury, in the Royal Society’s 350th anniversary celebrations last year. Rees has said publicly that he has no religious beliefs but occasionally attends church services and recognizes their social role. He takes the pragmatic view that, in battling the anti-scientific extremes of religious fundamentalism, he’d rather have the Archbishop and other moderates on his side. For others, the distance between evidence-based science and faith-based religion is too great to make common cause.

Chemistry might seem too remote from the Templeton Foundation’s goals for the issue of whether to accept its ‘tainted’ money ever to arise. Historically, of course, many chemists were profoundly religious. For Robert Boyle, investigating all aspects of nature was a holy duty that deepened our reverence for God’s works. Michael Faraday had to juggle his science and his profound non-conformist Christian beliefs.

Yet surely chemical research can’t directly speak to religious questions today? Don’t be so sure. In 2005 I took part in a Templeton-funded symposium called “Water of Life: Counterfactual Chemistry and Fine-Tuning in Biochemistry”. While I won’t pretend to have been indifferent to the venue on the shore of Lake Como, I would have declined were it not for the stellar list of other delegates. The meeting was motivated by Harvard biologist Lawrence Henderson’s 1913 book The Fitness of the Environment, in which he suggested that water is ‘biophilic’, with physical and chemical properties remarkably fine-tuned to support life. The question put to the gathering was: are they really?

Among the many contributions, Ruth Lynden-Bell and Pablo Debenedetti described computer simulations of ‘counterfactual water’ in which the properties of the molecule were slightly altered to see if it retained its unique liquid properties [2]. For example, the tetrahedral hydrogen-bonded motif remains, in distorted form, if the H-O-H bond angle is changed from 109.5 degrees to 90 degrees, but the structure becomes more like that of a ‘normal’ liquid as the hydrogen-bond strength is decreased. This notion of a ‘modified chemistry’ may thus probe how far the chemical world is contingent and how far it is inevitable. Of course, one could say that there is no contingency: things are as they are and not otherwise. But fine-tuning arguments in cosmology confront the mystery of why the laws of nature seem geared to enable our existence. If there’s plenty of slack, there’s no mystery to explain. Counterfactual scenarios can also explore the supposed uniqueness of water as life’s solvent, irrespective of any metaphysical implications.
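To give a concrete flavour of the approach, here is a minimal sketch in Python – my own illustration, not the authors’ simulation code – of how a rigid three-site water model might be re-parameterized for such a counterfactual. The SPC/E-like bond length and charges are assumptions made for the example; the point is simply where the counterfactual ‘knob’ sits, and that narrowing the H-O-H angle changes the molecular geometry and dipole on which the hydrogen-bonded network depends.

```python
# Minimal sketch of a 'counterfactual water' parameterization.
# Bond length (0.1 nm) and charges (+0.4238 e on each H) are
# SPC/E-like assumptions; this is illustrative only, not the
# actual models used by Lynden-Bell and Debenedetti.
import numpy as np

def water_geometry(bond_angle_deg=109.47, oh_length=0.1):
    """Place O at the origin and the two H atoms symmetrically
    about the z-axis at the requested H-O-H angle (lengths in nm)."""
    half = np.radians(bond_angle_deg) / 2.0
    h1 = oh_length * np.array([np.sin(half), 0.0, np.cos(half)])
    h2 = oh_length * np.array([-np.sin(half), 0.0, np.cos(half)])
    return np.zeros(3), h1, h2

def compare_dipoles(q_h=0.4238):
    """Net point-charge dipole (in e.nm) for 'real' tetrahedral
    water versus the 90-degree counterfactual molecule."""
    for angle in (109.47, 90.0):
        o, h1, h2 = water_geometry(angle)
        mu = q_h * (h1 + h2) + (-2.0 * q_h) * o
        print(f"H-O-H = {angle:6.2f} deg -> |mu| = {np.linalg.norm(mu):.4f} e.nm")

compare_dipoles()
```

In a full study these altered molecules are run through molecular dynamics to see whether the tetrahedral network survives; the snippet shows only how the counterfactual is set up.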

If you want to know what the meeting concluded, you’ll have to read the book [3]. It has only recently been published, in part because some university presses seemed nervous of the association with the Templeton Foundation. Wary at the outset of an underlying agenda, I saw no evidence of it at the meeting: it was good science all the way. Sceptics are right to ask questions about the Foundation’s motives, but they need to be open-minded about the answers. When such scepticism stands in the way of solid science, we are all the losers.

1. M. M. Waldrop, Nature 470, 323 (2011).
2. R. M. Lynden-Bell & P. G. Debenedetti, J. Phys. Chem. B 109, 6527 (2005).
3. R. M. Lynden-Bell, S. Conway Morris, J. D. Barrow, J. L. Finney & C. L. Harper (eds). Water and Life: the Unique Properties of H2O. CRC Press, Boca Raton, 2010.

Sunday, April 24, 2011

The Information - a review

I have a review of James Gleick's new book in the Observer today. Here it is. He does an enviable job, on the whole - this is better than Chaos.

________________________________________________________________

The Information: A History, a Theory, a Flood
James Gleick
Fourth Estate, 2011
ISBN 978-0-00-722573-6

Too much information: the complaint du jour, but also toujours. Alexander Pope quipped that the printing press, “a scourge for the sins of the learned”, would lead to “a deluge of Authors [that] covered the land”. Robert Burton, the Oxford anatomist of melancholy, confessed in 1621 that he was drowning in books, pamphlets, news and opinions. All the twittering and tweeting today, the blogs and wikis and apparent determination to archive even the most ephemeral and trivial thought has, as James Gleick observes in this magisterial survey, something of the Borgesian about it. Nothing is forgotten; the world imprints itself on the informatosphere at a scale approaching 1:1, each moment of reality creating an indelible replica.

But do we gain from it, or was T. S. Eliot right to say that “all our knowledge brings us nearer to our ignorance”? Gleick is refreshingly upbeat. In the face of the information flood that David Foster Wallace called Total Noise, he says, “we veer from elation to dismay and back”. But he is confident that we can navigate it, challenging the view of techno-philosopher Jean-Pierre Dupuy that “ours is a world about which we pretend to have more and more information but which seems to us increasingly devoid of meaning”. Yet this relationship between information and meaning is the crux of the matter, and it is one that Gleick juggles but does not quite get to grips with. I’ll come back to that.

This is not, however, a book that merely charts the rising tide of information, from the invention of writing to the age of Google. To grasp what information truly means – to explain why it is shaping up as a unifying principle of science – he has to embrace linguistics, logic, telecommunications, codes, computing, mathematics, philosophy, cosmology, quantum theory and genetics. He must call as witnesses not only Charles Babbage, Alan Turing and Kurt Gödel, but also Borges, Poe and Lewis Carroll. There are few writers who could accomplish this with such panache and authority. Gleick, whose Chaos in 1987 helped to kick-start the era of modern popular science and who has also written acclaimed biographies of Richard Feynman and Isaac Newton, is one.

At the heart of the story is Claude Shannon, whose eclectic interests defy categorization today and were positively bizarre in the mid twentieth century. Having written a visionary but ignored doctoral thesis on genetics, Shannon wound up in the labs of the Bell Telephone Company, where electrical logic circuitry was being invented. There he worked (like Turing, whom he met in 1943) on code-breaking during the Second World War. And in 1948 he published in Bell’s obscure house journal a theory of how to measure information – not just in a phone-line signal but in a random number, a book, a genome. Shannon’s information theory looms over everything that followed.

Shannon’s real point was that information is a physical entity, like energy or matter. The implications of this are profound. For one thing, manipulating information in a computer then has a minimum energy cost set by the laws of physics. This is what rescues the second law of thermodynamics (entropy or disorder always increases) from the hypothetical ‘demon’ invoked by James Clerk Maxwell in the nineteenth century to undermine it. By observing the behaviour of individual molecules, Maxwell’s demon seemed able to engineer a ‘forbidden’ decrease in entropy. But that doesn’t undo the sacrosanct second law, since processing the necessary information (more precisely, having to discard some of it – forgetting is the hard part) incurs a compensating entropic toll. In effect the demon instead turns information into energy, something demonstrated last year by a group of Japanese physicists – sadly too late for Gleick.
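The minimum cost in question is Landauer’s bound: erasing one bit of information must dissipate at least kT ln 2 of heat. A back-of-envelope calculation – my own sketch, not taken from the book – shows just how small that toll is at everyday temperatures.

```python
# Landauer's bound: the minimum heat dissipated when one bit is
# erased, E = k_B * T * ln 2 - the 'compensating entropic toll'
# that defeats Maxwell's demon.
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_bound(temperature_k=300.0, bits=1.0):
    """Minimum erasure energy in joules at the given temperature."""
    return K_B * temperature_k * math.log(2) * bits

print(f"One bit at 300 K: {landauer_bound():.3e} J")          # ~2.9e-21 J
print(f"One gigabyte:     {landauer_bound(bits=8e9):.3e} J")  # ~2.3e-11 J
```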

In quantum physics the role of information goes even deeper: at the level of fundamental particles, every event can be considered a transaction in information, and our familiar classical world emerges from the quantum by the process of erasing information. In quantum terms, Gleick says, “the universe is computing its own destiny.” By this point we are a long way from cuneiform and Morse code, though he makes the path commendably clear.

Moreover, Gleick does so with tremendous verve, which is mostly exhilarating, sometimes exhausting and occasionally coy. He is bracingly ready to use technical terms without definition – nonlinear, thermodynamic equilibrium – rightly refusing any infantilizing hand-holding. What impresses most is how he delves beneath the surface narrative to pull out the conceptual core. Written language, he explains, did not simply permit us to make thoughts permanent – it changed thinking itself, enabling abstraction and logical reasoning. Language is a negotiation whose currency is information. A child learning to read is not simply turning letters into words but is learning how to exploit (often recklessly) the redundancies in the system. She reads ‘this’ as ‘that’ not because she confuses the phonemes but because she knows that only a few of them may follow ‘th’, and it’s less effort to guess. Read the whole word, we tell her, but we don’t do it ourselves. That’s why we fail to spot typos: we’ve got the message already. Language elaborates to no informational purpose; the ‘u’ after ‘q’ could be ditched wholesale. Text messaging now lays bare this redundancy: we dnt nd hlf of wht we wrt.

Shannon’s take on language is disconcerting. From the outset he was determined to divorce information from meaning, making it equivalent to something like surprise or unpredictability. That’s why a random string of letters is more information-rich, in Shannon’s sense, than a coherent sentence. There is a definite value in his measure, not just in computing but in linguistics. Yet to broach information in the colloquial sense, somewhere meaning must be admitted back into all the statistics and correlations.
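That claim is easy to check for oneself. The sketch below – illustrative only, and using first-order letter frequencies, a crude stand-in for Shannon’s full measure – computes the entropy per character of a snatch of plain English and of a random string of the same length.

```python
# First-order Shannon entropy in bits per character: a random string
# of letters scores higher (is more 'information-rich' in Shannon's
# sense) than English prose, whose regularities make it predictable.
import math
import random
import string
from collections import Counter

def entropy_per_char(text):
    """Unigram entropy only; real English is more predictable still
    once longer-range context is taken into account."""
    counts = Counter(text)
    n = len(text)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

english = ("language elaborates to no informational purpose and we do "
           "not need half of what we write because its regularities let "
           "a reader guess most letters before seeing them")
alphabet = string.ascii_lowercase + " "
gibberish = "".join(random.choice(alphabet) for _ in range(len(english)))

print(f"English: {entropy_per_char(english):.2f} bits/char")
print(f"Random:  {entropy_per_char(gibberish):.2f} bits/char")
```

The gap widens further once context is counted: the redundancy that would let us ditch the ‘u’ after ‘q’ is precisely what a simple letter count cannot see.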

Gleick acknowledges the tension between information as Shannon’s permutation of bits and information as agent of meaning, but a reconciliation eludes him. When he explains the gene with reference to a Beethoven sonata, he says that the music resides neither in acoustic waves nor in annotations on paper: ‘the music is the information’. But where and what is that information? Shannon might say, and Gleick implies, that it is in the pattern of notes that Beethoven conceived. But that’s wrong. The notes become music only in the mind of a listener primed with the cognitive, statistical and cultural apparatus to weave them into coherent and emotive forms. This means there is no bounded information set that is the music – it is different for every listener (and every performance), sometimes subtly, sometimes profoundly. The same for literature.

Lest you imagine that this applies only to information impinging on human cognition, it is equally true of the gene. Gleick too readily accepts the standard trope that genes – the abstract symbolic sequence – contain the information needed to build an organism. That information is highly incomplete. Genes don’t need to supply it all, because they act in a molecular milieu that fills in the gaps. It’s not that the music, or the gene, needs the right context to deliver its message – without that context, there is no message, no music, no gene. An information theory that considers just the signal and neglects the receiver is limited, even misleading.

It is the only serious complaint about what is otherwise a deeply impressive and rather beautiful book.

Tuesday, April 19, 2011

Universal blues

I have written a news story and a leader for Nature on a new paper examining the notion that there are universal grammatical principles in language. Here they are, in that order. But I must say that, much as the results reported by Dunn et al. chime with my instinctive resistance to universal theories of anything, the comments I’ve received on the paper make me a little sceptical that it does what it claims. Time will tell, I suppose.
__________________________________________________________________________

Linguists debate whether languages share universal grammatical features.

Languages evolve in their own idiosyncratic fashion, rather than being governed by universal rules. That’s the conclusion of a new study which compares the grammar of several hundred languages in the light of their evolutionary trees.

Psychologist Russell Gray of the University of Auckland in New Zealand and his coworkers examine the relationships between traits such as the ordering of verbs and nouns in four families representing more than 2,000 languages, and find no sign of any persistent, universal guiding principles [1].

It’s already proving to be a controversial claim. “There is nothing in the paper that brings into question the views that they are arguing against”, says linguist Matthew Dryer of the State University of New York at Buffalo.

There are thought to be around 7,000 languages in the world, which show tremendous diversity in structure. Some have complex ways of making composite words (such as Finnish), others have simple, short and invariant words (such as Mandarin Chinese). Some put verbs first in a sentence, others in the middle and others at the end.

But many linguists suspect there is some universal logic behind this bewildering variety – common cognitive factors that underpin grammatical structures. Two of the most prominent ‘universalist’ theories of language have been proposed by American linguists Noam Chomsky and Joseph Greenberg.

Chomsky tried to account for the astonishing rapidity with which children assimilate complicated and subtle grammatical rules by supposing that we are all born with an innate capacity for language, presumably housed in brain modules specialized for language. He suggested that this makes children able to generalize the grammatical principles of their native tongue from a small set of ‘generative rules’.

Chomsky supposed that languages change and evolve when children reset the parameters of these rules. A single change should induce switches in several related traits in the language.

Greenberg took a more empirical approach, enumerating many observed shared traits between languages. Many of these concerned word order. For example, a conditional clause normally precedes its conclusion: “if he’s right, he’ll be famous.” Greenberg argued that these universals reflect fundamental biases, probably for cognitive reasons. “The Greenbergian word order universals have the strongest claim to empirical validity of any universalist claim about language”, says Gray’s coauthor Michael Dunn of the Max Planck Institute for Psycholinguistics at Nijmegen.

Both of these ideas have implications for the family tree of language evolution. In Chomsky’s case, as languages evolve, certain features should co-vary because they are products of the same underlying parameter. Greenberg’s idea also implies co-dependencies between certain grammatical features of a language but not others. For example, the word order for verb-subject pairs shouldn’t depend on that for object-verb pairs.

To test these predictions, Gray and colleagues used the methods of phylogenetic analysis developed for evolutionary biology to reconstruct four family trees representative of more than 2,000 languages: Austronesian, Indo-European, Bantu and Uto-Aztecan. For each family they looked at eight word-order features and used statistical methods to assess whether each pair of features had evolved independently or in a correlated way. This allowed them to deduce the webs of co-dependence among the features and compare them to what the theories of Chomsky and Greenberg predict.
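To make the logic of that test concrete, here is a toy sketch – invented tree, rates and tip data, and a fixed-rate likelihood comparison rather than the Bayesian machinery the team actually used – of how two binary word-order traits can be scored against an ‘independent’ versus a ‘dependent’ model of evolution along a phylogeny.

```python
# Toy version of a correlated-evolution test: score tip data for two
# binary traits under an independent vs. a dependent Markov model,
# using Felsenstein's pruning algorithm on a small invented tree.
import numpy as np
from scipy.linalg import expm

# Joint states for traits (A, B): 0=(0,0), 1=(0,1), 2=(1,0), 3=(1,1)
def rate_matrix(a01, a10, b01, b10, dependent_boost=1.0):
    """4-state rate matrix. With dependent_boost=1 the traits evolve
    independently; >1 makes trait B switch on faster once A has."""
    Q = np.zeros((4, 4))
    Q[0, 2] = Q[1, 3] = a01          # A: 0 -> 1
    Q[2, 0] = Q[3, 1] = a10          # A: 1 -> 0
    Q[0, 1] = b01                    # B: 0 -> 1 while A = 0
    Q[2, 3] = b01 * dependent_boost  # B: 0 -> 1 while A = 1
    Q[1, 0] = Q[3, 2] = b10          # B: 1 -> 0
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return Q

# Toy tree ((L1,L2),(L3,L4)) with unit branch lengths; joint tip states.
TIPS = {"L1": 0, "L2": 0, "L3": 3, "L4": 3}

def log_likelihood(Q):
    P = expm(Q * 1.0)                 # transition probabilities per branch
    def leaf(state):
        v = np.zeros(4); v[state] = 1.0
        return v
    def node(left, right):            # pruning: combine child partials
        return (P @ left) * (P @ right)
    clade1 = node(leaf(TIPS["L1"]), leaf(TIPS["L2"]))
    clade2 = node(leaf(TIPS["L3"]), leaf(TIPS["L4"]))
    return np.log(node(clade1, clade2).mean())  # uniform root prior

indep = log_likelihood(rate_matrix(0.5, 0.5, 0.5, 0.5))
dep = log_likelihood(rate_matrix(0.5, 0.5, 0.1, 0.5, dependent_boost=20))
print(f"log L, independent model: {indep:.3f}")
print(f"log L, dependent model:   {dep:.3f}")
# If the dependent model scores markedly higher, correlated evolution
# fits these tips better than independent change does.
```

In the real analysis the contest is run across many candidate trees for each family rather than one fixed toy tree; the principle, though, is this same comparison of models.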

They found that neither of these two models matched the evidence. Not only do the co-dependencies differ from those expected from Greenberg’s word-order ‘universals’, but they are different for each family. In other words, the deep grammatical structure of each family is different from that of each of the others: each family has evolved its own rules, so there is no reason to suppose that these are governed by universal cognitive factors.

What’s more, even when a particular co-dependency of traits was shared by two families, the researchers could show that it came about in different ways for each – that the commonality may be coincidental. They conclude that the languages – at least in their word-order grammar – have been shaped in culture-specific ways and not by universals.

Other experts express some scepticism about the new results, albeit for rather different reasons. Martin Haspelmath at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, says he agrees with the conclusions but that “for specialists they are nothing new”. “It’s long been known that grammatical properties and dependencies are lineage-specific”, he says.

Meanwhile, Dryer, who has previously presented evidence that supports Greenberg’s position, is not persuaded that the results make a convincing case. “There are over a hundred language families that the authors ignore but which provide strong support for the views they are arguing against”, he says. There is no reason to expect a consistent pattern of word-order relationships within families, he adds, regardless of whether they are shaped by universal constraints.

Haspelmath feels it may be more valuable to look at what languages share in common than at how they (inevitably) differ. Even if cultural evolution is the primary factor in shaping them, he says, “it would be very hard to claim that cognitive biases play no role at all.”

“Comparative linguists have focused on the universals and cognitive explanations because they wanted to explain something”, he adds. “Saying that cultural evolution is at play basically means that we can’t explain why languages are the way they are – which is largely true, but it’s not the whole truth.”

1. Dunn, M., Greenhill, S. J., Levinson, S. C. & Gray, R. D. Nature doi:10.1038/nature09923 (2011).
*********************

A search for universals has characterized the scientific enterprise at least since Aristotle. In some ways, this quest for common principles underlying the diversity of the universe defines science: without it there is no order and pattern, but merely as many explanations as there are things in the world. Newton’s laws of motion, the oxygen theory of combustion and Darwinian evolution each united a host of different phenomena in a single explicatory framework.

One view takes this impulse for unification to its extreme: to find a Theory of Everything that offers a single generative equation for all we see. It is becoming ever less clear, however, that such a theory – if it exists – can be considered a simplification, given the proliferation of dimensions and universes it might entail. Nonetheless, unification of sorts remains a major goal.

This tendency in the natural sciences has long been evident in the social sciences too. Darwinism seems to offer justification: if all humans share common origins, it seems reasonable to suppose that cultural diversity must also be traceable to more constrained origins. Just as the bewildering variety of courtship rituals might all be considered forms of sexual selection, so perhaps the world’s languages, music, social and religious customs and even history could be governed by universal features. Filtering out what is contingent and unique from what is shared in common might enable us to understand how complex cultural behaviours arose and what ultimately guides them in evolutionary or cognitive terms.

That, at least, is the hope. But a comparative study of linguistic traits by Dunn et al. (online publication doi:10.1038/nature09923) supplies a sharp reality check on efforts to find universality in the global spectrum of languages. The most famous of these was initiated by Noam Chomsky, who postulated that humans are born with an innate language-acquisition capacity – a brain module or modules specialized for language – that dictates a universal grammar. Just a few generative rules are then sufficient to unfold the entire fundamental structure of a language, which is why children can learn it so quickly. Languages would diversify through changes to the ‘parameter settings’ of the generative rules.

In contrast, Joseph Greenberg took a more empirical approach to universality, identifying a long list of traits (particularly in word order) shared by many languages, which are considered to represent biases that result from cognitive constraints. Chomsky’s and Greenberg’s are not by any means the only theories on the table for how languages evolve, but they make the strongest predictions about universals. Dunn et al. have put them to the test by using phylogenetic methods to examine the four family trees that between them represent over 2,000 languages. A generative grammar should show patterns of language change that are independent of the family tree or the pathway tracked through it, while Greenbergian universality predicts strong co-dependencies between particular types of word-order relations (and not others). Neither of these patterns is borne out by the analysis, suggesting that the structures of the languages are lineage-specific and not governed by universals.

This doesn’t mean that cognitive constraints are irrelevant, nor that there are no other universals dictated by communication efficiency. It’s surely inevitable that cognition sets limits on, say, word length or the total number of phonemes. But such ‘universals’ seem likely to be relatively trivial features of languages, just as may be the case for putative universals in music and other aspects of culture. We should perhaps learn the lesson of Darwinism: a ‘universal’ mechanism of adaptation says little of interest, in itself, about how a particular feature got to be the way it is, or how it works. This truth has dawned on physicists too: universal equations are all very well, but particular solutions are what the world actually consists of, and those particulars are generally the result of contingent history.

Newton's Rainbow


Here’s the pre-edited text of my review for Nature of a new play about Isaac Newton, which I saw recently at the Royal Society and enjoyed more than I thought I might. But you’ll only catch it now if you’re in Toronto or Boston, I believe.
__________________________________________________________

Let Newton Be!
A play by Craig Baxter, directed by Patrick Morris and produced by the Menagerie Theatre Company
Touring until 30 April

Isaac Newton perplexes and fascinates not just because he was a transitional figure in the history of science but because he was a very odd man. The difficulty has been in distinguishing those two things. The temptation to portray him as a man torn between science and religion, or flitting from mathematical physics to superstitious alchemy, is the modern legacy of a tradition of positivistic science history that today’s historians are still working to dispel. In his passion for none of these things was Newton particularly unusual in his time. What made him odd was not so much what he believed but how he lived: isolated from intimate relationships, sensitive to every slight, at the same time vain and yet so indifferent to adulation that he could barely be persuaded to write the Principia.

All that, quite apart from his towering status in science, naturally makes him an attractive figure for biographers. Among those who have grappled with his story are the leading science historians Richard Westfall (whose 1980 biography is still the standard reference) and A. Rupert Hall, and the science writer James Gleick. It also supplies fertile soil for more inventive explorations of his life, of which Let Newton Be! is one. This new play by Craig Baxter was commissioned by the Faraday Institute for Science and Religion at Cambridge University, and has benefited from the input of, among others, Rob Iliffe, head of the Newton Project, which is putting all of Newton's writings online, and the astrophysicist John Barrow.

To conjure up this mercurial man, Baxter elected to use almost entirely Newton’s own words, or those of some of his contemporaries, such as his rival and critic Gottfried Leibniz. Moreover, Newton – the only character in the piece, apart from brief appearances by the likes of Leibniz and Edmond Halley – is played here simultaneously by three actors, one of them a woman. It sounds like a gimmick, but isn’t: the device allows us to see different facets of the man, though happily not as reductively conflicting voices.

The play’s structure is largely chronological. We see Newton as a boy in the family home at Woolsthorpe, an undergraduate at Trinity College Cambridge, then as Lucasian professor of mathematics (appointed in 1669 at the age of 27). We see him take his reflecting telescope to the Royal Society and, stung by what he perceived as the antagonism of the London virtuosi, retreat into religious exegesis, until Halley cajoles him to write down his proof of elliptical planetary orbits – a treatise that expands into the Principia. Feted and now somewhat pompous, he becomes Warden of the Royal Mint and President of the Royal Society.

The original material is well used. There is a reconstruction of Newton’s famous prism experiment (or roughly so – his experimentum crucis of around 1666, when he reconstituted white light from the spectrum, is notoriously difficult to reproduce). But we also get Newton’s sometimes surreal, obsessive lists of sins committed (“I lied about a Louse”), and the only time we are lectured to is in one of the real lectures on optics that Newton was obliged to provide in the Lucasian chair, at which he proves to be hilariously inept.

In such ways, the play delivers an impressive quantity of Newton’s thought. In particular, it sets out to emphasize just how much of his work was religious – as Iliffe confirmed in a post-performance panel discussion, Newton considered this his central mission, with the seminal scientific works on light, motion and gravity being almost tossed off before breakfast. The natural theology that motivated much of the science – the idea that by exploring the natural world we deepen our appreciation of God’s wisdom and power – was the conventional position of most seventeenth-century scientists, most notably Robert Boyle, and was their defence against accusations of materialistic atheism. Newton was anything but a materialist: that his gravity was an occult force acting at a distance was precisely what Leibniz considered wrong with it, while for Newton this force was actively God’s doing.

But I’m not sure how much would be comprehensible to anyone coming new to Newton. It is characteristic of the play’s intelligence that we don’t get any nonsense with falling apples, but neither are we really told what distinguished Newton’s ideas on gravity from the many that went before (especially Descartes’ vortices and the belief that it is a form of magnetism, both of which ideas Newton shared at some point). His work on the additive colour mixing of light is beautifully illustrated but never actually explained; likewise his laws of motion. Moreover, the play lacks a real narrative – there is no tension, nothing to be resolved, for in the end it is a biography, however inventively told. But that was, after all, its brief, and it is probably a more enjoyable hour and a half with Newton than anyone ever had in his lifetime.

Tuesday, April 12, 2011

The Naked Oceans

The Naked Scientists is (are?) running a series of podcasts called The Naked Oceans. The latest one has an interview with me about Ernst Haeckel and his images of radiolarians.

Monday, April 11, 2011

Chaos promotes prejudice


Here’s my latest news story for Nature, pre-editing.
_______________________________________________________________

A disorderly environment makes people more inclined to put others in boxes.

Messy surroundings make us more apt to stereotype people, according to a new study by a pair of social scientists in the Netherlands.

Diederik Stapel and Siegwart Lindenberg of Tilburg University asked subjects to complete questionnaires that probed their judgements about certain social groups while in everyday environments (a street and a railway station) that were either messy or clean and orderly. They found small but significant and systematic differences in the responses: there was more stereotyping in the messy settings than in the orderly ones.

The researchers say that social discrimination could therefore be counteracted by diagnosing and removing signs of disorder and decay in public environments. They report their findings in Science today [1].

Psychologist David Schneider of Rice University in Houston, Texas, a specialist in stereotyping, calls this “an excellent piece of work which speaks not only to a possibly important environmental cause, but also supports a major potential theoretical explanation for some forms of prejudice.”

The influence of environment on behaviour has long been suspected by social scientists and criminologists. The ‘broken windows’ hypothesis of sociologists James Q. Wilson and George Kelling supposes that people are more likely to commit criminal and anti-social acts when they see evidence of others having done so – for example, in public places with signs of decay and neglect.

This idea motivated the famous zero-tolerance policy on graffiti on the New York subway in the late 1980s (on which Kelling acted as a consultant), which is credited with a role in improving the safety of the network. Lindenberg and his coworkers conducted experiments in Dutch urban settings in 2008 that supported an influence of the surroundings on people’s readiness to act unlawfully or antisocially [2].

But could evidence of social decay, even at the mild level of littering, also affect our unconscious discriminatory attitudes towards other people? To test that possibility, Stapel and Lindenberg devised a variety of disorderly environments in which to test these attitudes.

In their questionnaires, participants were asked, for example, to rate Muslims, homosexuals and Dutch people according to various positive, negative and unrelated stereotypes; the respective stereotypes for homosexuals were (creative, sweet), (strange, feminine) and (impatient, intelligent).

In one experiment, passers-by in the busy Utrecht railway station were asked to participate by coming to sit in a row of chairs, for the reward of a candy bar or an apple. The researchers took advantage of a cleaners’ strike, which had left the station dirty and litter-strewn. They then returned to do the same testing after the strike was over and the station was clean.

As well as probing these responses, the experiment examined unconscious negative responses to race. All the participants were white, while one place at the end of the row of chairs was already taken by a black or white Dutch person. In the messy station, people sat on average further from the black person than the white one, while in the clean station there was no statistical difference in these distances.
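For readers curious how such a seating comparison is assessed, here is a hedged sketch – the numbers are invented for illustration, not taken from the study – of a simple permutation test on the difference in mean seating distance between the two conditions.

```python
# Permutation test on the difference of mean seating distances.
# The data below are made up purely to illustrate the procedure.
import random

def perm_test(a, b, n_iter=10000, seed=1):
    """Two-sided permutation test on the difference of means: how often
    does a random relabelling beat the observed difference?"""
    random.seed(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    hits = 0
    for _ in range(n_iter):
        random.shuffle(pooled)
        x, y = pooled[:len(a)], pooled[len(a):]
        if abs(sum(x) / len(x) - sum(y) / len(y)) >= observed:
            hits += 1
    return hits / n_iter

# Hypothetical distances (in chair positions) in the messy station:
dist_from_black = [3, 4, 4, 5, 3, 4, 5, 4]
dist_from_white = [2, 3, 2, 3, 3, 2, 3, 2]
print(f"p = {perm_test(dist_from_black, dist_from_white):.4f}")
```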

In another experiment, the researchers aimed to eliminate differences in cleanliness of the environments while preserving the disorder. The participants were approached on a street in an affluent Dutch city. But in one case the street had been made more disorderly by the removal of a few paving slabs and the addition of a badly parked car and an ‘abandoned’ bicycle. Again, disorder boosted stereotyping.

Stapel and Lindenberg suspect that stereotyping may be an attempt to compensate for mess: it could be, they say, “a way to cope with chaos, a mental cleaning device” that partitions other people neatly into predefined categories.

In support of that idea, they showed participants pictures of disorderly and orderly situations, such as a bookcase with dishevelled and regularly stacked books, before asking them to complete both the stereotyping survey and another one that probed their perceived need for structure, including questions such as “I do not like situations that are uncertain”. Both stereotyping and the need for structure were higher in people viewing the disorderly pictures.

Sociologist Robert Sampson of Harvard University says that the study is “clever and well done”, but is cautious about how to interpret the results. “Disorder is not necessarily chaotic”, he says, “and is subject to different social meanings in ongoing or non-manipulated environments. There are considerable subjective variations within the same residential environment on how disorder is rated – the social context matters.”

Therefore, Sampson says, “once we get out of the lab or temporarily induced settings and consider the everyday contexts in which people live and interact, we cannot simply assume that interventions to clean up disorder will have invariant effects.” 

Schneider agrees that the implications of the work for public policy are not yet clear. “One question we’d need to answer is how long these kinds of effects last”, he says. “There is a possibility that people may quickly adapt to disorder. So I would be very wary of concluding that people who live in unclean and disordered areas are more prejudiced because of that.” Stapel acknowledges this: “people who constantly live in disorder get used to it and will not show the effects we find. Disorder in our definition is something that is unexpected.”

References
1. D. A. Stapel & S. Lindenberg, Science 332, 251-253 (2011).
2. K. Keizer, S. Lindenberg & L. Steg, Science 322, 1681 (2008).