Monday, September 19, 2011

Talking to Yo-Yo Ma

I recently interviewed Yo-Yo Ma for the Financial Times. The article is now published, but here is the original version. It goes without saying that this was an honour to do, but it turned out also to be a huge pleasure, as Yo-Yo is so engaging, unaffected and thoughtful – it’s easy to see why the UN made him a Messenger of Peace. From what I’ve heard so far, his new CD is pretty fabulous too. Forgive me if I’m sounding too much the fanboy here – he’s just a very nice bloke.
____________________________________________________________________

When Yo-Yo Ma was asked to identify a private passion for this article, Sony sent back the message ‘Yo-Yo Ma is interested in everything.’ I’d have happily discussed Everything with Ma, and initially he seems determined to make that happen. His first question to me (were we doing this thing the right way round?) is about the latest technology for splitting water to make hydrogen as a fuel, a trick borrowed from photosynthesis in plants. This turns out to be an offshoot of his interest in water and rivers, a topic that could evidently have engaged us throughout the short time I was allotted in Ma’s frantic schedule during his visit to London for a performance at the Proms.

In view of all this, it comes as no surprise to discover that Ma’s fascination with neuroscience – this is what I’m allegedly there to discuss – is not a hobby like jam-making or long-distance running, but is merely one of the many facets of what begins to emerge as his grand vision: to foster a creative society. One might even be forgiven for suspecting that the music-making for which Ma enjoys world renown happens almost by chance to be the avenue through which he pursues this goal. It could equally, perhaps, have been anthropology, which Ma studied at university.

As Ma began playing the cello at age 4, however, it seems unlikely that his musical career left much to chance. A child prodigy, he performed before Presidents Eisenhower and Kennedy and was conducted by Leonard Bernstein. He then studied at the renowned Juilliard School in New York City before completing a liberal arts degree at Harvard. What followed is the kind of glittering career that all too readily becomes a numbing litany of awards and accolades, and that has seen Ma described as ‘one of the most recognizable classical musicians on the planet’. He was the natural choice to take Pablo Casals’ part when the concert for Kennedy’s inauguration, at which Casals performed, was restaged for its 50th anniversary last January.

So far, so conventionally awe-inspiring. But the stereotype of the stratospheric virtuoso doesn’t last a moment once Ma appears, fresh from premiering Graham Fitkin’s intense Cello Concerto – written for him – at the Royal Albert Hall the night before. Isn’t he too young, for starters? (56 in October, since you ask.) And instead of gravitas or world-weariness, he has a boyish enthusiasm for, well, everything.

But I shouldn’t be surprised that Ma is no remote creature of the highbrow concert circuit. He has appeared on Sesame Street and (in cartoon form) on The Simpsons, he can be heard on the soundtrack to Crouching Tiger, Hidden Dragon, and he is a UN Messenger of Peace. He has performed with Sting and Bobby McFerrin, and his latest CD is a bluegrass collaboration, The Goat Rodeo Sessions.

I’m not meant to be talking about any of that, though – the topic on the table is neuroscience. We’ll get there, but there’s a broader agenda: to unite the notorious Two Cultures of C. P. Snow. Ma has been reading Richard Holmes’ The Age of Wonder, which describes how Keats, Coleridge and Shelley shared with Humphry Davy and William Herschel a passion for the marvels and mysteries of the natural world. “This is what happened in the 1800s”, Ma says. “Maybe we’re in another point in time where we actually need both specialists and generalists. The word amateur used to be a positive term. Nowadays if you’re an amateur, you’re a dilettante, you’re not serious.”

I profess my own exhilaration at Holmes’ demand that we should be impatient with “the old, rigid debates and boundaries” – that we need “a wider, more generous, more imaginative” way of writing about science that can locate it within the rest of culture. “That’s exactly what I’d hope for,” Ma agrees. “I love quoting [Nobel laureate physicist Richard] Feynman, who said that nature has a much greater imagination than humans, but she guards her secrets jealously. So his job as a scientist is to unlock some of those secrets, and interpret them for you. That’s what music tries to do. If I’m trying to describe something that someone else wrote, I have to get into that world and then I have to find a way to ensure that what I think is there lives in you also.”

Perhaps neuroscience can create bridges because the brain is the crucible within which art, science and all of culture are forged, presumably with the same tools. This is the seat of the creativity that we channel into discovery and expression: looking out and looking in. For Ma, the work of neuroscientist Antonio Damasio on homeostasis expresses something of where these creative impulses come from. Homeostasis is the tendency of all living things to maintain the internal conditions necessary for their continuation, and Damasio considers all non-conscious aspects of this self-preservation to be forms of emotion, whether they are basic reflexes, immune responses or ‘emotions-proper’ such as joy. “Life forms are always looking for homeostasis, equilibrium”, says Ma. So behaviours that promote it are responding to a need. “That made a lot of sense to me.”

His experiences among the Kalahari bushmen of southern Africa, whom he visited for a documentary 15 years after he had studied them in his anthropology courses, convinced him that music can perform that function in many ways. “They do these trance dances that are for spiritual and religious purposes, it’s for medicine, it’s their art form, it’s everything. That matches all I’ve learnt about what music should be or could do.” It’s there because it fulfils fundamental needs. “Sound is one of our basic senses, so everyone uses sound to its maximum advantage: to promote things that lead to homeostasis.”

But how does that magic work? I suggest that music exploits our instincts to make sense of our environment: to look for patterns, to develop hypotheses about what we perceive. It’s setting us puzzles. Ma is fascinated by how the brain’s plasticity ensures we have the capacity to solve them, to convert sensory data into a viable model of the world. “A newborn sees everything essentially upside down. But its brain is constantly interpreting what is being received, and at some stage it will just decide to turn all the information around.”

I mention Damasio’s insistence, in Descartes’ Error (1994), on the somatic component of the brain – that we are not Descartes’ disembodied mental homunculus directing a physical body, but that instead the self cannot be meaningfully imagined without being embedded in a body. This must be resonant for a musician? He concurs and suggests that the role of tactility in our mental well-being is under-appreciated. “That’s our largest organ.”

Ma sees this separation of intellect and mechanism, of the self and the body, as pernicious. “We’ve based so much of our educational system on it. At the music conservatory there’s a focus on the plumbing, not psychology. It’s about the engineering of sound, how to play accurately. But then going to university, the music professor would say ‘you can play very well, but why do you want to do it?’ Music is powered by ideas. If you don’t have clarity of ideas, you’re just communicating sheer sound.”

And this is about much more than intellectual transmission. It has to be packaged with emotion. “Passion is one great force that unleashes creativity, because if you’re passionate about something, then you’re more willing to take risks.” According to Damasio, there’s a deeper function of passion too. He challenged decades if not centuries of preconception about rationality by showing that emotion plays a vital part in it. Far from being a distraction, emotion is often the lubricant of good decision-making: when it is lacking, as in people with certain kinds of brain damage, the ability to make sound choices – or any choices at all – can evaporate.

He doesn’t want to stop. With his manager giving a gentle yet determined signal that our time is up, he exhorts me to ask one more question. So – how can music be made central to education, rather than an option at the periphery? His response makes the big vision a little more concrete: it is about finding ways to communicate ideas in a manner that yields the greatest harvest of creativity. “There is nothing more important today than to find a way to be knowledge-based creative societies. My job as a performer is to make sure that whatever happens in a performance lives in somebody else, that it’s memorable. It’s great if a person buys the CD or a ticket to the concert, but it’s only when the ideas are passed on that your job is done. If you forget tomorrow what you heard yesterday, there’s really not much point in you having been there – or me, for that matter. Now, isn’t that the purpose of education too? That’s when I realised that education and culture are the same. Once something is memorable, it’s living and you’re using it. That to me is the foundation of a creative society.”

Friday, September 16, 2011

Why chemistry is good for you

This is really just for the record (mine): I have a book review in Chemistry World here. It’s a challenge to get to the nub of a big, multi-author volume in 300 words or so…

Tuesday, September 13, 2011

Here is the political weather forecast

Here’s the pre-edited version of my latest story for Nature’s online news, with added bonus boxes. There was far too much interesting stuff in this paper to cram into 700 words or so. And more on the way from others working in this field: watch this space.
_____________________________________________________________________

Signs of impending social and political change may lie hidden in a sea of data.

You could have foreseen the Arab spring if only you’d been paying enough attention to the news. That’s the claim of a new study which shows how ‘data mining’ of news reportage can reveal the possibility of future crises well before they happen.

Computer scientist Kalev Leetaru at the University of Illinois at Urbana-Champaign has trawled through a vast collection of open-access news reporting and examined the ‘tone’ of the news about Tunisia, Egypt and Libya, where long-established dictatorial political leaders have been deposed by public uprisings in the so-called Arab spring. In all cases, he says, there was a clear, steady trend towards a negative tone for about a decade before the revolts [1].

While this doesn’t predict either the course or the timing of the events during last spring and summer, Leetaru argues that it provided a clear indicator of an impending crisis. “I strongly doubt we'll ever get to the point where we can say ‘at 5:05PM next July 2nd there will be a riot of 20 people at such and such street corner’”, he says. “Rather, the value of this class of work lies in warning of changing moods and environments, and increased vulnerability to a sudden shock”.

Erez Lieberman Aiden of Harvard University, who has explored the mining of digitized literary texts for linguistic and historical trends, agrees. “Leetaru’s work is interesting not so much because it makes predictions, but because it points to the power and the opportunity latent in new ways of analyzing large-scale news databases”, he says.

Political scientist Thomas Chadefaux of the Swiss Federal Institute of Technology (ETH) in Zurich, Switzerland, calls the paper “a welcome addition to a field – political science – that has cared very little about finding early warning signals for war, or making predictions at all.”

Long-term trends can be subtle and hard to spot by subjective and partial monitoring of the news. But they might presage crises more reliably than does a focus on the short term. For example, while there was talk during the spring of the possibility of similar public uprisings in Saudi Arabia, reflected in a rather negative tone in the news there during March 2011, the long-term data showed that spell to be no worse than other fluctuations in recent years – there was no worsening trend. On this basis, one would have predicted the failure of the Arab spring to unseat the Saudi rulers.

“If we think of the vast array of digital information around us today as an ocean of information, up to this point we've largely been studying the surface”, says Leetaru. “The idea behind this work is to poke our heads beneath the water for a moment to show that there's a vast world down there that we've been missing”. He thinks that automated news analysis that looks for information about mood, tone or spatial references could supply something like a political weather forecast, “offering updated assessments every few minutes for the entire planet and pointing out emerging patterns that might warrant further investigation.”

Leetaru has used the immense collection of news reports in the Summary of World Broadcasts (SWB), a monitoring service set up by the British intelligence service just before World War II to assess world opinion. The SWB now includes newspaper articles, television and radio broadcasts, periodicals and a variety of other online resources from over 130 countries.

Previous efforts to extract ‘buried’ information from vast literary resources – an approach dubbed ‘culturomics’ – have tended to focus on quantifying the occurrence of certain key words [2]. In contrast, Leetaru conducted ‘sentiment mining’ of the sources by assessing their positive or negative tone, looking for evaluation words such as ‘terrible’, ‘awful’ or ‘good’. He used computer algorithms to convert these data trawls into a single parameter that quantified the tone of the news, normalized so that the long-term average value is zero.
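
To make that recipe concrete, here is a minimal illustrative sketch in Python. The word lists, the articles and the scoring scheme are hypothetical placeholders of my own – this is not Leetaru’s lexicon or algorithm – but it shows the basic steps: score each article by its balance of evaluation words, average the scores by month, then normalize the series so its long-term mean is zero.

```python
from statistics import mean, pstdev

# Toy evaluation-word lists (placeholders, not Leetaru's actual lexicon)
POSITIVE = {"good", "stable", "peaceful", "calm"}
NEGATIVE = {"terrible", "awful", "riot", "crisis"}

def article_tone(text):
    """Balance of positive vs negative evaluation words, per word of text."""
    words = [w.strip(".,;:!?").lower() for w in text.split()]
    if not words:
        return 0.0
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / len(words)

def tone_series(monthly_articles):
    """Average tone per month, normalized so the long-term mean is zero."""
    raw = [mean(article_tone(a) for a in articles) for articles in monthly_articles]
    mu = mean(raw)
    sigma = pstdev(raw) or 1.0        # avoid division by zero for a flat series
    return [(t - mu) / sigma for t in raw]

# Hypothetical usage: a steady slide into negative territory shows up as an
# increasingly negative normalized tone.
months = [
    ["The economy is stable and the streets are calm."],
    ["Prices rise but the mood is still peaceful."],
    ["Crisis deepens; a riot breaks out; conditions are awful."],
]
print(tone_series(months))
```

In a real system the lexicon would contain thousands of graded terms and the averaging would run over millions of articles, but the shape of the output – a single tone curve per country, centred on zero – is the same.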

For Egypt, the tone in early 2011 fell to a negative value seen only once before in the past three decades. What’s more, at that same time the tone of the coverage specifically mentioning the (now deposed) president Hosni Mubarak reached its lowest ever level for his almost 30-year rule. Similar falls to highly unusual low points were found for Tunisia and Libya.

This didn’t in itself predict when those crises would happen – it seems likely, for example, that rocketing food prices helped to trigger the Arab spring revolts [3]. But it might reveal when a region or state is ripe for unrest. Dirk Helbing, a specialist in modeling of social systems at ETH, compares it to the case of traffic flow: computer models can help to spot when traffic is in a potentially unstable state, but the actual triggers for jams may be random and unpredictable.

By the same token, it remains to be seen whether this approach can spot signs of trouble in advance, rather than retrospectively finding them foreshadowed in the media. “It is obviously much easier to find precursory signs when you know where to look than to do it blindly”, says Chadefaux.

But if news mining does turn out to offer a crystal ball, “the question is what kinds of use we’ll make of this information”, says Helbing. “Will governments act in a responsive way to avoid crises, say by improving people’s living conditions, or will they use it to police dissatisfied people in a preventative way?”

References
1. Leetaru, K. First Monday 16(9) (online only), 5 September 2011. Available here.
2. Michel, J. B. et al., Science 331, 176-182 (2011).
3. Lagi, M., Bertrand, K. Z. & Bar-Yam, Y. http://arxiv.org/abs/1108.2455 (2011).

Read all about it

Where is Osama bin Laden?

Leetaru also looked at whether the sources of news reports might provide information about the spatial location of events. He analysed all media references to Osama bin Laden since 1979 to look for co-occurrences of geographical places. Between bin Laden’s rise to media prominence in the 1990s and his killing in 2011, the most common associations were with northern Pakistan, within a 200-km radius of the cities of Islamabad and Peshawar – the region in which he was finally found.
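
As a rough sketch of that co-occurrence idea – again with made-up articles and a toy gazetteer, not Leetaru’s actual data or code – the recipe is: collect the articles that mention the target, tally which place names appear in the same articles, and rank the places by count.

```python
from collections import Counter

# Toy gazetteer of place names (a real system would use a full geographic database)
GAZETTEER = {"islamabad", "peshawar", "kabul", "london", "riyadh"}

def co_occurring_places(articles, target):
    """Count how often each gazetteer place appears in articles that mention the target."""
    counts = Counter()
    for text in articles:
        lower = text.lower()
        if target in lower:
            counts.update(place for place in GAZETTEER if place in lower)
    return counts.most_common()

# Hypothetical usage
articles = [
    "Sources link bin Laden to a compound between Islamabad and Peshawar.",
    "Analysts in London review reported bin Laden sightings near Peshawar.",
    "A summit in Riyadh discussed regional security.",
]
print(co_occurring_places(articles, "bin laden"))
# 'peshawar' tops the list with two co-occurrences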

How the world looks from here

News sources are often criticized for being too parochial. That turns out to be a valid complaint, at least for US news: Leetaru found that even the New York Times, a relatively ‘internationalist’ newspaper, constantly refers reports from other countries back to the US. “Nearly every foreign location it covers is mentioned alongside a US city, usually Washington DC”, he says.

By looking for such co-references to specific cities or other geographical landmarks throughout the world, Leetaru extracted a map of how the global news links nations into ‘world civilizations’. For SWB these correspond largely to the recognized geographical affiliations: Australasia, the Middle East (including much of northeast Africa), the Americas and so forth. But there are anomalies: Spain is linked to South America, and France and Portugal to southern Africa, showing that the imprint of imperial history is still felt in the world. Strikingly, however, the ‘map’ derived from the New York Times alone is rather different: on this measure, the US has its own distinctive view of the world. That matters, says Leetaru. “Understanding how a given country groups the rest of the world gives you critical information on how to approach that country in terms of shaping policy”, he says.

Here’s some more bad news

If you’ve been feeling that the news is always bad these days, you’ve got a point. It has been getting steadily worse for the past 30 years, according to the trend in the tone of the entire data set in the SWB since 1979.

Wednesday, September 07, 2011

Welcome to the Futur(ICT)

I am at a meeting in Italy that is thrashing out the proposal for the FuturICT project, a leading contender for the EU’s Flagship Initiatives scheme, which seeks to provide huge funding over ten years for ‘transformative’ initiatives in information and communications technologies. FuturICT is to my mind the most potentially transformative of all the shortlisted candidates, but we’ll see what happens. In the meantime, it is very exciting to see what is being planned. It is in the light of this initiative, and after discussion with its leader Dirk Helbing, that I put down the thoughts below a week or two ago. It seems that events like this one are now almost daily adding to the arguments for why we need something like FuturICT. But Lord knows if we can wait ten years for it.

_____________________________________________________________________

This must be said first: no one really understands what is going on. It’s generally acknowledged that Twitter didn’t cause the Arab Spring – but what did? Labour has been right to avoid pinning the riots on the government cuts – but then, what do we pin them on? Every economist has an explanation for the financial crisis, each differing to a greater or lesser degree from the others. But it happened somehow.

Can you imagine these things happening two decades ago? The riots in Croydon, Beckenham and Bromley were not like those in Toxteth and Brixton in the 1980s, not least because of their location, but also because there was no forewarning: the police were justified in saying that they’d had no precedent to prepare them. For all that it looks superficially like the collapse of the Soviet Union, the Arab Spring too was something new. And if the financial crisis were like the Great Depression, we’d know what to do. It was partly about risk hidden so deeply as to cause paralytic fear; it was also about instruments too complicated for users to understand, and about legal and financial systems labyrinthine enough to permit deception, stupidity and knavishness to thrive.

What is qualitatively new about these events is the crucial role of interdependence and interaction and the almost instantaneous transmission of information through social, economic and political networks. That novelty does not by itself explain why they happened, much less help us to identify solutions or ameliorate the unwelcome consequences. But it points to something perhaps even more important: the world has changed. And it is not going to change back. The poverty of the political response to the riots is understandable, because, although they do not like to admit it, politicians are faced with uncharted territory and they do not know how to navigate it. This is a dangerous situation, because it means that the pressure to be seen to be responding may force political leaders to improvise solutions that fail entirely to acknowledge the nature of the problem and therefore stand a good chance of making things worse. Harsh sentencing and housing evictions might conceivably reassure the public that there are strong hands at the helm, but there is no credible, objective evidence that they will prevent recurrences in the future. That we can one moment celebrate the power of social-network technologies to instil change and mobilize crowd movements, and the next demand that these technologies be shut down in times of civil unrest shows that we have no idea how to manage these things, or even what to think about them except that somehow they matter.

In retrospect, the significance of the terrorist attacks almost exactly ten years ago now looks to be that they marked the advent of this new world order – one of decentralization, of fears and dangers so diffuse and distributed as to be impossible to vanquish and perhaps even to define. And what was the response on that occasion? Old-fashioned declarations of war between nations, which are now revealed to be not just ineffective but disastrous. The assassination of Hitler would probably have halted a war; with the assassination of Osama bin Laden, there was no war to stop.

This is why politicians and decision makers need to learn a new language, or they will simply lose the capacity to govern, to manage economies, to create stable societies, to keep the world worth living in. Here are some of the words they must come to terms with: complexity, network theory, phase transitions, critical points, emergence, agent-based modelling, social ecology. And they will need to learn the key lesson of the management of complex, interacting systems: solutions cannot be imposed, but must be coaxed out of the dynamic system itself. Earthquakes may never be exactly predictable, but it is possible that they can be managed by mapping out in great detail the accumulating strains that give rise to them, and applying local nudges and shocks to relieve the stresses and minimize the danger and costs of crises. There is no political discourse yet that permits analogous answers, not least because they require investment in such things as unglamorous data-gathering techniques and long-term research that carries no guarantee of quick fixes.

Aspirations towards a science of society date back to the Enlightenment. But not only have they never been fulfilled, they now need to recognize that they must describe a different society from the one in which Adam Smith or even John Maynard Keynes lived. There is some good news in all this: we now have the conceptual and computational tools to create a science that can model the state we’re in – not just politically and socially but environmentally, for no answer to the global crises of environment and ecosystems will work if it is not embedded in a credible socioeconomic context. We cannot, in all honesty, yet know how much any of this will help. Perhaps some ills of the world will always elude rational prediction or solution. But if we don’t even try, it is hard to avoid concluding that we’ll deserve all we get.

Thursday, September 01, 2011

In search of a third culture

Here is my latest Crucible column for Chemistry World. I’ve also written a Chem World blog about the ASCI exhibition, which shows some of the images.
__________________________________________________________________________

Sciart – the clumsy label commonly attached to collaborations between scientists and artists – means many things to many people. Some, like the physicist Arthur I. Miller, who has written about the conceptual connections between relativity and cubism, see it as a way of bridging the Two Cultures divide that might ultimately produce a ‘third culture’ in which art and science are not separate endeavours. Others, such as the biologist Lewis Wolpert, are sceptical, regarding it as little more than a fad that allows artists to misappropriate scientific ideas and from which science stands to gain nothing.

Recently the French physicist Jean-Marc Lévy-Leblond, who has a deep appreciation of contemporary arts, launched a stinging attack on the whole genre in a book pointedly titled La science (n’)e(s)t (pas) l’art (Editions Hermann, Paris, 2010), in which he criticizes the naivety of most sciart discourse and argues that the most artists and scientists can realistically hope for are platonic ‘brief encounters’. Although not intended as a riposte, a forthcoming book called Survival of the Beautiful (Bloomsbury, 2011) by musician and animal-song specialist David Rothenberg certainly offers one. Rothenberg argues that we should take seriously the possibility that there is an aesthetic sense at play in nature – for example in the way peahens and female bowerbirds react to the elaborate displays of males – and that this can speak to our own artistic sensibilities. He asserts that, despite Wolpert’s claim, it is possible to find cases of science having benefitted from art. And he devotes considerable space to a discussion of chemists’ visual language, instincts and aesthetics by Roald Hoffmann, who developed these themes in his book The Same and Not the Same (Columbia University Press, 1995).

The arguments will doubtless continue. Lévy-Leblond is right to ridicule some claims of finding ‘art in science’ – he calls fractal imagery ‘techno-kitsch’, and is critical of scientists’ attachment to an old-fashioned notion of beauty, which for chemists seems archaically tied up with Platonic ideas about symmetry. And it’s true that some of the most successful interactions of art and science, such as Michael Frayn’s play Copenhagen, did not arise from any self-conscious process of enticing artists and scientists into the same room. But if we let a thousand flowers bloom, some are likely to smell good.

That’s evident from a new exhibition of digital art organized by the New York-based Art & Science Collaborations, Inc. (ASCI), a veteran of the sciart (or, as they prefer, art-science) field, formed by artist Cynthia Pannucci in 1988 to ‘raise public awareness about artists and scientists using science and technology to explore new forms of creative expression’. This is ASCI’s thirteenth annual digital-art competition, and this year it celebrates the International Year of Chemistry. ‘Digital2011: The Alchemy of Change’ called for submissions from artists and scientists to ‘show us their vision of this deeply fundamental, magical enabler of life called chemistry’. A selection of the entries will be displayed at the New York Hall of Science from September to next February.

The results are nothing if not eclectic. All of the images have been created by digital manipulation – sometimes of photographic images, sometimes purely computer-generated. The occasionally colourful, ‘decorative’ quality of some would doubtless be dismissed by Lévy-Leblond as more ‘digital kitsch’. Others place gleaming ball-and-stick models of molecules against images of supernovae and other cosmic phenomena in a way that puts me in mind of the graphical abstracts of JACS and Angewandte Chemie – not by any means unpleasant, but hardly inspiring art. Still others explore the artificially enhanced textures and colours of crystals, flows, precipitates, decay – images that have intrigued many artists in the past, and which raise again Rothenberg’s question of whether nature ‘is more beautiful than it needs to be’.

I enjoyed most of all the images that seem to push up against the limits of what is knowable, expressible and visualizable in chemistry. The alchemists felt those limits keenly and resorted to allegory and metaphor, as Andrew Krasnow does with his bizarre ‘bartender’ mixing up the coloured oxidation states of vanadium. Robbin Juris uses cellular automata to conjure up collages of ‘i(c)onic bonds’ that look simultaneously like pages from a quantum-theory textbook and cubist abstractions. David Hylton’s pearlescent forms put me in mind of the surrealist Roberto Matta, who was himself interested in quantum physics. And Julie Newdoll’s schematic ‘molecules’, developed in association with biochemist Robert Stroud, are like strange symbolic machines whose workings remain obscure.

It’s a shame to have to single out just these few. The exhibition should offer a thought-provoking view of how chemistry looks from outside, and why it is still a rich stimulus to the imagination.

Friday, August 26, 2011

Dude looks like a lady

When I was talking recently in Barcelona at a music conference, I was interviewed by a Spanish newspaper, which has now published the piece. From what I can tell (courtesy of Google Translate), it is I think best described as a loose improvisation based around our conversation. And perhaps the better for it, who knows? But I like best one of the reader comments:
“Language is very intellectual, good photo, but looks like a woman, perhaps the combination has made the smart person.”
In my experience, however, that is a little unfair to Spanish women.

Wednesday, August 24, 2011

Did Einstein discover E=mc2?

A lot of people have strong opinions about that, as is clear from the comments that have followed on from my article of this title for Physics World. (I particularly liked "For an objective account, see Albert Einstein: The Incorrigible Plagiarist." Yup, sounds like an objective book to me.) The piece is here, but the pre-edited version is below. There's a fair bit more that I'd have liked to explore here - it's a deeply interesting issue. The biggest revelation for me was not so much seeing that there were several well-founded precursors for the equivalence of mass and energy, but finding that this equivalence seems to have virtually nothing to do with special relativity. Tony Rothman said to me that "I've long maintained that the conventional history of science, as presented in the media, textbooks and by the stories scientists tell themselves is basically a collection of fairy tales." I'd concur with that.
________________________________________________________________

Who discovered that E=mc2? It’s not as easy a question as you might think. Scientists ranging from James Clerk Maxwell and Max von Laue to a string of now obscure early twentieth-century physicists have been proposed as the true discoverers of the mass-energy equivalence now popularly credited to Einstein’s theory of special relativity. These claims have spawned headlines accusing Einstein of plagiarism, but many are spurious or barely supported. Yet two physicists have now shown that Einstein’s famous formula does have a complicated and somewhat ambiguous genesis – which has little to do with relativity.

One of the more plausible precursors to E=mc2 is attributed to Fritz Hasenöhrl, a physics professor at the University of Vienna. In a 1904 paper, Hasenöhrl clearly wrote down the equation E = (3/8)mc2. Where did he get it from, and why is the constant of proportionality wrong? Stephen Boughn of Haverford College in Pennsylvania and Tony Rothman of Princeton University examine this question in a preprint.

“I had run across Hasenöhrl's name a number of times with no real explanation as to what he did”, Rothman explains. “One of my old professors, E.C.G. Sudarshan, once remarked that he gave Hasenöhrl credit for mass-energy equivalence. So around Christmas time last year, I said to Steve, ‘why don't we spend a couple hours after lunch one day looking at Hasenöhrl's papers and see what he did wrong?’ Well, two hours turned into eight months, because the problem ended up being extremely difficult.”

Hasenöhrl’s name has a certain notoriety now, as he is commonly invoked by anti-Einstein cranks. His reputation as the man who really discovered E=mc2 owes much to the efforts of the anti-Semitic and pro-Nazi physics Nobel laureate Philipp Lenard, who sought to separate Einstein’s name from the theory of relativity so that it was not seen as a product of ‘Jewish science’.

Yet all this does Hasenöhrl a disservice. He was Ludwig Boltzmann’s student and successor at Vienna, and was lauded by Erwin Schrödinger among others. “Hasenöhrl was probably the leading Austrian physicist of his day”, says Rothman. He might have achieved much more if he had not been killed in the First World War.

The relationship of energy and mass was already widely discussed by the time Hasenöhrl considered the matter. Henri Poincaré had stated that electromagnetic radiation had a momentum and thus effectively a mass according to E=mc2. German physicist Max Abraham argued that a moving electron interacts with its own field E0 to acquire an apparent mass given by E0 = (3/4)mc2. All this was based on classical electrodynamics, assuming an ether theory. “Hasenöhrl, Poincaré, Abraham and others suggested that there must be an inertial mass associated with electromagnetic energy, even though they may have disagreed on the constant of proportionality”, says Boughn.

Robert Crease, a philosopher and historian of science at Stony Brook University in New York, agrees. “Historians often say that, had there been no Einstein, the community would have converged on special relativity shortly”, he says. “Events were pushing them kicking and screaming in that direction.” Boughn and Rothman’s work, he says, shows that Hasenöhrl was among those headed this way.

Hasenöhrl approached the problem by asking whether a black body emitting radiation changes in mass when it is moving relative to the observer. In 1904 he calculated that the cavity radiation contributes an apparent mass related to the radiant energy by E = (3/8)mc2; the following year he corrected the prefactor to 3/4.
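
For ease of comparison, the proposed mass-energy relations mentioned above can be gathered in one place, with the prefactors as given in the text:

```latex
\begin{aligned}
\text{Poincar\'e, Einstein:}\quad & E = mc^2 \\
\text{Abraham (electron self-energy):}\quad & E_0 = \tfrac{3}{4}mc^2 \\
\text{Hasen\"ohrl (1904):}\quad & E = \tfrac{3}{8}mc^2 \\
\text{Hasen\"ohrl (1905 correction):}\quad & E = \tfrac{3}{4}mc^2
\end{aligned}
```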

However, no-one has properly studied Hasenöhrl’s derivation to understand his reasoning or why the prefactor is wrong, say Boughn and Rothman. That’s not easy, they admit. “The papers are by today’s standards presented in a cumbersome manner and are not free of error. The greatest hindrance is that they are written from an obsolete world view, which can only confuse the reader steeped in relativistic physics.” Even Enrico Fermi apparently did not bother to read Hasenöhrl’s papers properly before concluding wrongly that the discrepant 3/4 prefactor was due to the electron self-energy identified by Abraham.

“What Hasenöhrl really missed in his calculation was the idea that if the radiators in his cavity are emitting radiation, they must be losing mass, so his calculation wasn't consistent”, says Rothman. “Nevertheless, he got half of it right. If he had merely said that E is proportional to m, history would probably have been kinder to him.”

But if that’s the case, where does relativity come into it? Actually, it doesn’t. While Einstein’s celebrated 1905 paper ‘On the electrodynamics of moving bodies’ clearly laid down the foundations of relativity by abandoning the ether and making the speed of light invariant, his derivation of E=mc2 did not depend on those assumptions. You can get the right answer with classical physics, says Rothman, all in an ether theory without c being either constant or the limiting speed. “Although Einstein begins relativistically, he approximates away all the relativistic bits, and you are left with what is basically a classical calculation."

Physicist Clifford Will of Washington University in St Louis, a specialist on relativity, considers the preprint “very interesting”. Boughn and Rothman “are well regarded physicists”, he says, and as a result he “tend[s] to trust their analysis”. However, the controversies previously aroused over the issue of priority perhaps account for some of the reluctance of historians of physics to comment when contacted by Physics World.

Did Einstein know of Hasenöhrl’s work? “I can't prove it, but I am reasonably certain that Einstein must have done, and just decided to do it better”, says Rothman. But failure to cite it was not inconsistent with the conventions of the time. In any event, Einstein asserted his priority for the mass-energy relationship when this was challenged by Johannes Stark (who credited it in 1907 to Max Planck). Both Hasenöhrl and Einstein were at the famous first Solvay conference in 1911, along with most of the other illustrious physicists of the time. “One can only imagine the conversations”, say Boughn and Rothman.

Tuesday, August 02, 2011

A philosophical question

Here’s my latest Crucible column for Chemistry World.
___________________________________________________________________________________

“Philosophy is dead” is an assertion that, coming from most people, would be dismissed as idle, unconsidered, even meaningless. (What, all of it? Political philosophy? Moral philosophy? The philosophy of music?) But when Stephen Hawking announced this in his recent book with Leonard Mlodinow, The Grand Design, it was greeted as the devastating judgement of a sage and sent philosophers scurrying to the discussion boards to defend their subject (more properly, to defend Hawking’s presumed target of metaphysics).

Yet many chemists may be unaware that there is such a thing as a philosophy of chemistry. Isn’t chemistry about practical, tangible matters, or – when theoretical issues are concerned – questions of right and wrong, not the fuzzy and abstract issues popularly associated with philosophy? On the contrary, at least two journals (Hyle and Foundations of Chemistry) and the International Society for the Philosophy of Chemistry have insisted for some years that there are profound chemical questions of a philosophical nature.

These questions might not seem quite as urgent as how to make stereoselective carbon-carbon bonds, but they should at the very least make chemists reflect about the nature of their daily craft. What is the ontological status of ‘laws’ of chemistry? To what extent are molecular structures metaphorical? What’s more, the philosophy of chemistry impinges directly on chemistry’s public image. As Eric Scerri, editor-in-chief of Foundations of Chemistry, says, “Most philosophers of science believe that chemistry has been reduced to physics and is therefore of no fundamental interest. They believe that chemistry has no ‘big ideas’ to compare with quantum mechanics and relativity in physics and Darwin’s theory in biology” [1].

The philosophy of chemistry excites lively, often impassioned debate. Those unquiet waters have recently been agitated by an extensive overview of the topic published in the Stanford Encyclopedia of Philosophy, a widely used online reference source, by Michael Weisberg, Paul Needham and Robin Hendry, all three respected philosophers of science [2]. It’s an ambitious affair, accommodating everything from the evolution since ancient times of theories of matter to the nature of the chemical bond and interpretations of quantum theory. The piece has proved controversial because the authors have presented points of view on several of these issues that are not universally shared.

Much of the debate hinges on the fact that the concepts and principles used by chemists – the notion of elements, molecules, bonds, structure, or the idea much debated by these philosophers that ‘water is H2O’ – lack philosophical rigour. Arguments about whether gaseous helium contains atoms or molecules, or whether the element sodium refers to a grey metal or to atoms with 11 protons, are frequently rehearsed in lab coffee rooms. That these hardly affect the practicalities of chemical synthesis doesn’t detract from their validity as philosophical conundrums.

Take, for example, Needham’s claim that isotopes of the ‘same’ element should in fact be considered different elements [3]. Clearly there is rather little difference between 35Cl and 37Cl, but if ‘element’ is pinned to chemical identity, are H and D really the ‘same’? Indeed, does not even the tiniest isotope effect blur any strict definition based on chemical behaviour rather than proton number? Perhaps the Austrian chemist Friedrich Paneth was right to regard the notion of an element as something ‘transcendental’.

Even more controversially, Hendry takes a view long developed by him and others such as Guy Woolley that the concept of molecular structure is mere metaphor, rendered logically incoherent by quantum mechanics. To distinguish methanol from dimethyl ether, we need to first put the nuclei in position by hand and then apply the Born-Oppenheimer approximation to the quantum equations so that only the electrons move. Without this approximation, the raw Hamiltonian for nuclei and electrons is identical for both isomers.
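
To see why, it helps to write out the full Coulomb Hamiltonian for a molecule's electrons (indices i, j, mass m_e) and nuclei (indices A, B, charges Z_A, masses M_A). This is the standard textbook expression rather than anything specific to the Stanford article:

```latex
\hat{H} = -\sum_{i} \frac{\hbar^2}{2m_e}\nabla_i^2
          -\sum_{A} \frac{\hbar^2}{2M_A}\nabla_A^2
          +\sum_{i<j} \frac{e^2}{4\pi\varepsilon_0 r_{ij}}
          -\sum_{i,A} \frac{Z_A e^2}{4\pi\varepsilon_0 r_{iA}}
          +\sum_{A<B} \frac{Z_A Z_B e^2}{4\pi\varepsilon_0 R_{AB}}
```

Methanol and dimethyl ether have exactly the same ingredients – the same nuclear charges and masses, the same number of electrons (C2H6O) – so this operator is identical for both; a particular structure appears only once the nuclear coordinates are clamped by hand in the Born-Oppenheimer step.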

Hendry asserts that the isomers exist as quantum superpositions, from which a particular isomer emerges only when the wavefunction is collapsed by observation. Scerri argues [4], in contrast, that this collapse happens naturally and inevitably because of environment-induced decoherence. Even if so, the image is disconcerting: molecular structures exist because of their environment, not as intrinsic entities. What of molecules isolated in interstellar space, almost a closed system? Regardless of the position one takes, it remains unclear how, or if, molecular structure can be extracted directly from quantum theory, as opposed to being rationalized post hoc – relative energies can be computed, for sure, but that’s not the same. Ultimately these questions might have answers in physics; at least for the moment, they are philosophical.

References
1. E. R. Scerri, J. Chem. Ed. 77, 522-526 (2000).
2. M. Weisberg, P. Needham & R. Hendry, ‘Philosophy of Chemistry’, Stanford Encyclopedia of Philosophy.
3. P. Needham, Stud. Hist. Phil. Sci., 39, 66–77 (2008).
4. E. R. Scerri, Found. Chem. 13, 1-7 (2011).

Friday, July 29, 2011

The reason why not

I just discovered that this book review I wrote recently for The National, a UAE newspaper, was published back in early June. It doesn’t seem to have altered much in the editing, but here it is anyway.
__________________________________________________________________________

The Reason Why:
The Miracle of Life on Earth

by John Gribbin
Allen Lane, 2011; ISBN 978 1 846 14327 4
219 pages
£20.00

In 1950 the Italian physicist Enrico Fermi was walking to lunch at the Los Alamos laboratory in New Mexico with his colleagues from the Manhattan Project. They were discussing a recent spate of UFO reports, and as they sat down to eat, Fermi challenged the company. If the cosmos is full of space-faring aliens, he said, “Where is everybody?”

In The Reason Why, veteran science writer John Gribbin answers Fermi’s ‘paradox’ by saying that we have seen no sign of aliens because they don’t exist. Not, at least, in our Milky Way Galaxy – and beyond that, the distances are so vast that it is hardly worth asking. “We are alone, and we had better get used to the idea”, he concludes.

The likelihood of intelligent life on other planets has been conditioned since the 1960s by the thinking of Cornell astronomer Frank Drake, whose eponymous equation divides the question into its component parts, the probabilities of each of which one might conceivably hope to quantify or at least estimate: how many stars have planets, how many are Earth-like, and so on.
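
For reference, the standard form of Drake's equation (not spelled out in the book or in this review) simply multiplies those factors together:

```latex
N = R_{*} \cdot f_p \cdot n_e \cdot f_l \cdot f_i \cdot f_c \cdot L
```

where N is the number of detectable civilizations in the Galaxy, R_* the rate of star formation, f_p the fraction of stars with planets, n_e the number of potentially habitable planets per such star, f_l, f_i and f_c the fractions of those on which life, intelligence and detectable technology arise, and L the length of time over which such a civilization remains detectable.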

Depending on your taste, the Drake equation is either a logical way of getting purchase on a profound question, or an attempt to manufacture knowledge from ignorance. In trying to get a meaningful number by multiplying very big ones, very small ones, and very uncertain ones, the Drake equation seems more like guesswork disguised as maths.

Gribbin, however, asserts that just about every one of the necessary conditions for intelligent life to emerge has a low, perhaps minuscule, probability. Their combination then makes it highly unlikely that we have any galactic neighbours eagerly trying to make contact. For instance, only a relatively small part of our galaxy is habitable – the crowded interior is bathed in sterilizing radiation from black holes and supernovae. Only stars of a certain age have enough heavy chemical elements to make Earth-like planets and dwellers thereon. Only a few such stars lack partners that pull planetary orbits into elongated shapes, making climate variations unendurably extreme.

The specialness of the Earth is particularly apparent in the make-up of our solar system. For example, we are protected from more frequent impacts of asteroids and comets, like the one that seems to have sent the dinosaurs to extinction 65 million years ago, by the immense size of Jupiter, more a failed star than a planet, whose gravity sucks up these stray objects. One such, comet Shoemaker-Levy 9, ploughed into the giant planet in 1994, leaving a scar the size of the Earth.

Gribbin is especially good on the benign effect of the Moon. The Earth is unusual in having a moon so large in relation to the planet itself, which is now believed to have been created when a proto-Earth stumbled into another planet-like object called Theia with which it shared an orbit 4.5 billion years ago. The rocky debris clumped to form the Moon, while the traumatized, molten Earth swallowed Theia’s iron core to give it an unusually large core today, the source of the strong geomagnetic field that deflects harmful particles streaming from the Sun. This impact probably left the Earth spinning fast (by contrast, a Venusian day lasts the best part of an Earthly year) and tilted on its axis, from which our seasons ensue. What’s more, the Moon’s gravity stops this tilt from being righted by the influence of Jupiter. Before the debris coalesced into the lunar globe, its gravity created awesome tides on the more rapidly spinning Earth that rose and fell several kilometres every two hours or so. Even though the barren Moon is too light to hold an atmosphere of its own, life on Earth would be very different – perhaps impossible – without it.

This ‘rare Earth’ case has been made before, but Gribbin gives the arguments a fresh shine. Yet he assembles them in a legalistic rather than strictly scientific manner. That’s to say, he marshals (generally impeccable) science to argue his case rather than objectively to investigate the possibilities. For example, he predicates a discussion of the ‘habitable zone’ of the solar system – a crucial part of the argument – on the claim that “it is reasonable to assume that ‘life as we know it’ does require the presence of liquid water.” That Trekkie-inspired ‘as we know it’ is back-covering, and reminds me of a conference I once attended that was convened to ask if life in the cosmos could exist without water. Speaker after speaker insisted that it could not, since that never happens on Earth, which was of course merely a statement that life adapted to water can’t do without it. Now, there are arguments why water might be essential for life anywhere, but they are subtle and not the ones Gribbin casually gives. More to the point, they are still arm-waving and do nothing to dent a counter-claim that it is reasonable to suggest that non-aqueous life is possible.

Such solipsism pervades the book, and is implicit in Fermi’s paradox to begin with. It supposes that intelligent life will think as we do now, with a determination to find and populate other inhabited worlds – and moreover, will have already done so in a way that leaves a mark so prominent that we’ll find it within the first 50 years (a comically short span in cosmic terms) of looking. Are even we so determined? If it would be unwise to conclude from the parlous state of human space exploration that this is just a phase civilizations quickly grow out of, the current situation is nonetheless even less suggestive of the opposite. Worse, since spaceflight seems increasingly likely to be a private enterprise, Gribbin implies that mega-rich philanthropists with a penchant for spaceflight, like Virgin’s Richard Branson and Microsoft’s Paul Allen, follow inexorably from the laws of physics.

The same historical determinism colours his belief that space-faring civilizations are a one-shot affair on habitable planets. If we foul up after having used all of the surface deposits of fossil fuels, he says, we’ll never again be able to claw our way out of a state of barbarism. But this assumes that apocalypse comes only after the oil and coal are exhausted, and moreover that a re-emergent civilization would stall not at the Stone Age but at the pre-industrial Enlightenment. By this definition, a civilization capable of producing Aristotle, let alone Newton, doesn’t qualify as intelligent. The challenge of getting from Newton to Neil Armstrong without plentiful oil is a good pretext for a science-fiction novel, but it hardly proves anything else.

Gribbin’s account of the chance events that allowed humans to evolve from slime is particularly unpersuasive as grounds for any broader conclusion. It sounds increasingly like the kind of enumeration of contingency and coincidence that invites us to marvel at how ‘unlikely’ it is that we ever met our spouses. Once Gribbin starts invoking a highly speculative cometary impact on Venus to explain the Cambrian explosion, in which complex life diversified about 540 million years ago, one senses that he is determinedly picking out a precarious path to a foregone conclusion.

None of this is to say that The Reason Why is a bad book. On the contrary, it is as lucid, well researched and enjoyable as Gribbin always is, and supplies a peerless guide to the way stars and planets are formed. And as a polemic, it is entirely justified in being selective with the evidence. Besides, many of Gribbin’s astrophysical arguments for the rarity of life are robust, and as such they make a convincing case that the Galaxy is not teeming with life that is loftily or mischievously ignoring us.

Yet the book fails to offer any philosophical perspective. The specialness of humanity has in history been asserted almost always as a theological issue, whether to counter Copernicus or Darwin. If Gribbin is right and we just got phenomenally lucky – if the laws of physics are so miserly about allowing matter to become self-aware – this is sufficiently peculiar to warrant more comment. Even atheists might then forgive theologians for taking an interest, just as they do in the ‘fine-tuning’ that seemingly makes physical laws exquisitely geared to support matter and life in the first place. Gribbin can suggest only that, if we’re alone in the galaxy, we have an even greater responsibility to our planet. It would be nice to think so, but see how far that gets you at the next climate summit.

Wednesday, July 20, 2011

No fit state

I’ve got a piece in the latest issue of Prospect (not yet online) about the recent report on the state of the oceans from the IPSO project. Here’s what the full draft looked like.
__________________________________________________________

“Unprecedented… shocking… what we face is a globally significant extinction event.” These judgements on the state of the global oceans, pronounced by the scientists who attended a recent workshop of the International Programme on the State of the Ocean (IPSO), sound truly scary. The future of the ocean’s ecosystems looks “far worse than we had realised”, says IPSO’s director, Oxford zoologist Alex Rogers. “If the ocean goes down, it’s game over.”

When the IPSO report was released in June, it made apocalyptic headlines. But such is the prevailing public mood on climate and environmental change that strong words may do little to alter opinions. Sceptics will dismiss them as scaremongering in a bid for research funding, while they will fuel righteous indignation among those already convinced of impending catastrophe. And if you haven’t already made up your mind, this seems an invitation to paralysing despair.

So how seriously should we take the IPSO report? According to Hugh Ducklow, director of the Ecosystems Center at Woods Hole, Massachusetts, one of the US’s most prestigious marine biology laboratories, it isn’t exaggerating. “If anything”, says Ducklow (who is not a part of IPSO), “the true state of the ocean is likely worse than the report indicates.”

The IPSO workshop, held in Oxford in April, brought together leading marine scientists, legal experts and NGO representatives. They considered threats to ocean ecosystems ranging from over-exploitation of fish stocks to acidification of the waters, caused by increased amounts of dissolved carbon dioxide (CO2) as atmospheric levels of this greenhouse gas rise. Many fish populations have been literally decimated – even since the report was released, a paper in Science says that the state of some species of high commercial value, such as bluefin tuna, is worse than thought. Almost half of the world’s coral reefs, the most diverse ecosystems on the planet, have disappeared in the past 50 years, and the rest are now under severe threat because of overfishing, global warming and ocean acidification. But perhaps the greatest concern rests with the unglamorous plankton on which the entire food chain depends. The microscopic plants (phytoplankton) that bloom seasonally in the upper ocean dictate the cycling of carbon, particularly CO2, between the ocean and atmosphere. But some phytoplankton are toxic, and when their growth is artificially stimulated by nutrients in fertilizers and sewage (a process called eutrophication), they can poison their environment. Worse, bacteria feeding on the decaying phytoplankton may use up all the available oxygen in the water, turning it into a dead zone for other life. In the longer term oxygen depletion (hypoxia or, if total, anoxia) is also caused in deep water by warming of the upper ocean, which suppresses the circulation of oxygen-rich surface water to the depths.

It’s not just marine biology that stands at risk. The melting of Arctic sea ice has been far faster than expected – summer at the North Pole could be essentially ice-free within 30-40 years. This doesn’t affect sea level, but is disastrous for Arctic life and the influx of fresh water could change patterns of ocean circulation. The melting of grounded ice from Antarctica and Greenland, however, is also proceeding apace – at least as quickly as the worst-case predictions of climate models. Coupled to expansion of water caused by warming, this means that sea-level rise is also tracking worst-case models: it could reach four feet or so by 2100, which will redraw the map of many coastlines.

Perhaps most troubling of all, the IPSO group concluded that these individual processes seem to exacerbate one another. For example, coral reefs damaged by ocean warming are further weakened by pollution and the overfishing of reef populations, making them even more fragile. The worry is that the combination of stresses could push ecosystems to a tipping point at which they collapse catastrophically.

Such things have happened naturally several times in the distant past. The geological record clearly shows at least five global mass extinctions, in which most species all around the planet vanished, as well as many more minor extinction events. The reasons for them are still not fully understood, but the prevailing ocean conditions in which they occurred are similar in some ways – warming, anoxia and acidification – to those we are seeing now. “We now face losing marine species and entire marine ecosystems, such as coral reefs, in a single generation”, the IPSO report avers. “Unless action is taken now, the consequences of our activities are at high risk of causing the next globally significant extinction event in the ocean.”

Sounds bad? Ducklow thinks that feedbacks and synergies could make things even worse. “Working in Antarctica, we’re seeing profound changes rippling through the food chain and affecting biogeochemical processes such as CO2 uptake.” Ducklow admits that any conclusions he and his colleagues have drawn so far, like those of the IPSO team, are based on inadequate observations – over too small a spatial scale, and for too short a time. But his informed hunch is that this merely means we’re not seeing the worst of it. “I expect that as we pass through another decade, with increased concern and surveillance, we will discover things are worse, not better, than we think.”

Ducklow isn’t alone in confirming that the IPSO report’s warnings are not exaggerated. “I agree that the oceans have been greatly impacted by human activity”, says Andrew Watson at the University of East Anglia, one of the foremost UK experts on the interactions of oceans and climate. “They have changed enormously and alarmingly fast over the past 100 years or so.” In Watson’s view, analogies with past mass extinctions are appropriate. “We suspect that at past crises, the real killer was widespread ocean anoxia. This is something that eventually the changes brought about by humans, particularly increased eutrophication and global warming, could bring on.”

But has IPSO pitched its warning wisely? The team seems to have sided with the view of some climatologists, such as NASA scientist James Hansen, that concerns will be heeded only if voiced forcefully, even stridently. Watson isn’t convinced. “In human terms such a change to the life-support systems of the Earth is still a long way in the future. Such disasters unfold over very long time scales compared to a human life: thousands or tens of thousands of years.” So while Watson feels that “the report authors state their case that way with the best of intentions” and agrees on the urgent need for action, he feels uncomfortable with some of the alarming statements. “We create a false impression if we say that we have to act tomorrow to save the Earth or ‘it will be game over’. I don’t find that kind of environmental catastrophism very helpful because it simply fuels a bad-tempered ideological and political argument instead of a well-informed scientific one.”

It’s an irresolvable dilemma forced on the scientists by manufactured controversy and political inaction: risk either being ignored or damned as alarmists. However, the tone of the report is a side issue; all agree on the necessary response. “What’s really needed is a long-term plan to reduce our impact on the oceans,” says Watson. Ducklow insists that this must include not just serious and immediate regulation of fishing, pollution and carbon emissions, but “a comprehensive, global ocean observation system, including ecological and biogeochemical measurements, to determine the current and evolving state of the ocean’s health.” Any suggestion that this is merely a gambit for more research funds now deserves nothing but scorn.

Monday, July 18, 2011

Body shock

Earlier this month I went to a discussion about SciArt – more specifically, BioArt – at the GV Art gallery in London. Debates about science and art can all too readily become exercises in navel gazing, but this one wasn’t, thanks to the interesting folks involved. I’ve written a piece about it for the Prospect blog, and since it is available essentially unedited and for free, I won’t copy the text here.

Thursday, July 14, 2011

Arsenic and old wallpaper

Here’s my Crucible column for the July issue of Chemistry World. We haven’t heard the end of this story, I’m sure.
_________________________________________________

Was William Morris, socialist and utopian prophet of environmentalism, a hypocrite? That uncomfortable possibility was raised in 2003 by biochemist Andrew Meharg of the University of Aberdeen [1]. Meharg described chemical analysis of one of the famous floral wallpapers produced by Morris’s company in the mid-nineteenth century, which showed the foliage to be printed using an arsenic-containing green pigment – either Scheele’s Green (copper arsenite) or Emerald Green (copper acetoarsenite). A rather more incriminating fact was that the arsenic surely came from the Devon Great Consols mines (originally copper mines) owned by Morris’s family in a business of which Morris himself was a director until 1876. Morris’s immense wealth came partly from these mines, whose operations polluted the surrounding land and left derelict flues that are still hazardous today.

The clincher seemed to be that Morris knew of the claims by physicians that arsenic was toxic, but casually dismissed them. “As to the arsenic scare”, he wrote to the dyer Thomas Wardle in 1885, “a greater folly it is hardly possible to imagine… My belief about it all is that the doctors find their patients ailing, don’t know what’s the matter with them, and in despair put it down to the wall papers.”

Once Meharg expanded on this story in a book [2], it seemed that Morris’s reputation was tarnished irreparably. But now the accusations have been challenged by Patrick O’Sullivan of the William Morris Society, who asserts that the situation is by no means so clear-cut [3].

You might wonder if the William Morris Society offers an unbiased voice. But who else would be sufficiently motivated, not to mention well placed, to re-examine what is now widely assumed to be a cut-and-dried conviction? In any event, let’s consider the facts. O’Sullivan points out that the ‘arsenic scare’ of the nineteenth century by no means reflected the consensus of the medical community. Not until 1892 was the odour of arsenic wallpapers linked to the formation of a volatile arsenic compound by the action of a mould that grows in damp conditions. The gas was correctly identified as trimethylarsine only in the 1930s. And a recent review states that this gas is not highly toxic if inhaled, and is unlikely to be produced in significant quantities by the mould anyway [4]. So it isn’t clear that poisoning from arsenic-printed wallpapers was at all common in the nineteenth century – Morris may have been right to suggest that this was a convenient explanation for the multitude of ailments that afflicted people, especially children, during that age.

This, however, does not really absolve Morris. One might expect a man of his espoused principles to have taken seriously any suggestion that his company was making poisonous products, especially considering that the toxicity of arsenic itself was well established – Carl Wilhelm Scheele had felt obliged to reveal this ingredient of his green pigment in the 1770s for that very reason. O’Sullivan points out that Morris resigned as director of Devon Great Consols and sold his shares in the business two years before becoming politically active and six years before putting forward his socialist views. Perhaps, then, he was no hypocrite but realised that his position was no longer consistent with his new ideals?

But that remains a generous interpretation. That Morris was still so confidently denying the dangers of arsenic greens in 1885, without any sound scientific basis either way, somewhat suggests a determination to deny responsibility. And while Morris seems to have treated his workers well, the letter O’Sullivan quotes to justify why he did not make the company a socialist collective is an all-too-familiar refrain from hard-line socialists and Marxists: that such ‘palliatives’ merely delay the revolution. Quite aside from the conditions of workers in the wallpaper works, those in the mines (where arsenic was collected as the white trioxide, condensed from vapour) were undoubtedly awful: the safety precautions were crude in the extreme, and arsenic poisoning in copper mines had been known since at least the Middle Ages.

Most troubling of all is Morris’s silence on the matter. If he changed his mind about his business activities, should one not expect some sign of, if not remorse, then at least reflection? O’Sullivan has made a good argument for re-opening the case, but the suspicion lingers that Morris was no more scrupulous than most of us in examining his conscience.

References

1. A. Meharg, Nature 423, 688 (2003).
2. A. Meharg, Venomous Earth (Macmillan, London, 2005).
3. P. O’Sullivan, William Morris Society Newsletter, Spring 2011. Available here.
4. W. R. Cullen & R. Bentley, J. Environ. Monit. 7, 11-15 (2005).

Tuesday, July 05, 2011

The (digital) art of chemistry

Here’s a bit of naked advertising, because it’s for a good cause. The competition below, organized by ASCI in New York, should be fun if it can draw the right caliber of entries. And since I am a judge, that’s clearly what I hope. ASCI has been described to me by a very reliable witness in the following terms: “they are the largest and most active group of SciArt people and have been doing wonderful work for 20 or so years now.” So go on: give it a shot, and/or spread the word.
________________________________________________________________

Announcing the Open Call for...

"DIGITAL2011: The Alchemy of Change"
An international digital print competition/exhibition to be held at the New York Hall of Science, September 3, 2011 - February 5, 2012

Organized by Art & Science Collaborations, Inc. (ASCI)

DEADLINE: July 17, 2011
GUIDELINES here

CO-JURORS:
Robert Devcic, owner-director of GV Art London gallery
Philip Ball, writer and noted author of popular science books

INTRODUCTION
Humans, animals, insects, trees, plants, oceans, and air -- indeed, all that we see, taste, smell, touch, and breathe, contain molecular processes of physical transformation; a dynamic dance of change. This magic of transition, called alchemy by our earliest scientists, became the science of chemistry. It describes both the physical structure and characteristic actions of matter. It allows for all organic and inorganic change to take place -- brain synapses to fire, oxygen to be formed from carbon dioxide and water during photosynthesis; the transformation of gases in our solar system; along with the ability of proteins to turn our genes on/off. If you extend your imagination beyond the epithelial surface of your body, or into the ether that carries cosmic dust, or even into your kitchen, chemistry can inspire wonder. Like a fabulous menu of concocted primordial soups, when exposed to changes in temperature, pressure, or speed, chemistry can create a stick of dynamite or a magnificent soufflé!

For this exhibition, we celebrate the International Year of Chemistry by inviting artists and scientists to show us their vision of this deeply fundamental, magical enabler of life called chemistry.

Friday, June 24, 2011

Movie characters mimic each other's speech patterns


Here’s my latest news story for Nature News.
****************************************************
Script writers have internalized the unconscious social habits of everyday conversations.

Quentin Tarantino's 1994 film Pulp Fiction is packed with memorable dialogue — 'Le Big Mac', say, or Samuel L. Jackson's biblical quotations. But remember this exchange between the two hitmen, played by Jackson and John Travolta?

Vincent (Travolta): "Antwan probably didn't expect Marsellus to react like he did, but he had to expect a reaction".
Jules: "It was a foot massage, a foot massage is nothing, I give my mother a foot massage."

Computer scientists Cristian Danescu-Niculescu-Mizil and Lillian Lee of Cornell University in Ithaca, New York, see the way Jules repeats the word 'a' used by Vincent as a key example of 'convergence' in language. "Jules could have just as naturally not used an article," says Danescu-Niculescu-Mizil. "For instance, he could have said: 'He just massaged her feet, massaging someone's feet is nothing, I massage my mother's feet.'"

The duo show in a new study that such convergence, which is thought to arise from an unconscious urge to gain social approval and to negotiate status, is common in movie dialogue. It "has become so deeply embedded into our ideas of what conversations 'sound like' that the phenomenon occurs even when the person generating the dialogue [the scriptwriter] is not the recipient of the social benefits", they say.

“For the last forty years, researchers have been actively debating the mechanism behind this phenomenon”, says Danescu-Niculescu-Mizil. His study, soon to be published in workshop proceedings [1], cannot yet say whether the ‘mirroring’ tendency is hard-wired or learnt, but it shows that it does not rely on the spontaneous prompting of another individual or on a genuine desire for his or her approval.

“This is a convincing and important piece of work, and offers valuable support for the notion of convergence”, says philologist Lukas Bleichenbacher at the University of Zurich in Switzerland, a specialist on language use in the movies.

The result is all the more surprising given that movie dialogue is generally recognized to be a stylized, over-polished version of real speech, serving needs such as character and plot development that don’t feature in everyday life. “The method is innovative, and kudos to the authors for going there”, says Howie Giles, a specialist in communication at the University of California at Santa Barbara.

"Fiction is really a treasure trove of information about perspective-taking that hasn't yet been fully explored," agrees Molly Ireland, a psychologist at the University of Texas at Austin. "I think it will play an important role in language research over the next few years."

But, Giles adds, "I see no reason to have doubted that one would find the effect here, given that screenwriters mine everyday discourse to make their dialogues appear authentic to audiences".

That socially conditioned speech becomes an automatic reflex has long been recognized. “People say ‘oops’ when they drop something”, Danescu-Niculescu-Mizil explains. “This probably arose as a way to signal to other people that you didn't do it intentionally. But people still say ‘oops’ even when they are alone! So the presence of other people is no longer necessary for the ‘oops’ behaviour to occur – it has become an embedded behavior, a reflex.”

He and Lee wanted to see if the same was true for conversational convergence. To do that, they needed the seemingly unlikely situation in which the person generating the conversation could not expect any of the supposed social advantages of mirroring speech patterns. But that’s precisely the case for movie script-writers.

So the duo looked at the original scripts of about 250,000 conversational exchanges in movies, and analysed them to identify nine previously recognized classes of convergence.

They found that such convergence is common in the movie dialogues, although less so than in real life – or, standing proxy for that here, in actual conversational exchanges held on Twitter. In other words, the writers have internalized the notion that convergence is needed to make dialogue ‘sound real’. “The work makes a valid case for the use of ‘fictional’ data”, says Bleichenbacher.
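To make the measurement a little more concrete, here is a minimal sketch of how convergence on a single class of marker words – articles, as in the Pulp Fiction exchange above – might be scored across a collection of scripted exchanges. This is my own illustration rather than the authors’ code; the function names and the simple conditional-probability score are assumptions made for the sake of the example.

# Rough, illustrative scoring of 'convergence' on one class of function words
# (articles), loosely in the spirit of the study described above. Not the
# authors' code: the conditional-probability score is an assumption.

ARTICLES = {"a", "an", "the"}

def contains_article(utterance):
    """Return True if the utterance contains any article."""
    words = (w.strip(".,!?\"'").lower() for w in utterance.split())
    return any(w in ARTICLES for w in words)

def article_convergence(exchanges):
    """exchanges: list of (prompt, reply) pairs of dialogue lines.
    Returns P(reply has article | prompt has article) - P(reply has article)."""
    if not exchanges:
        return 0.0
    base = sum(contains_article(r) for _, r in exchanges) / len(exchanges)
    triggered = [r for p, r in exchanges if contains_article(p)]
    if not triggered:
        return 0.0
    cond = sum(contains_article(r) for r in triggered) / len(triggered)
    # Positive values suggest replies echo the prompt's articles more than chance.
    return cond - base

if __name__ == "__main__":
    sample = [
        ("He had to expect a reaction.", "It was a foot massage."),
        ("You want coffee?", "Sure, thanks."),
        ("Give me the keys.", "They're on the table."),
    ]
    print("Convergence on articles: %+.2f" % article_convergence(sample))

In the actual study, of course, this kind of comparison is made over many marker classes and hundreds of thousands of exchanges, with the Twitter conversations mentioned above serving as the real-life baseline.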

Not all movies showed the effect to the same extent. “We find that in Woody Allen movies the characters exhibit very low convergence”, says Danescu-Niculescu-Mizil – a reminder, he adds, that “a movie does not have to be completely natural to be good.”

Giles remarks that, rather than simply showing that movies absorb the unconscious linguistic habits of real life, there is probably a two-way interaction. “Audiences use language devices seen regularly in the movies to shape their own discourse”, he points out. In particular, people are likely to see what types of speech ‘work well’ in the movies in enabling characters to gain their objectives, and copy that. “One might surmise that movies are the marketplace for seeing what’s on offer, what works, and what needs purchasing and avoiding in buyers’ own communicative lives”, Giles says.

Danescu-Niculescu-Mizil hopes to explore another aspect of this blurring of fact and fiction. “We are currently exploring using these differences to detect ‘faked’ conversations”, he says. “For example, I am curious to see whether some of the supposedly spontaneous dialogs in so-called ‘reality shows’ are in fact all that real.”

1. C. Danescu-Niculescu-Mizil & L. Lee, Proc. ACL Workshop on Cognitive Modeling and Computational Linguistics, Portland, Oregon, 76-87 (Association for Computational Linguistics, 2011). Available as a preprint here.

I received some interesting further comments on the work from Molly Ireland, which I had no space to include fully. They include some important caveats, so here they are:

I think it's important to keep in mind, as the authors point out, that fiction can't necessarily tell us much about real-life dialog. Scripts can tell us quite a bit about how people think about real-life dialog though. Fiction is really a treasure trove of information about perspective-taking that hasn't been fully explored in the past. Between Google books and other computer science advances (like the ones showcased in this paper), it's become much easier to gain access to millions of words of dialog in novels, movies, and plays. I think fiction will play an important role in language and perspective-taking research over the next few years.

Onto their findings: I'm not surprised that the authors found convergence between fictional characters, for a couple of reasons. They mention Martin Pickering and Simon Garrod's interaction alignment model in passing. Pickering and Garrod basically argue that people match a conversation partner's language use because it's easier to reuse language patterns that you've just processed than it is to generate a completely novel utterance. Their argument is partly based on syntactic priming research that shows that people match the grammatical structures of sentences they've recently been presented with – even when they're alone in a room with nothing but a computer. So first of all, we know that people match recently processed language use in the absence of the social incentives that the authors mention (e.g., affection or approval).

Second, all characters were written by the same author (or the same 2-3 authors in some scripts). People have fairly stable speaking styles. So even in the context of scriptwriting, where authors are trying to write distinct characters with different speaking styles, you would expect two characters written by one author with one relatively stable function word fingerprint to use function words similarly (although not identically, if the author is any good).

The authors argue that self-convergence would be no greater than other-convergence if these cold, cognitive features of language processing [the facts that people tend to (a) reuse function words from previous utterances and (b) consistently sound sort of like themselves, even when writing dialog for distinct characters] were driving their findings. That would only be true if authors failed to alter their writing style at all between characters. Adjusting one's own language style when imagining what another person might say probably isn't conscious. It's probably an automatic consequence of taking another person's perspective. An author would have to be a pretty poor perspective-taker for all of his characters to sound exactly like he sounds in his everyday life.

Clearly I'm skeptical about some of the paper's claims, but I would be just as skeptical about any exploration into a new area of research using an untested measure of language convergence (including my own research). I think that the paper's findings regarding sex differences in convergence and differences between contentious and neutral conversations could turn out to be very interesting and should be looked at more closely – possibly in studies involving non-experts. I would just like to look into alternate explanations for their findings before making any assumptions about their results.

Thursday, June 23, 2011

Einstein and his precursors

From time to time, Nature used to receive (and doubtless still does) crank letters claiming that Einstein was not the first to derive E=mc2, but that this equation was first written down, after a fashion, by one Friedrich Hasenöhrl, an Austrian physicist with a perfectly respectable, if unremarkable, pedigree and career who was killed in the First World War. This was a favourite ploy of those cranks whose mission in life was to discredit Einstein’s theory of relativity – so much so that I had two such folks discuss it in my novel The Sun and Moon Corrupted. But not until now, while reading Alan Beyerchen’s Scientists Under Hitler (Yale University Press, 1977), did I realise where this notion originated. The idea was put about by Philipp Lenard, the Nobel prizewinner and virulently anti-Semitic German physicist and member of the Nazi party. Lenard put forward the argument in his 1929 book Grosse Naturforscher (Great Natural Researchers), in which he sought to establish that all the great scientific discoveries had been made by people of Aryan-Germanic stock (including Galileo and Newton). Lenard was deeply jealous of Einstein’s international fame, and as a militaristic, Anglophobic nationalist he found Einstein’s pacifism and internationalism abhorrent. It’s a little comical that this nasty little man felt the need to find an alternative to Einstein at all, given that he was violently (literally) opposed to relativity and a staunch believer in the aether. In virtually all respects Lenard fits the profile of the scientific crank (bitter, jealous, socially inadequate, feeling excluded), and he offers a stark (that’s a pun) reminder that a Nobel prize is no guarantee even of scientific wisdom, let alone any other sort. So there we are: all those crank citations of the hapless Hasenöhrl – this is a popular device of the devotees of Viktor Schauberger, the Austrian forest warden whose bizarre ideas about water and vortices led him to be conscripted by the Nazis to make a ‘secret weapon’ – have their basis in Nazi ‘Aryan physics’.

Friday, June 17, 2011

Quantum life

I have a feature in this week’s Nature on quantum biology, and more specifically, on the phenomenon of quantum coherence in photosynthesis. Inevitably, lots of material from the draft had to be cut, and it was a shame not to be able to make the point (though I’m sure I won’t be the first to have made it) that ‘quantum biology’ properly begins with Schrödinger’s 1944 book What is Life? (Actually one can take it back still further, to Niels Bohr: see here.) Let me, though, just add here the full version of the box on Ian McEwan’s Solar, since I found it very interesting to hear from McEwan about the genesis of the scientific themes in the novel.
_______________________________________________________________________________

The fact is, no one understands in detail how plants work, though they pretend they do… How your average leaf transfers energy from one molecular system to another is nothing short of a miracle… Quantum coherence is key to the efficiency, you see, with the system sampling all the energy pathways at once. And the way nanotechnology is heading, we could copy this with the right materials… Quantum coherence in photosynthesis is nothing new, but now we know where to look and what to look at.

These words are lifted not from a talk by any of the leaders in this nascent field but from the pages of Solar, a 2010 novel by the British writer Ian McEwan. A keen observer of science, who has previously scattered it through his novels Enduring Love and Saturday and has spoken passionately about the dangers of global warming, McEwan likes to do his homework. Solar describes the tragicomic exploits of quantum physicist, Nobel laureate and philanderer Michael Beard as he misappropriates an idea to develop a solar-driven method to split water into its elements. The key, as the young researcher who came up with the notion explains, is quantum coherence.

“I wanted to give him a technology still on the lab bench”, says McEwan. He came across Fleming’s research in Nature or Science (he forgets which, but looks regularly at both), and decided that this was what he needed. After ‘rooting around’, he felt there was justification for supposing that a bright postdoc might have had the idea in 2000. It remained to fit that in with Beard’s supposed work in quantum physics. This task was performed with the help of Cambridge physicist Graham Mitchison, who ‘reverse-engineered’ Beard’s Nobel citation which appears in Solar’s appendix: “Beard’s theory revealed that the events that take place when radiation interacts with matter propagate coherently over a large scale compared to the size of atoms.”

Wednesday, June 15, 2011

The Anglican atheist

To be honest, I already suspected that Philip Pullman, literary darling of militant atheists (no doubt to his chagrin), is more religious than me, a feeble weak-tea religious apologist. But it is nice to have that confirmed in the New Statesman. Actually, ‘religious’ is not the right word, since Pullman is indeed (like me) an atheist. I had thought that ‘religiose’ would do it, but it does not – it means excessively and sentimentally religious, which Pullman emphatically isn’t. The word I want would mean ‘inclined to a religious sensibility’. Any candidates?

Pullman is writing in response to a request from Rowan Williams to explain what he means in calling himself a ‘Church of England atheist’. Pullman does so splendidly. Religion was clearly a formative part of his upbringing, and he considers that he cannot simply abandon that – he is attached to what Martin Rees has called the customs of his tribe, that being the C of E. But Pullman is an atheist because he sees no sign of God in the world. He admits that he can’t be sure about this, in which case he should strictly call himself an agnostic. But I’ve always been unhappy with that view of agnosticism, even though it is why Jim Lovelock considers atheism logically untenable (nobody really knows!). To me, atheism is an expression of belief, or if you like, disbelief, not a claim to have hard evidence to back it up. (I’m not sure what such evidence would even look like…)

What makes Pullman so thoughtful and unusual among atheists (and clearly this is why Rowan Williams feels an affinity with him) is that he is interested in religion: “Religion is something that human beings do and human activity is fascinating.” I agree totally, and that is one reason why I wrote Universe of Stone: I found it interesting how religious thought influenced and even motivated other modes of thought, particularly philosophical enquiry about the world. And this is what is so bleak about the view of people like Sam Harris and Harry Kroto, both of whom have essentially told me that they are utterly uninterested in why and how people are religious. They just wish people weren’t. They see religion as a collection of erroneous or unsupported beliefs about the physical world, and have no apparent interest in the human sensibilities that sometimes find expression in religious terms. This is a barren view, yes, but also a dangerous one, because it seems to instil a lack of interest in how religions arise and function in society. For Harris, it seems, there would be peace in the Middle East if there were no religion in the world. I am afraid I can find that view nothing other than childish, and it puzzles me that Richard Dawkins, who I think shares some of Pullman’s ‘in spite of himself’ attraction to religion and has a more nuanced position, is happy to keep company with such views.

Pullman is wonderfully forthright in condemning the stupidities and bigotries that exist in the Anglican Church – its sexism and no doubt (though he doesn’t mention it) its homophobia. “These demented barbarians”, he says, “driven by their single idea that God is obsessed by sex as they are themselves, are doing their best to destroy what used to be one of the great characteristics of the Church of England, namely a sort of humane liberal tolerance.” Well yes, though one might argue that this was a sadly brief phase. And of course, for the idea that God is as obsessed with sex as we are, one must ultimately go back to St Augustine, whose loathing of the body was a strong factor in his more or less single-handed erection (sorry) of original sin at the centre of the Christian faith. But according to some religious readers of Universe of Stone, I lack the religious sensibility to appreciate what Augustine and his imitators, such as Bernard of Clairvaux, were trying to express with their bigotry.

Elsewhere in the same issue of New Statesman, Terry Eagleton implies that it is wrong to harp on about such things because religion (well, Christianity) must be judged on the basis of its most sophisticated theology rather than on how it is practised. Eagleton would doubtless consider Pullman’s vision of a God who might be usurped and exiled, or gone to focus on another corner of the universe, or old and senile, theologically laughable. For God is not some bloke with a cosmic crown and a wand, wandering around the galaxies. I’m in the middle here (again?). Certainly, insisting as Harris does that you are only going to pick fights with the religious literalists who take the Bible as a set of rules and a description of cosmic history, and have never given a moment’s thought to the kind of theology Rowan Williams reads, is the easy option. But so, in a way, is insisting that religion can’t be blamed for the masses who practise a debased form of it. That would be my criticism of Karen Armstrong too, who presents a reasonable and benign, indeed even a wise view of Christianity that probably the majority of its adherents wouldn’t recognize as their own belief system. Religion must be judged by what it does, not just what it says. But the same is true, I fear, of science.

Oh dear, and you know, I was being so good in keeping silent as Sam Harris’s book was getting resoundingly trashed all over the place.

Sunday, June 12, 2011

Go with the Flow

Nicholas Lezard has always struck me as a man with the catholic but highly selective tastes (in literature if not in standards of accommodation) that distinguish the true connoisseur. Does my saying this have anything to do with the fact that he has just singled out my trilogy on pattern formation in the Guardian? How can you even think such a thing? But truly, it is gratifying to have this modest little trio of books noticed in such a manner. I can even live with the fact that Nicholas quotes a somewhat ungrammatical use of the word “prone” from Flow (he is surely literary enough to have noticed, but too gentlemanly to mention it).

Monday, June 06, 2011

Musical intelligence

In the latest issue of Nature I have interviewed the composer Eduardo Reck Miranda about his experimental soundscapes, pinned to a forthcoming performance of one of them at London’s South Bank Centre. Here’s the longer version of the exchange.
_______________________________________________

Eduardo Reck Miranda is a composer based at the University of Plymouth in England, where he heads the Interdisciplinary Centre for Computer Music Research. He studied computer science as well as music composition, and is a leading researcher in the field of artificial intelligence in music. He also worked on phonetics and phonology at the Sony Computer Science Laboratory in Paris. He is currently developing human-machine interfaces that can enable musical performance and composition for therapeutic use with people with extreme physical disability.

Miranda’s compositions combine conventional instruments with electronically manipulated sound and voice. His piece Sacra Conversazione, composed between 2000 and 2003, consists of five movements in which string ensemble pieces are combined with pre-recorded ‘artificial vocalizations’ and percussion. A newly revised version will be performed at the Queen Elizabeth Hall, London, on 9 June as part of a programme of electronic music, Electronica III. Nature spoke to him about the way his work combines music with neurology, psychology and bioacoustics.

In Sacra Conversazione you are aiming to synthesize voice-like utterances without semantic content, by using physical modelling and computer algorithms to splice sounds from different languages in physiologically plausible ways. What inspired this work?

The human voice is a wonderfully sophisticated musical instrument. But in Sacra Conversazione I focused on the non-semantic communicative power of the human voice, which is conveyed mostly by the timbre and prosody of utterances. (Prosody refers to the acoustical traits of vocal utterances characterized by their melodic contour, rhythm, speed and loudness.)

Humans seem to have evolved some sort of ‘prosodic fast lane’ for non-semantic vocal information in the auditory pathways of the brain, from the ears to regions that process emotion, such as the amygdala. There is evidence that non-semantic content of speech is processed considerably faster than semantic content. We can very often infer the emotional content and intent of utterances before we process their semantic, or linguistic, meaning. I believe that this aspect of our mind is one of the pillars of our capacity for music.

You say that some of the sounds you used would be impossible to produce physiologically, and yet retain an inherent vocal quality. Do you know why that is?

Let me begin by explaining how I began to work on this piece. I started by combining single utterances from a number of different languages – over a dozen, as diverse as Japanese, English, Spanish, Farsi, Thai and Croatian – to form hundreds of composite utterances, or ‘words’, as if I were creating the lexicon for a new artificial language. I carefully combined utterances by speakers of similar voice and gender and I used sophisticated speech-synthesis methods to synthesise these new utterances. It was a painstaking job.

I was surprised that only about 1 in 5 of these new ‘words’ sounded natural to me. The problem was in the transitions between the original utterances. For example, whereas the transition from, say, Thai utterance A to Japanese utterance B did not sound right, the transition from the same Thai utterance to Japanese utterance C was acceptable. I came to believe that the main reason is physiological. When we speak, our vocal mechanism needs to articulate a number of different muscles simultaneously. I suspect that even though we may be able to synthesise physiologically implausible utterances artificially, the brain would be reluctant to accept them.

Then I moved on to synthesize voice using a physical model of the vocal tract. I used a model with over 20 variables, each of which roughly represents a muscle of the vocal tract (see E. R. Miranda, Leonardo Music Journal 15, 8-16 (2005)). I found it extremely difficult to co-articulate the variables of the model to produce decent utterances, which explains why speech technology for machines is still very much reliant on splicing and smoothing methods. On the other hand, I was able to produce surreal vocalizations that, while implausible for humans to produce, retain a certain degree of coherence because of the physiological constraints embedded in the model.

Much of the research in music cognition uses the methods of neuroscience to understand the perception of music. You appear to be more or less reversing this approach, using music to try to understand processes of speech production and cognition. What makes you think this is possible?

The choice of research methodology depends on the aims of the research. The methods of cognitive neuroscience are largely aimed at proving hypotheses. One formulates a hypothesis to explain a certain aspect of cognition and then designs experiments aimed at proving it.

My research, however, is not aimed at describing how music perception works. Rather, I am interested in creating new approaches to musical composition informed by research into speech production and cognition. This requires a different methodology, which is more exploratory: do it first and reflect upon the outcomes later.

I feel that cognitive neuroscience research methods force scientists to narrow the concept of music, whereas I am looking for the opposite: my work is aimed at broadening the concept of music. I do not think the two approaches are incompatible: one could certainly inform and complement the other.

What have you learnt from your work about how we make and perceive sound?

One of the things I’ve learnt is that perception of voice – and, I suspect, auditory perception in general – seems to be very much influenced by the physiology of vocal production.

Much of your work has been concerned with the synthesis and manipulation of voice. Where does music enter into it, and why?

Metaphorically speaking, synthesis and manipulation of voice are only the cogs, nuts and bolts. Music really happens when one starts to assemble the machine. It is extremely hard to describe how I composed Sacra Conversazione, but inspiration played a big role. Creative inspiration is beyond the capability of computers, yet finding its origin is the Holy Grail of the neurosciences. How can the brain draw and execute plans on our behalf implicitly, without telling us?

What are you working on now?

Right now I am orchestrating raster plots of spiking neurons and the behaviour of artificial life models for Sound to Sea, a large-scale symphonic piece for orchestra, church organ, percussion, choir and mezzo soprano soloist. The piece was commissioned by my university, and will be premiered in 2012 at the Minster Church of St Andrew in Plymouth.

Do you feel that the evolving understanding of music cognition is opening up new possibilities in music composition?

Yes, to a limited extent. Progress will probably emerge from the reverse: new possibilities in musical composition contributing to the development of such understanding.

What do you hope audiences might feel when listening to your work? Are you trying to create an experience that is primarily aesthetic, or one that challenges listeners to think about the relationship of sound to language? Or something else?

I would say both. But my primary aim is to compose music that is interesting to listen to and catches the imagination of the audience. I would prefer my music to be appreciated as a piece of art rather than as a challenging auditory experiment. However, if the music makes people think about, say, the relationship of sound to language, I would be even happier. After all, music is not merely entertainment.

Although many would regard your work as avant-garde, do you feel part of a tradition that explores the boundaries of sound, voice and music? Arnold Schoenberg, for example, aimed to find a form of vocalization pitched between song and speech, and indeed the entire operatic form of recitative is predicated on a musical version of speech.

Absolutely. The notion of avant-garde disconnected from tradition is too naïve. If anything, to be at the forefront of something you need the stuff in the background. Interesting discoveries and innovations do not happen in a void.