Friday, August 26, 2011

Dude looks like a lady

When I spoke recently at a music conference in Barcelona, I was interviewed by a Spanish newspaper, which has now published the piece. From what I can tell (courtesy of Google Translate), it is, I think, best described as a loose improvisation based around our conversation. And perhaps the better for it, who knows? But I like best one of the reader comments:
“Language is very intellectual, good photo, but looks like a woman, perhaps the combination has made the smart person.”
In my experience, however, that is a little unfair to Spanish women.

Wednesday, August 24, 2011

Did Einstein discover E=mc²?

A lot of people have strong opinions about that, as is clear from the comments that have followed on from my article of this title for Physics World. (I particularly liked "For an objective account, see Albert Einstein: The Incorrigible Plagiarist." Yup, sounds like an objective book to me.) The piece is here, but the pre-edited version is below. There's a fair bit more that I'd have liked to explore here – it's a deeply interesting issue. The biggest revelation for me was not so much seeing that there were several well-founded precursors for the equivalence of mass and energy, but finding that this equivalence seems to have virtually nothing to do with special relativity. Tony Rothman said to me that "I've long maintained that the conventional history of science, as presented in the media, textbooks and by the stories scientists tell themselves is basically a collection of fairy tales." I'd concur with that.
________________________________________________________________

Who discovered that E=mc²? It’s not as easy a question as you might think. Scientists ranging from James Clerk Maxwell and Max von Laue to a string of now obscure early twentieth-century physicists have been proposed as the true discoverers of the mass-energy equivalence now popularly credited to Einstein’s theory of special relativity. These claims have spawned headlines accusing Einstein of plagiarism, but many are spurious or barely supported. Yet two physicists have now shown that Einstein’s famous formula does have a complicated and somewhat ambiguous genesis – which has little to do with relativity.

One of the more plausible precursors to E=mc² is attributed to Fritz Hasenöhrl, a physics professor at the University of Vienna. In a 1904 paper, Hasenöhrl clearly wrote down the equation E = (3/8)mc². Where did he get it from, and why is the constant of proportionality wrong? Stephen Boughn of Haverford College in Pennsylvania and Tony Rothman of Princeton University examine this question in a preprint.

“I had run across Hasenöhrl's name a number of times with no real explanation as to what he did”, Rothman explains. “One of my old professors, E.C.G. Sudarshan, once remarked that he gave Hasenöhrl credit for mass-energy equivalence. So around Christmas time last year, I said to Steve, ‘why don't we spend a couple hours after lunch one day looking at Hasenöhrl's papers and see what he did wrong?’ Well, two hours turned into eight months, because the problem ended up being extremely difficult.”

Hasenöhrl’s name has a certain notoriety now, as he is commonly invoked by anti-Einstein cranks. His reputation as the man who really discovered E=mc² owes much to the efforts of the anti-Semitic and pro-Nazi physics Nobel laureate Philipp Lenard, who sought to separate Einstein’s name from the theory of relativity so that it was not seen as a product of ‘Jewish science’.

Yet all this does Hasenöhrl a disservice. He was Ludwig Boltzmann’s student and successor at Vienna, and was lauded by Erwin Schrödinger among others. “Hasenöhrl was probably the leading Austrian physicist of his day”, says Rothman. He might have achieved much more if he had not been killed in the First World War.

The relationship of energy and mass was already widely discussed by the time Hasenöhrl considered the matter. Henri Poincaré had stated that electromagnetic radiation has a momentum and thus effectively a mass according to E=mc². German physicist Max Abraham argued that a moving electron interacts with its own field, whose energy E₀ endows it with an apparent mass given by E₀ = (3/4)mc². All this was based on classical electrodynamics, assuming an ether theory. “Hasenöhrl, Poincaré, Abraham and others suggested that there must be an inertial mass associated with electromagnetic energy, even though they may have disagreed on the constant of proportionality”, says Boughn.

Robert Crease, a philosopher and historian of science at Stony Brook University in New York, agrees. “Historians often say that, had there been no Einstein, the community would have converged on special relativity shortly”, he says. “Events were pushing them kicking and screaming in that direction.” Boughn and Rothman’s work, he says, shows that Hasenöhrl was among those headed this way.

Hasenöhrl approached the problem by asking whether a black body emitting radiation changes in mass when it is moving relative to the observer. He calculated that the motion adds a mass of (8/3)E/c², where E is the radiant energy – equivalent to the E = (3/8)mc² of his 1904 paper. The following year he corrected this to (4/3)E/c², or E = (3/4)mc².
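Set side by side, the successive results mentioned here all have the form E = kmc², differing only in the prefactor:

\[
E = \tfrac{3}{8}mc^2 \;\;\text{(Hasenöhrl, 1904)}, \qquad
E = \tfrac{3}{4}mc^2 \;\;\text{(his 1905 correction; also Abraham's electron)}, \qquad
E = mc^2 \;\;\text{(Einstein, 1905)}.
\]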

However, no one has properly studied Hasenöhrl’s derivation to understand his reasoning or why the prefactor is wrong, say Boughn and Rothman. That’s not easy, they admit. “The papers are by today’s standards presented in a cumbersome manner and are not free of error. The greatest hindrance is that they are written from an obsolete world view, which can only confuse the reader steeped in relativistic physics.” Even Enrico Fermi apparently did not bother to read Hasenöhrl’s papers properly before concluding wrongly that the discrepant 3/4 prefactor was due to the electron self-energy identified by Abraham.

“What Hasenöhrl really missed in his calculation was the idea that if the radiators in his cavity are emitting radiation, they must be losing mass, so his calculation wasn't consistent”, says Rothman. “Nevertheless, he got half of it right. If he had merely said that E is proportional to m, history would probably have been kinder to him.”

But if that’s the case, where does relativity come into it? Actually, it doesn’t. While Einstein’s celebrated 1905 paper ‘On the electrodynamics of moving bodies’ clearly laid down the foundations of relativity by abandoning the ether and making the speed of light invariant, his derivation of E=mc² did not depend on those assumptions. You can get the right answer with classical physics, says Rothman, all in an ether theory without c being either constant or the limiting speed. “Although Einstein begins relativistically, he approximates away all the relativistic bits, and you are left with what is basically a classical calculation.”

Physicist Clifford Will of Washington University in St Louis, a specialist on relativity, considers the preprint “very interesting”. Boughn and Rothman “are well regarded physicists”, he says, and as a result he “tend[s] to trust their analysis”. However, the controversies that have previously arisen over the issue of priority perhaps account for some of the reluctance of historians of physics to comment when contacted by Physics World.

Did Einstein know of Hasenöhrl’s work? “I can't prove it, but I am reasonably certain that Einstein must have done, and just decided to do it better”, says Rothman. But failure to cite it was not inconsistent with the conventions of the time. In any event, Einstein asserted his priority for the mass-energy relationship when this was challenged by Johannes Stark (who credited it in 1907 to Max Planck). Both Hasenöhrl and Einstein were at the famous first Solvay conference in 1911, along with most of the other illustrious physicists of the time. “One can only imagine the conversations”, say Boughn and Rothman.

Tuesday, August 02, 2011

A philosophical question

Here’s my latest Crucible column for Chemistry World.
___________________________________________________________________________________

“Philosophy is dead” is an assertion that, coming from most people, would be dismissed as idle, unconsidered, even meaningless. (What, all of it? Political philosophy? Moral philosophy? The philosophy of music?) But when Stephen Hawking announced this in his recent book with Leonard Mlodinow, The Grand Design, it was greeted as the devastating judgement of a sage and sent philosophers scurrying to the discussion boards to defend their subject (more properly, to defend Hawking’s presumed target of metaphysics).

Yet many chemists may be unaware that a philosophy of chemistry exists in the first place. Isn’t chemistry about practical, tangible matters, or – when theoretical issues are concerned – questions of right and wrong, not the fuzzy and abstract issues popularly associated with philosophy? On the contrary, at least two journals (Hyle and Foundations of Chemistry) and the International Society for the Philosophy of Chemistry have insisted for some years that there are profound chemical questions of a philosophical nature.

These questions might not seem quite as urgent as how to make stereoselective carbon-carbon bonds, but they should at the very least make chemists reflect on the nature of their daily craft. What is the ontological status of ‘laws’ of chemistry? To what extent are molecular structures metaphorical? What’s more, the philosophy of chemistry impinges directly on chemistry’s public image. As Eric Scerri, editor-in-chief of Foundations of Chemistry, says, “Most philosophers of science believe that chemistry has been reduced to physics and is therefore of no fundamental interest. They believe that chemistry has no ‘big ideas’ to compare with quantum mechanics and relativity in physics and Darwin’s theory in biology” [1].

The philosophy of chemistry excites lively, often impassioned debate. Those unquiet waters have recently been agitated by an extensive overview of the topic published in the Stanford Encyclopedia of Philosophy, a widely used online reference source, by Michael Weisberg, Paul Needham and Robin Hendry, all three respected philosophers of science [2]. It’s an ambitious affair, accommodating everything from the evolution since ancient times of theories of matter to the nature of the chemical bond and interpretations of quantum theory. The piece has proved controversial because the authors have presented points of view on several of these issues that are not universally shared.

Much of the debate hinges on the fact that the concepts and principles used by chemists – the notion of elements, molecules, bonds, structure, or the idea much debated by these philosophers that ‘water is H₂O’ – lack philosophical rigour. Arguments about whether gaseous helium contains atoms or molecules, or whether the element sodium refers to a grey metal or to atoms with 11 protons, are frequently rehearsed in lab coffee rooms. That these hardly affect the practicalities of chemical synthesis doesn’t detract from their validity as philosophical conundrums.

Take, for example, Needham’s claim that isotopes of the ‘same’ element should in fact be considered different elements [3]. Clearly there is rather little difference between ³⁵Cl and ³⁷Cl, but if ‘element’ is pinned to chemical identity, are H and D really the ‘same’? Indeed, does not even the tiniest isotope effect blur any strict definition based on chemical behaviour rather than proton number? Perhaps the Austrian chemist Friedrich Paneth was right to regard the notion of an element as something ‘transcendental’.

Even more controversially, Hendry takes a view long developed by him and others such as Guy Woolley that the concept of molecular structure is mere metaphor, rendered logically incoherent by quantum mechanics. To distinguish methanol from dimethyl ether, we need to first put the nuclei in position by hand and then apply the Born-Oppenheimer approximation to the quantum equations so that only the electrons move. Without this approximation, the raw Hamiltonian for nuclei and electrons is identical for both isomers.
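To see why, consider the molecular Coulomb Hamiltonian – written here in generic textbook form and atomic units, as an illustration rather than in Hendry’s own notation – for nuclei of charge Z_i and mass M_i at positions R_i and electrons at positions r_j:

\[
\hat{H} = -\sum_{i} \frac{\nabla_{i}^{2}}{2M_{i}}
- \frac{1}{2}\sum_{j} \nabla_{j}^{2}
+ \sum_{i<i'} \frac{Z_{i}Z_{i'}}{|\mathbf{R}_{i}-\mathbf{R}_{i'}|}
- \sum_{i,j} \frac{Z_{i}}{|\mathbf{R}_{i}-\mathbf{r}_{j}|}
+ \sum_{j<j'} \frac{1}{|\mathbf{r}_{j}-\mathbf{r}_{j'}|}.
\]

Methanol and dimethyl ether are both C₂H₆O – the same set of nuclei and the same 26 electrons – so this expression is literally identical for the two isomers; their distinct structures appear only once the nuclear positions are clamped by hand.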

Hendry asserts that the isomers exist as quantum superpositions, from which a particular isomer emerges only when the wavefunction is collapsed by observation. Scerri argues [4], in contrast, that this collapse happens naturally and inevitably because of environment-induced decoherence. Even if so, the image is disconcerting: molecular structures exist because of their environment, not as intrinsic entities. What of molecules isolated in interstellar space, almost a closed system? Regardless of the position one takes, it remains unclear how, or if, molecular structure can be extracted directly from quantum theory, as opposed to being rationalized post hoc – relative energies can be computed, for sure, but that’s not the same. Ultimately these questions might have answers in physics; at least for the moment, they are philosophical.

References
1. E. R. Scerri, J. Chem. Ed. 77, 522–526 (2000).
2. M. Weisberg, P. Needham & R. Hendry, ‘Philosophy of Chemistry’, Stanford Encyclopedia of Philosophy.
3. P. Needham, Stud. Hist. Phil. Sci. 39, 66–77 (2008).
4. E. R. Scerri, Found. Chem. 13, 1–7 (2011).

Friday, July 29, 2011

The reason why not

I just discovered that this book review I wrote recently for The National, a UAE newspaper, was published back in early June. It doesn’t seem to have been altered much in the editing, but here it is anyway.
__________________________________________________________________________

The Reason Why:
The Miracle of Life on Earth

by John Gribbin
Allen Lane, 2011; ISBN 978 1 846 14327 4
219 pages
£20.00

In 1950 the Italian physicist Enrico Fermi was walking to lunch at Los Alamos with former colleagues from the Manhattan Project. They were discussing a recent spate of UFO reports, and as they sat down to eat, Fermi challenged the company. If the cosmos is full of space-faring aliens, he said, “Where is everybody?”

In The Reason Why, veteran science writer John Gribbin answers Fermi’s ‘paradox’ by saying that we have seen no sign of aliens because they don’t exist. Not, at least, in our Milky Way Galaxy – and beyond that, the distances are so vast that it is hardly worth asking. “We are alone, and we had better get used to the idea”, he concludes.

The likelihood of intelligent life on other planets has been conditioned since the 1960s by the thinking of Cornell astronomer Frank Drake, whose eponymous equation divides the question into its component parts, the probabilities of each of which one might conceivably hope to quantify or at least estimate: how many stars have planets, how many are Earth-like, and so on.
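For reference, the standard form of the Drake equation is

\[
N = R_{*}\, f_{p}\, n_{e}\, f_{l}\, f_{i}\, f_{c}\, L,
\]

where N is the number of civilizations in the galaxy whose signals we might detect, R_* the average rate of star formation, f_p the fraction of stars with planets, n_e the number of potentially life-supporting planets per star with planets, f_l, f_i and f_c the fractions of those on which life, intelligence and detectable technology respectively arise, and L the length of time for which such civilizations transmit detectable signals.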

Depending on your taste, the Drake equation is either a logical way of getting purchase on a profound question, or an attempt to manufacture knowledge from ignorance. In trying to get a meaningful number by multiplying together quantities that are very big, very small and very uncertain, the Drake equation seems more like guesswork disguised as maths.

Gribbin, however, asserts that just about every one of the necessary conditions for intelligent life to emerge has a low, perhaps minuscule, probability. Their combination then makes it highly unlikely that we have any galactic neighbours eagerly trying to make contact. For instance, only a relatively small part of our galaxy is habitable – the crowded interior is bathed in sterilizing radiation from black holes and supernovae. Only stars of a certain age have enough heavy chemical elements to make Earth-like planets and dwellers thereon. Only a few such stars lack partners that pull planetary orbits into highly elongated shapes, making climate variations unendurable.

The specialness of the Earth is particularly apparent in the make-up of our solar system. For example, we are protected from more frequent impacts of asteroids and comets, like the one that seems to have sent the dinosaurs to extinction 65 million years ago, by the immense size of Jupiter, more a failed star than a planet, whose gravity sucks up these stray objects. One such, comet Shoemaker-Levy 9, ploughed into the giant planet in 1994, leaving a scar the size of the Earth.

Gribbin is especially good on the benign effect of the Moon. The Earth is unusual in having a moon so large in relation to the planet itself; the Moon is now believed to have been created when a proto-Earth stumbled into another planet-like object called Theia, with which it shared an orbit, 4.5 billion years ago. The rocky debris clumped to form the Moon, while the traumatized, molten Earth swallowed Theia’s iron core, giving it the unusually large core it has today – the source of the strong geomagnetic field that deflects harmful particles streaming from the Sun. This impact probably left the Earth spinning fast (a Venusian day, by contrast, lasts the best part of an Earthly year) and tilted on its axis, from which our seasons ensue. What’s more, the Moon’s gravity stops this tilt from being righted by the influence of Jupiter. Before the debris coalesced into the lunar globe, its gravity created awesome tides on the more rapidly spinning Earth that rose and fell several kilometres every two hours or so. Though the barren Moon is too light to hold an atmosphere of its own, life on Earth would be very different – perhaps impossible – without it.

This ‘rare Earth’ case has been made before, but Gribbin gives the arguments a fresh shine. Yet he assembles them in a legalistic rather than strictly scientific manner. That’s to say, he marshals (generally impeccable) science to argue his case rather than objectively to investigate the possibilities. For example, he predicates a discussion of the ‘habitable zone’ of the solar system – a crucial part of the argument – on the claim that “it is reasonable to assume that ‘life as we know it’ does require the presence of liquid water.” That Trekkie-inspired ‘as we know it’ is back-covering, and reminds me of a conference I once attended that was convened to ask if life in the cosmos could exist without water. Speaker after speaker insisted that it could not, since that never happens on Earth, which was of course merely a statement that life adapted to water can’t do without it. Now, there are arguments why water might be essential for life anywhere, but they are subtle and not the ones Gribbin casually gives. More to the point, they are still arm-waving and do nothing to dent a counter-claim that it is reasonable to suggest that non-aqueous life is possible.

Such solipsism pervades the book, and is implicit in Fermi’s paradox to begin with. It supposes that intelligent life will think as we do now, with a determination to find and populate other inhabited worlds – and moreover, will have already done so in a way that leaves a mark so prominent that we’ll find it within the first 50 years (a comically short span in cosmic terms) of looking. Are even we so determined? It may be unwise to conclude from the parlous state of human space exploration that such ambitions are just a phase civilizations quickly grow out of, but the current situation hardly suggests the opposite. Worse, since spaceflight seems increasingly likely to be a private enterprise, Gribbin implies that mega-rich philanthropists with a penchant for spaceflight, like Virgin’s Richard Branson and Microsoft’s Paul Allen, follow inexorably from the laws of physics.

The same historical determinism colours his belief that space-faring civilizations are a one-shot affair on habitable planets. If we foul up after having used all of the surface deposits of fossil fuels, he says, we’ll never again be able to claw our way out of a state of barbarism. But this assumes that apocalypse comes only after the oil and coal are exhausted, and moreover that a re-emergent civilization would stall not at the Stone Age but at the pre-industrial Enlightenment. On this definition, a civilization capable of producing Aristotle, let alone Newton, doesn’t qualify as intelligent. The challenge of getting from Newton to Neil Armstrong without plentiful oil is a good pretext for a science-fiction novel, but it hardly proves anything else.

Gribbin’s account of the chance events that allowed humans to evolve from slime is particularly unpersuasive as a basis for broader conclusions. It sounds increasingly like the kind of enumeration of contingency and coincidence that invites us to marvel at how ‘unlikely’ it is that we ever met our spouses. Once Gribbin starts invoking a highly speculative cometary impact on Venus to explain the Cambrian explosion, in which complex life diversified about 540 million years ago, one senses that he is determinedly picking out a precarious path to a foregone conclusion.

None of this is to say that The Reason Why is a bad book. On the contrary, it is as lucid, well researched and enjoyable as Gribbin always is, and supplies a peerless guide to the way stars and planets are formed. And as a polemic, it is entirely justified in being selective with the evidence. Besides, many of Gribbin’s astrophysical arguments for the rarity of life are robust, and as such they make a convincing case that the Galaxy is not teeming with life that is loftily or mischievously ignoring us.

Yet the book fails to offer any philosophical perspective. The specialness of humanity has historically been asserted almost always as a theological issue, whether to counter Copernicus or Darwin. If Gribbin is right and we just got phenomenally lucky – if the laws of physics really are so miserly about allowing matter to become self-aware – this is sufficiently peculiar to warrant more comment. Even atheists might then forgive theologians for taking an interest, just as they do in the ‘fine-tuning’ that seemingly makes physical laws exquisitely geared to support matter and life in the first place. Gribbin can suggest only that, if we’re alone in the galaxy, we have an even greater responsibility to our planet. It would be nice to think so, but see how far that gets you at the next climate summit.

Wednesday, July 20, 2011

No fit state

I’ve got a piece in the latest issue of Prospect (not yet online) about the recent report on the state of the oceans from the IPSO project. Here’s what the full draft looked like.
__________________________________________________________

“Unprecedented… shocking… what we face is a globally significant extinction event.” These judgements on the state of the global oceans, pronounced by the scientists who attended a recent workshop of the International Programme on the State of the Ocean (IPSO), sound truly scary. The future of the ocean’s ecosystems looks “far worse than we had realised”, says IPSO’s director, Oxford zoologist Alex Rogers. “If the ocean goes down, it’s game over.”

When the IPSO report was released in June, it made apocalyptic headlines. But such is the prevailing public mood on climate and environmental change that strong words may do little to alter opinions. Sceptics will dismiss them as scaremongering in a bid for research funding, while they will fuel righteous indignation among those already convinced of impending catastrophe. And if you haven’t already made up your mind, this seems an invitation to paralysing despair.

So how seriously should we take the IPSO report? According to Hugh Ducklow, director of the Ecosystems Center at Woods Hole, Massachusetts, one of the US’s most prestigious marine biology laboratories, it isn’t exaggerating. “If anything”, says Ducklow (who is not a part of IPSO), “the true state of the ocean is likely worse than the report indicates.”

The IPSO workshop, held in Oxford in April, brought together leading marine scientists, legal experts and NGO representatives. They considered threats to ocean ecosystems ranging from over-exploitation of fish stocks to acidification of the waters, caused by increased amounts of dissolved carbon dioxide (CO₂) as atmospheric levels of this greenhouse gas rise. Many fish populations have been literally decimated – even since the report was released, a paper in Science says that the state of some species of high commercial value, such as bluefin tuna, is worse than thought. Almost half of the world’s coral reefs, the most diverse ecosystems on the planet, have disappeared in the past 50 years, and the rest are now under severe threat because of overfishing, global warming and ocean acidification. But perhaps the greatest concern rests with the unglamorous plankton on which the entire food chain depends. The microscopic plants (phytoplankton) that bloom seasonally in the upper ocean dictate the cycling of carbon, particularly CO₂, between the ocean and atmosphere. But some phytoplankton are toxic, and when their growth is artificially stimulated by nutrients in fertilizers and sewage (a process called eutrophication), they can poison their environment. Worse, bacteria feeding on the decaying phytoplankton may use up all the available oxygen in the water, turning it into a dead zone for other life. In the longer term oxygen depletion (hypoxia or, if total, anoxia) is also caused in deep water by warming of the upper ocean, which suppresses the circulation of oxygen-rich surface water to the depths.

It’s not just marine biology that stands at risk. The melting of Arctic sea ice has been far faster than expected – summer at the North Pole could be essentially ice-free within 30–40 years. This doesn’t affect sea level, but it is disastrous for Arctic life, and the influx of fresh water could change patterns of ocean circulation. The melting of grounded ice from Antarctica and Greenland, however, is also proceeding apace – at least as quickly as the worst-case predictions of climate models. Coupled to the expansion of water caused by warming, this means that sea-level rise is also tracking the worst-case models: it could reach four feet or so by 2100, which would redraw the map of many coastlines.

Perhaps most troubling of all, the IPSO group concluded that these individual processes seem to exacerbate one another. For example, coral reefs damaged by ocean warming are further weakened by pollution and the overfishing of reef populations, making them even more fragile. The worry is that the combination of stresses could push ecosystems to a tipping point at which they collapse catastrophically.

Such things have happened naturally several times in the distant past. The geological record clearly shows at least five global mass extinctions, in which most species all around the planet vanished, as well as many more minor extinction events. The reasons for them are still not fully understood, but the prevailing ocean conditions in which they occurred are similar in some ways – warming, anoxia and acidification – to those we are seeing now. “We now face losing marine species and entire marine ecosystems, such as coral reefs, in a single generation”, the IPSO report avers. “Unless action is taken now, the consequences of our activities are at high risk of causing the next globally significant extinction event in the ocean.”

Sounds bad? Ducklow thinks that feedbacks and synergies could make things even worse. “Working in Antarctica, we’re seeing profound changes rippling through the food chain and affecting biogeochemical processes such as CO₂ uptake.” Ducklow admits that any conclusions he and his colleagues have drawn so far, like those of the IPSO team, are based on inadequate observations – over too small a spatial scale, and for too short a time. But his informed hunch is that this merely means we’re not seeing the worst of it. “I expect that as we pass through another decade, with increased concern and surveillance, we will discover things are worse, not better, than we think.”

Ducklow isn’t alone in confirming that the IPSO report’s warnings are not exaggerated. “I agree that the oceans have been greatly impacted by human activity”, says Andrew Watson at the University of East Anglia, one of the foremost UK experts on the interactions of oceans and climate. “They have changed enormously and alarmingly fast over the past 100 years or so.” In Watson’s view, analogies with past mass extinctions are appropriate. “We suspect that at past crises, the real killer was widespread ocean anoxia. This is something that eventually the changes brought about by humans, particularly increased eutrophication and global warming, could bring on.”

But has IPSO pitched its warning wisely? The team seems to have sided with the view of some climatologists, such as NASA scientist James Hansen, that concerns will be heeded only if voiced forcefully, even stridently. Watson isn’t convinced. “In human terms such a change to the life-support systems of the Earth is still a long way in the future. Such disasters unfold over very long time scales compared to a human life: thousands or tens of thousands of years.” So while Watson feels that “the report authors state their case that way with the best of intentions” and agrees on the urgent need for action, he feels uncomfortable with some of the alarming statements. “We create a false impression if we say that we have to act tomorrow to save the Earth or ‘it will be game over’. I don’t find that kind of environmental catastrophism very helpful because it simply fuels a bad-tempered ideological and political argument instead of a well-informed scientific one.”

It’s an irresolvable dilemma forced on the scientists by manufactured controversy and political inaction: risk either being ignored or damned as alarmists. However, the tone of the report is a side issue; all agree on the necessary response. “What’s really needed is a long-term plan to reduce our impact on the oceans,” says Watson. Ducklow insists that this must include not just serious and immediate regulation of fishing, pollution and carbon emissions, but “a comprehensive, global ocean observation system, including ecological and biogeochemical measurements, to determine the current and evolving state of the ocean’s health.” Any suggestion that this is merely a gambit for more research funds now deserves nothing but scorn.

Monday, July 18, 2011

Body shock

Earlier this month I went to a discussion about SciArt – more specifically, BioArt – at the GV Art gallery in London. Debates about science and art can all too readily become exercises in navel gazing, but this one wasn’t, thanks to the interesting folks involved. I’ve written a piece about it for the Prospect blog, and since it is available essentially unedited and for free, I won’t copy the text here.

Thursday, July 14, 2011

Arsenic and old wallpaper

Here’s my Crucible column for the July issue of Chemistry World. We haven’t heard the end of this story, I’m sure.
_________________________________________________

Was William Morris, socialist and utopian prophet of environmentalism, a hypocrite? That uncomfortable possibility was raised in 2003 by biochemist Andrew Meharg of the University of Aberdeen [1]. Meharg described chemical analysis of one of the famous floral wallpapers produced by Morris’s company in the mid-nineteenth century, which showed the foliage to be printed using an arsenic-containing green pigment – either Scheele’s Green (copper arsenite) or Emerald Green (copper acetoarsenite). A rather more incriminating fact was that the arsenic surely came from the Devon Great Consols mines (originally copper mines) owned by Morris’s family, a business of which Morris himself was a director until 1876. Morris’s immense wealth came partly from these mines, whose operations polluted the surrounding land and left derelict flues that are still hazardous today.

The clincher seemed to be that Morris knew of the claims by physicians that arsenic was toxic, but casually dismissed them. “As to the arsenic scare”, he wrote to the dyer Thomas Wardle in 1885, “a greater folly it is hardly possible to imagine… My belief about it all is that the doctors find their patients ailing, don’t know what’s the matter with them, and in despair put it down to the wall papers.”

Once Meharg expanded on this story in a book [2], it seemed that Morris’s reputation was tarnished irreparably. But now the accusations have been challenged by Patrick O’Sullivan of the William Morris Society, who asserts that the situation is by no means so clear-cut [3].

You might wonder if the William Morris Society offers an unbiased voice. But who else would be sufficiently motivated, not to mention well placed, to re-examine what is now widely assumed to be a cut-and-dried conviction? In any event, let’s consider the facts. O’Sullivan points out that the ‘arsenic scare’ of the nineteenth century by no means reflected the consensus of the medical community. Not until 1892 was the odour of arsenic wallpapers linked to the formation of a volatile arsenic compound by the action of a mould that grows in damp conditions. The gas was correctly identified as trimethylarsine only in the 1930s. And a recent review states that this gas is not highly toxic if inhaled, and is unlikely to be produced in significant quantities by the mould anyway [4]. So it isn’t clear that poisoning from arsenic-printed wallpapers was at all common in the nineteenth century – Morris may have been right to suggest that this was a convenient explanation for the multitude of ailments that afflicted people, especially children, during that age.

This, however, does not really absolve Morris. One might expect a man of his espoused principles to have taken seriously any suggestion that his company was making poisonous products, especially considering that the toxicity of arsenic itself was well established – Carl Wilhelm Scheele had felt obliged to reveal this ingredient of his green pigment in the 1770s for that very reason. O’Sullivan points out that Morris resigned as director of Devon Great Consols and sold his shares in the business two years before becoming politically active and six years before putting forward his socialist views. Perhaps, then, he was no hypocrite but realised that his position was no longer consistent with his new ideals?

But that remains a generous interpretation. That Morris was still so confidently denying the dangers of arsenic greens in 1885, without any sound scientific basis either way, somewhat suggests a determination to deny responsibility. And while Morris seems to have treated his workers well, the letter O’Sullivan quotes to justify why he did not make the company a socialist collective is an all-too-familiar refrain from hard-line socialists and Marxists: that such ‘palliatives’ merely delay the revolution. Quite aside from the conditions of workers in the wallpaper works, those in the mines (where arsenic was collected as the white trioxide, condensed from vapour) were undoubtedly awful: the safety precautions were crude in the extreme, and arsenic poisoning in copper mines had been known since at least the Middle Ages.

Most troubling of all is Morris’s silence on the matter. If he changed his mind about his business activities, should one not expect some sign of, if not remorse, then at least reflection? O’Sullivan has made a good argument for re-opening the case, but the suspicion lingers that Morris was no more scrupulous than most of us in examining his conscience.

References

1. A. Meharg, Nature 423, 688 (2003).
2. A. Meharg, Venomous Earth (Macmillan, London, 2005).
3. P. O’Sullivan, William Morris Society Newsletter, Spring 2011. Available here.
4. W. R. Cullen & R. Bentley, J. Environ. Monit. 7, 11–15 (2005).

Tuesday, July 05, 2011

The (digital) art of chemistry

Here’s a bit of naked advertising, because it’s for a good cause. The competition below, organized by ASCI in New York, should be fun if it can draw the right calibre of entries. And since I am a judge, that’s clearly what I hope. ASCI has been described to me by a very reliable witness in the following terms: “they are the largest and most active group of SciArt people and have been doing wonderful work for 20 or so years now.” So go on: give it a shot, and/or spread the word.
________________________________________________________________

Announcing the Open Call for...

"DIGITAL2011: The Alchemy of Change"
An international digital print competition/exhibition to be held at the New York Hall of Science, September 3, 2011 - February 5, 2012

Organized by Art & Science Collaborations, Inc. (ASCI)

DEADLINE: July 17, 2011
GUIDELINES here

CO-JURORS:
Robert Devcic, owner-director of GV Art London gallery
Philip Ball, writer and noted author of popular science books

INTRODUCTION
Humans, animals, insects, trees, plants, oceans, and air – indeed, all that we see, taste, smell, touch, and breathe, contain molecular processes of physical transformation; a dynamic dance of change. This magic of transition, called alchemy by our earliest scientists, became the science of chemistry. It describes both the physical structure and characteristic actions of matter. It allows for all organic and inorganic change to take place – brain synapses to fire, oxygen to be formed from carbon dioxide and water during photosynthesis; the transformation of gases in our solar system; along with the ability of proteins to turn our genes on/off. If you extend your imagination beyond the epithelial surface of your body, or into the ether that carries cosmic dust, or even into your kitchen, chemistry can inspire wonder. Like a fabulous menu of concocted primordial soups, when exposed to changes in temperature, pressure, or speed, chemistry can create a stick of dynamite or a magnificent soufflé!

For this exhibition, we celebrate the International Year of Chemistry by inviting artists and scientists to show us their vision of this deeply fundamental, magical enabler of life called chemistry.

Friday, June 24, 2011

Movie characters mimic each other's speech patterns


Here’s my latest news story for Nature News.
****************************************************
Scriptwriters have internalized the unconscious social habits of everyday conversations.

Quentin Tarantino's 1994 film Pulp Fiction is packed with memorable dialogue — 'Le Big Mac', say, or Samuel L. Jackson's biblical quotations. But remember this exchange between the two hitmen, played by Jackson and John Travolta?

Vincent (Travolta): "Antwan probably didn't expect Marsellus to react like he did, but he had to expect a reaction".
Jules: "It was a foot massage, a foot massage is nothing, I give my mother a foot massage."

Computer scientists Cristian Danescu-Niculescu-Mizil and Lillian Lee of Cornell University in Ithaca, New York, see the way Jules repeats the word 'a' used by Vincent as a key example of 'convergence' in language. "Jules could have just as naturally not used an article," says Danescu-Niculescu-Mizil. "For instance, he could have said: 'He just massaged her feet, massaging someone's feet is nothing, I massage my mother's feet.'"

The duo show in a new study that such convergence, which is thought to arise from an unconscious urge to gain social approval and to negotiate status, is common in movie dialogue. It "has become so deeply embedded into our ideas of what conversations 'sound like' that the phenomenon occurs even when the person generating the dialogue [the scriptwriter] is not the recipient of the social benefits", they say.

“For the last forty years, researchers have been actively debating the mechanism behind this phenomenon”, says Danescu-Niculescu-Mizil. His study, soon to be published in a workshop proceedings [1], cannot yet say whether the ‘mirroring’ tendency is hard-wired or learnt, but it shows that it relies neither on the spontaneous prompting of another individual nor on a genuine desire for his or her approval.

“This is a convincing and important piece of work, and offers valuable support for the notion of convergence”, says philologist Lukas Bleichenbacher at the University of Zurich in Switzerland, a specialist on language use in the movies.

The result is all the more surprising given that movie dialogue is generally recognized to be a stylized, over-polished version of real speech, serving needs such as character and plot development that don’t feature in everyday life. “The method is innovative, and kudos to the authors for going there”, says Howie Giles, a specialist in communication at the University of California at Santa Barbara.

"Fiction is really a treasure trove of information about perspective-taking that hasn't yet been fully explored," agrees Molly Ireland, a psychologist at the University of Texas at Austin. "I think it will play an important role in language research over the next few years."

But, Giles adds, "I see no reason to have doubted that one would find the effect here, given that screenwriters mine everyday discourse to make their dialogues appear authentic to audiences".

That socially conditioned speech becomes an automatic reflex has long been recognized. “People say ‘oops’ when they drop something”, Danescu-Niculescu-Mizil explains. “This probably arose as a way to signal to other people that you didn't do it intentionally. But people still say ‘oops’ even when they are alone! So the presence of other people is no longer necessary for the ‘oops’ behaviour to occur – it has become an embedded behavior, a reflex.”

He and Lee wanted to see if the same was true for conversational convergence. To do that, they needed the seemingly unlikely situation in which the person generating the conversation could not expect any of the supposed social advantages of mirroring speech patterns. But that’s precisely the case for movie script-writers.

So the duo looked at the original scripts of about 250,000 conversational exchanges in movies, and analysed them to identify nine previously recognized classes of convergence.

They found that such convergence is common in the movie dialogues, although less so than in real life – or, standing proxy for that here, in actual conversational exchanges held on Twitter. In other words, the writers have internalized the notion that convergence is needed to make dialogue ‘sound real’. “The work makes a valid case for the use of ‘fictional’ data”, says Bleichenbacher.
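By way of illustration only – this is a minimal sketch of the general idea, not the authors’ code, and the word classes and sample exchanges below are invented placeholders – convergence on a given class of function words can be quantified by asking whether a reply is more likely to contain that class when the preceding utterance does:

# Toy convergence measure over (prompt, reply) exchange pairs.
# The word classes and data below are illustrative placeholders.

FUNCTION_WORD_CLASSES = {
    "article": {"a", "an", "the"},
    "pronoun": {"i", "you", "he", "she", "it", "we", "they"},
}

def uses(class_words, utterance):
    # True if the utterance contains any word from the class.
    return bool(class_words & set(utterance.lower().split()))

def convergence(exchanges, class_words):
    # P(reply uses class | prompt does) minus P(reply uses class | prompt doesn't):
    # a positive value means repliers 'mirror' the prompter's usage.
    with_c, without_c = [], []
    for prompt, reply in exchanges:
        bucket = with_c if uses(class_words, prompt) else without_c
        bucket.append(uses(class_words, reply))
    p_given = sum(with_c) / len(with_c) if with_c else 0.0
    p_base = sum(without_c) / len(without_c) if without_c else 0.0
    return p_given - p_base

exchanges = [
    ("he had to expect a reaction", "it was a foot massage"),
    ("where is everybody", "nobody knows"),
]
print(convergence(exchanges, FUNCTION_WORD_CLASSES["article"]))

On this toy data the measure is positive for articles, echoing the Vincent–Jules exchange quoted above; the real study makes this kind of comparison across nine word classes and hundreds of thousands of exchanges.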

Not all movies showed the effect to the same extent. “We find that in Woody Allen movies the characters exhibit very low convergence”, says Danescu-Niculescu-Mizil – a reminder, he adds, that “a movie does not have to be completely natural to be good.”

Giles remarks that, rather than simply showing that movies absorb the unconscious linguistic habits of real life, there is probably a two-way interaction. “Audiences use language devices seen regularly in the movies to shape their own discourse”, he points out. In particular, people are likely to see what types of speech ‘work well’ in the movies in enabling characters to gain their objectives, and copy that. “One might surmise that movies are the marketplace for seeing what’s on offer, what works, and what needs purchasing and avoiding in buyers’ own communicative lives”, Giles says.

Danescu-Niculescu-Mizil hopes to explore another aspect of this blurring of fact and fiction. “We are currently exploring using these differences to detect ‘faked’ conversations”, he says. “For example, I am curious to see whether some of the supposedly spontaneous dialogs in so-called ‘reality shows’ are in fact all that real.”

1. C. Danescu-Niculescu-Mizil & L. Lee, Proc. ACL Workshop on Cognitive Modeling and Computational Linguistics, Portland, Oregon, 76–87 (Association for Computational Linguistics, 2011). Available as a preprint here.

I received some interesting further comments on the work from Molly Ireland, which I had no space to include fully. They include some important caveats, so here they are:

I think it's important to keep in mind, as the authors point out, that fiction can't necessarily tell us much about real-life dialog. Scripts can tell us quite a bit about how people think about real-life dialog though. Fiction is really a treasure trove of information about perspective-taking that hasn't been fully explored in the past. Between Google books and other computer science advances (like the ones showcased in this paper), it's become much easier to gain access to millions of words of dialog in novels, movies, and plays. I think fiction will play an important role in language and perspective-taking research over the next few years.

Onto their findings: I'm not surprised that the authors found convergence between fictional characters, for a couple of reasons. They mention Martin Pickering and Simon Garrod's interaction alignment model in passing. Pickering and Garrod basically argue that people match a conversation partner's language use because it's easier to reuse language patterns that you've just processed than it is to generate a completely novel utterance. Their argument is partly based on syntactic priming research that shows that people match the grammatical structures of sentences they've recently been presented with – even when they're alone in a room with nothing but a computer. So first of all, we know that people match recently processed language use in the absence of the social incentives that the authors mention (e.g., affection or approval).

Second, all characters were written by the same author (or the same 2-3 authors in some scripts). People have fairly stable speaking styles. So even in the context of scriptwriting, where authors are trying to write distinct characters with different speaking styles, you would expect two characters written by one author with one relatively stable function word fingerprint to use function words similarly (although not identically, if the author is any good).

The authors argue that self-convergence would be no greater than other-convergence if these cold, cognitive features of language processing [the facts that people tend to (a) reuse function words from previous utterances and (b) consistently sound sort of like themselves, even when writing dialog for distinct characters] were driving their findings. That would only be true if authors failed to alter their writing style at all between characters. Adjusting one's own language style when imagining what another person might say probably isn't conscious. It's probably an automatic consequence of taking another person's perspective. An author would have to be a pretty poor perspective-taker for all of his characters to sound exactly like he sounds in his everyday life.

Clearly I'm skeptical about some of the paper's claims, but I would be just as skeptical about any exploration into a new area of research using an untested measure of language convergence (including my own research). I think that the paper's findings regarding sex differences in convergence and differences between contentious and neutral conversations could turn out to be very interesting and should be looked at more closely – possibly in studies involving non-experts. I would just like to look into alternate explanations for their findings before making any assumptions about their results.

Thursday, June 23, 2011

Einstein and his precursors

From time to time, Nature used to receive (and doubtless still does) crank letters claiming that Einstein was not the first to derive E=mc², but that this equation was first written down, after a fashion, by one Friedrich Hasenöhrl, an Austrian physicist with a perfectly respectable, if unremarkable, pedigree and career who was killed in the First World War. This was a favourite ploy of those cranks whose mission in life was to discredit Einstein’s theory of relativity – so much so that I had two such folks discuss it in my novel The Sun and Moon Corrupted. But not until now, while reading Alan Beyerchen’s Scientists Under Hitler (Yale University Press, 1977), did I realise where this notion originated. The idea was put about by Philipp Lenard, the Nobel prizewinner and virulently anti-Semitic German physicist and member of the Nazi party. Lenard put forward the argument in his 1929 book Grosse Naturforscher (Great Natural Researchers), in which he sought to establish that all the great scientific discoveries had been made by people of Aryan-Germanic stock (including Galileo and Newton). Lenard was deeply jealous of Einstein’s international fame, and as a militaristic, Anglophobic nationalist he found Einstein’s pacifism and internationalism abhorrent. It’s a little comical that this nasty little man felt the need to find an alternative to Einstein at all, given that he was violently (literally) opposed to relativity and a staunch believer in the aether. In virtually all respects Lenard fits the profile of the scientific crank (bitter, jealous, socially inadequate, feeling excluded), and he offers a stark (that’s a pun) reminder that a Nobel prize is no guarantee even of scientific wisdom, let alone any other sort. So there we are: all those crank citations of the hapless Hasenöhrl – this is a popular device of the devotees of Viktor Schauberger, the Austrian forest warden whose bizarre ideas about water and vortices led him to be conscripted by the Nazis to make a ‘secret weapon’ – have their basis in Nazi ‘Aryan physics’.

Friday, June 17, 2011

Quantum life

I have a feature in this week’s Nature on quantum biology, and more specifically, on the phenomenon of quantum coherence in photosynthesis. Inevitably, lots of material from the draft had to be cut, and it was a shame not to be able to make the point (though I’m sure I won’t be the first to have made it) that ‘quantum biology’ properly begins with Schrödinger’s 1944 book What is Life? (Actually one can take it back still further, to Niels Bohr: see here.) Let me, though, just add here the full version of the box on Ian McEwan’s Solar, since I found it very interesting to hear from McEwan about the genesis of the scientific themes in the novel.
_______________________________________________________________________________

The fact is, no one understands in detail how plants work, though they pretend they do… How your average leaf transfers energy from one molecular system to another is nothing short of a miracle… Quantum coherence is key to the efficiency, you see, with the system sampling all the energy pathways at once. And the way nanotechnology is heading, we could copy this with the right materials… Quantum coherence in photosynthesis is nothing new, but now we know where to look and what to look at.

These words are lifted not from a talk by any of the leaders in this nascent field but from the pages of Solar, a 2010 novel by the British writer Ian McEwan. A keen observer of science, who has previously scattered it through his novels Enduring Love and Saturday and has spoken passionately about the dangers of global warming, McEwan likes to do his homework. Solar describes the tragicomic exploits of quantum physicist, Nobel laureate and philanderer Michael Beard as he misappropriates an idea to develop a solar-driven method to split water into its elements. The key, as the young researcher who came up with the notion explains, is quantum coherence.

“I wanted to give him a technology still on the lab bench”, says McEwan. He came across Graham Fleming’s research in Nature or Science (he forgets which, but looks regularly at both), and decided that this was what he needed. After ‘rooting around’, he felt there was justification for supposing that a bright postdoc might have had the idea in 2000. It remained to fit that in with Beard’s supposed work in quantum physics. This task was performed with the help of Cambridge physicist Graham Mitchison, who ‘reverse-engineered’ Beard’s Nobel citation, which appears in Solar’s appendix: “Beard’s theory revealed that the events that take place when radiation interacts with matter propagate coherently over a large scale compared to the size of atoms.”

Wednesday, June 15, 2011

The Anglican atheist

To be honest, I already suspected that Philip Pullman, literary darling of militant atheists (no doubt to his chagrin), is more religious than me, a feeble weak-tea religious apologist. But it is nice to have that confirmed in the New Statesman. Actually, ‘religious’ is not the right word, since Pullman is indeed (like me) an atheist. I had thought that ‘religiose’ would do it, but it does not – it means excessively and sentimentally religious, which Pullman emphatically isn’t. The word I want would mean ‘inclined to a religious sensibility’. Any candidates?

Pullman is writing in response to a request from Rowan Williams to explain what he means in calling himself a ‘Church of England atheist’. Pullman does so splendidly. Religion was clearly a formative part of his upbringing, and he considers that he cannot simply abandon that – he is attached to what Martin Rees has called the customs of his tribe, that being the C of E. But Pullman is an atheist because he sees no sign of God in the world. He admits that he can’t be sure about this, in which case he should strictly call himself an agnostic. But I’ve always been unhappy with that view of agnosticism, even though it is why Jim Lovelock considers atheism logically untenable (nobody really knows!). To me, atheism is an expression of belief, or if you like, disbelief, not a claim to have hard evidence to back it up. (I’m not sure what such evidence would even look like…)

What makes Pullman so thoughtful and unusual among atheists (and clearly this is why Rowan Williams feels an affinity with him) is that he is interested in religion: “Religion is something that human beings do and human activity is fascinating.” I agree totally, and that is one reason why I wrote Universe of Stone: I found it interesting how religious thought influenced and even motivated other modes of thought, particularly philosophical enquiry about the world. And this is what is so bleak about the view of people like Sam Harris and Harry Kroto, both of whom have essentially told me that they are utterly uninterested in why and how people are religious. They just wish people weren’t. They see religion as a collection of erroneous or unsupported beliefs about the physical world, and have no apparent interest in the human sensibilities that sometimes find expression in religious terms. This is a barren view, yes, but also a dangerous one, because it seems to instil a lack of interest in how religions arise and function in society. For Harris, it seems, there would be peace in the Middle East if there were no religion in the world. I am afraid I can find that view nothing other than childish, and it puzzles me that Richard Dawkins, who I think shares some of Pullman’s ‘in spite of himself’ attraction to religion and has a more nuanced position, is happy to keep company with such views.

Pullman is wonderfully forthright in condemning the stupidities and bigotries that exist in the Anglican Church – its sexism and no doubt (though he doesn’t mention it) its homophobia. “These demented barbarians”, he says, “driven by their single idea that God is obsessed by sex as they are themselves, are doing their best to destroy what used to be one of the great characteristics of the Church of England, namely a sort of humane liberal tolerance.” Well yes, though one might argue that this was a sadly brief phase. And of course, for the idea that God is as obsessed with sex as we are, one must ultimately go back to St Augustine, whose loathing of the body was a strong factor in his more or less single-handed erection (sorry) of original sin at the centre of the Christian faith. But according to some religious readers of Universe of Stone, I lack the religious sensibility to appreciate what Augustine and his imitators, such as Bernard of Clairvaux, were trying to express with their bigotry.

Elsewhere in the same issue of New Statesman, Terry Eagleton implies that it is wrong to harp on about such things because religion (well, Christianity) must be judged on the basis of its most sophisticated theology rather than on how it is practised. Eagleton would doubtless consider Pullman’s vision of a God who might be usurped and exiled, or gone to focus on another corner of the universe, or old and senile, theologically laughable. For God is not some bloke with a cosmic crown and a wand, wandering around the galaxies. I’m in the middle here (again?). Certainly, insisting as Harris does that you are only going to pick fights with the religious literalists who take the Bible as a set of rules and a description of cosmic history, and have never given a moment’s thought to the kind of theology Rowan Williams reads, is the easy option. But so, in a way, is insisting that religion can’t be blamed for the masses who practise a debased form of it. That would be my criticism of Karen Armstrong too, who presents a reasonable and benign, indeed even a wise view of Christianity that probably the majority of its adherents wouldn’t recognize as their own belief system. Religion must be judged by what it does, not just what it says. But the same is true, I fear, of science.

Oh dear, and you know, I was being so good in keeping silent as Sam Harris’s book was getting resoundingly trashed all over the place.

Sunday, June 12, 2011

Go with the Flow

Nicholas Lezard has always struck me as a man with the catholic but highly selective tastes (in literature if not in standards of accommodation) that distinguish the true connoisseur. Does my saying this have anything to do with the fact that he has just singled out my trilogy on pattern formation in the Guardian? How can you even think such a thing? But truly, it is gratifying to have this modest little trio of books noticed in such a manner. I can even live with the fact that Nicholas quotes a somewhat ungrammatical use of the word “prone” from Flow (he is surely literary enough to have noticed, but too gentlemanly to mention it).

Monday, June 06, 2011

Musical intelligence

In the latest issue of Nature I have interviewed the composer Eduardo Reck Miranda about his experimental soundscapes, pinned to a forthcoming performance of one of them at London’s South Bank Centre. Here’s the longer version of the exchange.
_______________________________________________

Eduardo Reck Miranda is a composer based at the University of Plymouth in England, where he heads the Interdisciplinary Centre for Computer Music Research. He studied computer science as well as music composition, and is a leading researcher in the field of artificial intelligence in music. He also worked on phonetics and phonology at the Sony Computer Science Laboratory in Paris. He is currently developing human-machine interfaces that can enable musical performance and composition for therapeutic use with people with extreme physical disability.

Miranda’s compositions combine conventional instruments with electronically manipulated sound and voice. His piece Sacra Conversazione, composed between 2000 and 2003, consists of five movements in which string ensemble pieces are combined with pre-recorded ‘artificial vocalizations’ and percussion. A newly revised version will be performed at the Queen Elizabeth Hall, London, on 9 June as part of a programme of electronic music, Electronica III. Nature spoke to him about the way his work combines music with neurology, psychology and bioacoustics.

In Sacra Conversazione you are aiming to synthesize voice-like utterances without semantic content, by using physical modelling and computer algorithms to splice sounds from different languages in physiologically plausible ways. What inspired this work?

The human voice is a wonderfully sophisticated musical instrument. But in Sacra Conversazione I focused on the non-semantic communicative power of the human voice, which is conveyed mostly by the timbre and prosody of utterances. (Prosody refers to the acoustical traits of vocal utterances characterized by their melodic contour, rhythm, speed and loudness.)

Humans seem to have evolved some sort of ‘prosodic fast lane’ for non-semantic vocal information in the auditory pathways of the brain, from the ears to regions that process emotion, such as the amygdala. There is evidence that non-semantic content of speech is processed considerably faster than semantic content. We can very often infer the emotional content and intent of utterances before we process their semantic, or linguistic, meaning. I believe that this aspect of our mind is one of the pillars of our capacity for music.

You say that some of the sounds you used would be impossible to produce physiologically, and yet retain an inherent vocal quality. Do you know why that is?

Let me begin by explaining how I began to work on this piece. I started by combining single utterances from a number of different languages – over a dozen, as diverse as Japanese, English, Spanish, Farsi, Thai and Croatian – to form hundreds of composite utterances, or ‘words’, as if I were creating the lexicon for a new artificial language. I carefully combined utterances by speakers of similar voice and gender and I used sophisticated speech-synthesis methods to synthesise these new utterances. It was a painstaking job.

I was surprised that only about 1 in 5 of these new ‘words’ sounded natural to me. The problem was in the transitions between the original utterances. For example, whereas the transition from, say, Thai utterance A to Japanese utterance B did not sound right, the transition from the same Thai utterance to Japanese utterance C was acceptable. I came to believe that the main reason is physiological. When we speak, our vocal mechanism needs to coordinate a number of different muscles simultaneously. I suspect that even though we may be able to synthesise physiologically implausible utterances artificially, the brain would be reluctant to accept them.

Then I moved on to synthesize voice using a physical model of the vocal tract. I used a model with over 20 variables, each of which roughly represents a muscle of the vocal tract (see E. R. Miranda, Leonardo Music Journal 15, 8-16 (2005)). I found it extremely difficult to co-articulate the variables of the model to produce decent utterances, which explains why speech technology for machines is still very much reliant on splicing and smoothing methods. On the other hand, I was able to produce surreal vocalizations that, while implausible for humans to produce, retain a certain degree of coherence because of the physiological constraints embedded in the model.
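[To give a rough sense of what those splicing and smoothing methods involve: the sketch below is my own illustration, not anything from Miranda’s work. It joins two audio segments with a linear crossfade, which is in essence the move concatenative speech synthesis makes at every join between recorded units; the pure tones standing in for utterances are arbitrary.]

import numpy as np

def splice(a, b, overlap):
    # Join two audio segments, blending the last `overlap` samples of the
    # first into the first `overlap` samples of the second with a linear fade.
    fade = np.linspace(0.0, 1.0, overlap)
    blended = a[-overlap:] * (1 - fade) + b[:overlap] * fade
    return np.concatenate([a[:-overlap], blended, b[overlap:]])

# Two half-second tones at 16 kHz stand in for recorded utterances.
sr = 16000
t = np.arange(sr // 2) / sr
utterance_a = np.sin(2 * np.pi * 220 * t)
utterance_b = np.sin(2 * np.pi * 330 * t)
joined = splice(utterance_a, utterance_b, overlap=800)  # 50 ms crossfade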

Much of the research in music cognition uses the methods of neuroscience to understand the perception of music. You appear to be more or less reversing this approach, using music to try to understand processes of speech production and cognition. What makes you think this is possible?

The choice of research methodology depends on the aims of the research. The methods of cognitive neuroscience are largely aimed at proving hypotheses. One formulates a hypothesis to explain a certain aspect of cognition and then designs experiments aimed at proving it.

My research, however, is not aimed at describing how music perception works. Rather, I am interested in creating new approaches to musical composition informed by research into speech production and cognition. This requires a different methodology, which is more exploratory: do it first and reflect upon the outcomes later.

I feel that cognitive neuroscience research methods force scientists to narrow the concept of music, whereas I am looking for the opposite: my work is aimed at broadening the concept of music. Not that I think the two approaches are incompatible: each could certainly inform and complement the other.

What have you learnt from your work about how we make and perceive sound?

One of the things I’ve learnt is that perception of voice – and, I suspect, auditory perception in general – seems to be very much influenced by the physiology of vocal production.

Much of your work has been concerned with the synthesis and manipulation of voice. Where does music enter into it, and why?

Metaphorically speaking, synthesis and manipulation of voice are only the cogs, nuts and bolts. Music really happens when one starts to assemble the machine. It is extremely hard to describe how I composed Sacra Conversazione, but inspiration played a big role. Creative inspiration is beyond the capability of computers, yet finding its origin is the Holy Grail of the neurosciences. How can the brain draw and execute plans on our behalf implicitly, without telling us?

What are you working on now?

Right now I am orchestrating raster plots of spiking neurons and the behaviour of artificial life models for Sound to Sea, a large-scale symphonic piece for orchestra, church organ, percussion, choir and mezzo soprano soloist. The piece was commissioned by my university, and will be premiered in 2012 at the Minster Church of St Andrew in Plymouth.

Do you feel that the evolving understanding of music cognition is opening up new possibilities in music composition?

Yes, to a limited extent. Progress will probably emerge from the reverse: new possibilities in musical composition contributing to the development of such understanding.

What do you hope audiences might feel when listening to your work? Are you trying to create an experience that is primarily aesthetic, or one that challenges listeners to think about the relationship of sound to language? Or something else?

I would say both. But my primary aim is to compose music that is interesting to listen to and catches the imagination of the audience. I would prefer my music to be appreciated as a piece of art rather than as a challenging auditory experiment. However, if the music makes people think about, say, the relationship of sound to language, I would be even happier. After all, music is not merely entertainment.

Although many would regard your work as avant-garde, do you feel part of a tradition that explores the boundaries of sound, voice and music? Arnold Schoenberg, for example, aimed to find a form of vocalization pitched between song and speech, and indeed the entire operatic form of recitative is predicated on a musical version of speech.

Absolutely. The notion of avant-garde disconnected from tradition is too naïve. If anything, to be at the forefront of something you need the stuff in the background. Interesting discoveries and innovations do not happen in a void.

Sunday, June 05, 2011

Are we all doomed?

That’s the question that New Statesman put to a range of folks, including me. My answer was truncated in the magazine, which is fair enough but somewhat gave the impression that I fully bought into Richard Gott’s Copernican principle. In fact I consider it to be an amusing as well as a thought-provoking idea, but not obviously more than what I depict it as in the second paragraph of my full answer below. So here, for what it’s worth, is the complete answer.
__________________________________________________________________________
There is a statistical answer to this. If you assume, as common sense suggests you should, that there is nothing special about us as humans, then it is unlikely we are among the first or last people ever to exist. A conservative guess at the trajectory of future population growth then implies that humanity has between 5,000 and 8 million years left. Whether that’s a sentence of doom or a reprieve is a matter of taste.
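[For anyone who wants to check the arithmetic, here is the calculation in a few lines of Python – a sketch of Gott’s reasoning as I understand it, where the species age of roughly 200,000 years is my own illustrative assumption rather than a figure from his paper.]

# Gott's 'Copernican' interval: if there is nothing special about when we
# happen to observe the human species, there is a 95% chance we lie in the
# middle 95% of its total lifespan - so the future lasts between 1/39 and
# 39 times the past.
past_years = 200_000  # assumed age of our species (illustrative)
lower = past_years / 39
upper = past_years * 39
print(f"95% confidence: between {lower:,.0f} and {upper:,.0f} years to go")
# roughly 5,000 to 8 million years, as quoted above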

Alternatively, you might choose to say that we know absolutely nothing about our ‘specialness’ in this respect, and so this is just an argument that manufactures apparent knowledge out of ignorance. If you prefer this point of view, it forces us to confront our current apocalyptic nightmares. Will nuclear war, global warming, superbugs, or a rogue asteroid finish us off within the century? The last of these, at least, can be assigned fairly secure (and long) odds. As for the others, prediction is a mug’s game (which isn’t to say that all those who’ve played are mugs). I’d recommend enough pessimism to take seriously the tremendous challenges we face today, and enough optimism to think it’s worth the effort.

Wednesday, May 25, 2011

Steve Jones gets unnatural

I’ve just discovered a review of Unnatural in the Lancet by Steve Jones. As one might expect, he has an interesting and quite particular take on it. It’s one with which, happily, I agree.

Monday, May 23, 2011

Belated Prospect

I realise that I meant to put up earlier my May column from Prospect. Almost time for the June column now, but here goes.
________________________________________________________

The notion that God has an inordinate fondness for beetles, credited to the biologist J. B. S. Haldane, retains a whiff of solipsism. For beetles are not so unlike us: multicellular, big enough to see, and legged. But God surely favours single-celled organisms far more. Beetles and humans occupy two nearby tips on the tree of life, while single-celled life forms have two of the three fundamental branches all to themselves: bacteria and archaea, so alike that it was only in the 1970s that the latter were awarded their own branch. Archaea have a different biochemistry to bacteria – some of them have a metabolism that produces methane – and they are found everywhere, including the human gut.

Our place on the ‘tree of life’ now looks like it may be even more insignificant, for a team at the University of California, working with genomics pioneer Craig Venter, claims to have found hints of a fourth major branch in the tree, again populated only by single-celled organisms. These branches, called domains, are the most basic divisions in the Linnaean system of biological classification. We share our domain, the eukaryotes (distinguished by the way their cells are structured), with plants, fungi and yet more monocellular species.

Like most things Venter is involved in, the work is controversial. But perhaps not half so controversial as Venter’s belief, expressed in a panel debate titled ‘What is life?’ in Arizona in February, that all life on Earth might not even have a common origin. “I think the tree of life is an artefact of some early scientific studies, which are not really holding up”, he said, to the alarm of fellow panellist Richard Dawkins. His suggestion that there may be merely a “bush of life” only made matters worse.

Drop in the ocean

Despite the glee of creationists, there was nothing in Venter’s speculative remark that need undermine the case for Darwinian evolution. The claim of a fourth domain is backed by a little more evidence, but remains highly tentative. The data were gathered on a now famous round-the-world cruise that Venter undertook between 2003 and 2007 on his yacht to gather genomic information about the host of unknown microorganisms in the oceans. The gene-analysing techniques that he helped to develop allow the genes of different organisms to be compared rapidly in order to identify evolutionary relationships between them. By looking at the same group of genes in two different organisms, one can deduce where in the tree of life they shared a common ancestor.
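[As a toy illustration of that last point – mine, not a description of Eisen’s actual method: given the same short stretch of aligned gene sequence from several organisms, simply counting the positions at which each pair differs already ranks how recently they diverged. The sequences below are invented.]

import itertools

# Invented, pre-aligned fragments of 'the same gene' in three organisms.
seqs = {
    "human":     "ATGGCTAAGT",
    "yeast":     "ATGGCAAAGT",
    "bacterium": "ATGTCTGACT",
}

def divergence(a, b):
    # Fraction of aligned positions that differ - a crude proxy for
    # evolutionary distance; real analyses use substitution models.
    return sum(x != y for x, y in zip(a, b)) / len(a)

pairs = [(divergence(s1, s2), n1, n2)
         for (n1, s1), (n2, s2) in itertools.combinations(seqs.items(), 2)]
for d, n1, n2 in sorted(pairs):
    print(f"{n1} vs {n2}: {d:.0%} divergent")
# The least divergent pair shares the most recent common ancestor.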

Using Venter’s data, Jonathan Eisen in California discovered that two families of genes in these marine microbes each seem to show a branch that doesn’t fit on the conventional tree of life. It’s possible that these genes might have been acquired from some unknown forms of virus (viruses are excluded from the tree altogether). The more exciting alternative is that they flag up a new domain. If so, its inhabitants would seem so far to be quite rare – a minor anomaly that, like the Basque language, has persisted quietly for billions of years. But since we are ignorant about perhaps 99 per cent of species on the planet, who knows?

Thinking big

The European Union is looking for big ideas. Really big ones. Its Flagship programme offers to fund two scientific projects to the tune of €1 bn over the next ten years. These must be “ambitious large-scale, science-driven, visionary research initiatives that aim to achieve a scientific breakthrough, provid[ing] a strong and broad basis for future technological innovation and economic exploitation in a variety of areas, as well as novel benefits for society.” In other words, they’ve got to achieve a heck of a lot, and will have truckloads of money to do so.

Six of the applications – all of them highly collaborative, international and interdisciplinary – have now been selected for a year of pilot funding, starting in May. They range from the highly technical to the borders of science fiction.

One promises to develop graphene, the carbon material that won last year’s physics Nobel prize, into a practical fabric for information technologies. Another proposes to truly figure out how the brain works; a third will integrate information technology with medicine to realise the much-advertised ‘personalized medicine’. But these things will all be pursued regardless of the Flagship scheme. More extraordinary, and therefore both more enticing and more risky, are two proposals to develop intelligent, sensitive artificial agents – characterized here as Guardian Angels or Robot Companions – that will help us individually throughout our lives. The sixth proposal (which received the highest rating) is to develop massive computer-simulation systems to model the entire ‘living Earth’, offering a ‘crisis observatory’ that will forecast global problems ranging from wars to economic meltdowns to natural disasters – the latter now all too vivid. The two initiatives to receive full funding will be selected in mid-2012 for launch in 2013.

Friday, May 20, 2011

The chief designer


I have a review of the RSC’s play Little Eagles in Nature this week. Here it is. Too late now to catch the play, I fear, but I thought it was impressive – even though Andrew Billen has some fair criticisms in the New Statesman.
____________________________________________________________________________
Little Eagles
A play by Rona Munro, directed by Roxana Silbert
Hampstead Theatre, London, until 7 May

It is a curious year of anniversaries for the former Soviet military-industrial complex. Fifty years ago the cosmonaut Yuri Gagarin became the first person in space, orbiting the world for 108 minutes in the Vostok spacecraft. And 25 years ago, Reactor 4 of the Chernobyl nuclear plant exploded and sent a cloud of radioactive debris across northern Europe.

One triumph, one failure; each has been marked independently. But while Little Eagles, Rona Munro’s play commissioned by the Royal Shakespeare Company for the Gagarin anniversary, understandably makes no mention of the disaster in Ukraine a quarter of a century later, the connections assert themselves throughout. Most obviously, both events were the fruits of the Cold War nuclear age. The rockets made by Sergei Korolyov, the chief architect of the Soviet space programme and the play’s central character, armed Nikita Khrushchev with intercontinental ballistic missiles before they took Gagarin to the stars.

But more strikingly, we see the space programme degenerate along the same lines that have now made an exclusion zone of Chernobyl. Impossible demands from technically clueless officials and terror at the consequences of neglecting them eventually compromise the technologies fatally – most notably here in the crash of Soyuz 1 in 1967, killing cosmonaut Vladimir Komarov. Gagarin was the backup pilot for that mission, but it was clear that he was by then too valuable a trophy ever to be risked in another spaceflight. All the same, he died a year later during the routine training flight of a jet fighter.

Callous disregard for life marks Munro’s play from beginning to end. We first see Korolyov in the Siberian labour camp where he was sent during Stalin’s purge of the officer class just before the Second World War. As the Soviets developed their military rocket programme, the stupidity of sending someone so brilliant to a virtual death sentence dawned on the regime, and he was freed to resume work several years later. During the 1950s Korolyov wrested control of the whole enterprise, becoming known as the Chief Designer.

Munro’s Korolyov seems to offer an accurate portrait of the man, if the testimony of one of his chief scientists is anything to go by: “He was a king, a strong-willed purposeful person who knew exactly what he wanted… he swore at you, but he never insulted you. The truth is, everybody loved him.” As magnetically played by Darrell D’Silva, you can see why: he is a swaggering, cunning, charming force of nature, playing the system only to realise his dream of reaching the stars. He clearly reciprocates the love of his ‘little eagles’, the cosmonauts chosen with an eye on the Vostok capsule’s height restrictions.

But for his leaders, rocketry was merely weaponry, or a way of demonstrating superiority over their foes in the West. Korolyov becomes a hero for beating the Americans with Sputnik, and then with Vostok. But when the thuggish, foul-mouthed Khrushchev (a terrifying Brian Doherty) is retired in 1964 in favour of the icily efficient Leonid Brezhnev, the game changes. The new leader sees no virtue in Korolyov’s dream of a Mars mission, and is worried instead that the Americans will beat them to the moon. The rushed and bungled Soyuz 1, launched after Korolyov’s death in 1966, was the result.

Out of this fascinating but chewy material, Munro has worked wonders to weave a tale that is intensely human and, aided by the impressive staging, often beautiful and moving. Gagarin’s own story is here a subplot, and not fully worked through – we start to see his sad descent into the vodka bottle, grounded as a toy of the Politburo, but not his ignominious end. There is just a little too much material here for Munro to shoehorn in. But that is the only small complaint in this satisfying and wise production.

What it becomes in the end is a grotesque inversion of The Right Stuff, Tom Wolfe’s account of the US space programme made into an exhilarating movie in 1983. Wolfe’s celebration was a fitting tribute to the courage and ingenuity that ultimately took humans to the moon, but an exposure of the other side of the coin was long overdue. There is something not just awful but also grand and awesome in the grinding resolve of the Soviets to win the space race relying on just the Chief Designer “and convicts and some university students”, as Korolyov’s doctor puts it.

Little Eagles shows us the mix of both noble and ignoble impulses in the space race that the US programme, with its Columbus rhetoric, still cannot afford to acknowledge. It recognizes the eye-watering glory of seeing the stars and the earth from beyond the atmosphere, but at the same time reveals the human spaceflight programmes as utterly a product of their tense, chest-beating times, a nationalistic black hole for dollars and roubles (and now yuan too). Crucially, it leaves the final judgement to us. “They say you changed the whole sky and everything under it”, Korolyov’s doctor (and conscience) says to him at the end. “What does that mean?”

Wednesday, May 18, 2011

The Achilles' heel of biological complexity

Here’s the pre-edited version of my latest news story for Nature. This is such an interesting issue that I plan to write a more detailed piece on it for Chemistry World soon.
_____________________________________________________________________________
The complex web of protein interactions in our cells may be masking an ever-worsening problem.

Why are we so complicated? You might imagine that we’ve evolved that way because it conveys adaptive benefits. But a new study in Nature [1] suggests that the complexity in the molecular ‘wiring’ of our genome – the way our proteins talk to each other – may be simply a side effect of a desperate attempt to stave off problematic random mutations in the proteins’ structure.

Ariel Fernández, who carried out the work at the University of Chicago and is now at the Mathematics Institute of Argentina in Buenos Aires, and Michael Lynch of Indiana University in Bloomington argue that complexity in the network of our protein interactions arises because our relatively small population size, compared with single-celled organisms, makes us especially vulnerable to ‘genetic drift’: changes in the gene pool due to the reproductive success of certain individuals by chance rather than by superior fitness.

Whereas natural selection tends to weed out harmful mutations in genes and their related proteins, genetic drift does not. Fernández and Lynch argue that the large number of physical interactions between our proteins – now a crucial component of how information is transmitted in our cells – compensates for the reduction in protein stability wrought by drift. But this response comes at a cost.

It might mask the accumulation of structural weaknesses in proteins to a point where the problem can no longer be contained. Then, say Fernández and Lynch, proteins might be liable to misfold spontaneously – as they do in protein-misfolding conditions such as Alzheimer’s, Parkinson’s and the prion diseases, in which misfolded proteins accumulate in the brain.

If so, we may be fighting a losing battle. Genetic drift may eat away at the stability of our proteins until they are overwhelmed, leaving us a sickly species.

This would imply that Darwinian evolution isn’t necessarily benign in the long run. By finding a short-term solution to drift, it might merely be creating a time-bomb. “Species with low population are ultimately doomed by nature’s strategy of evolving complexity”, says Fernández.

The work provides “interesting and important news”, according to William Martin, a specialist in molecular evolution at the University of Düsseldorf in Germany. Martin says it shows that evolution of eukaryotes – relatively complex organisms like us, with a cellular ‘nucleus’ that houses the chromosomes – “can be substantially affected by drift.”

Drift is a bigger problem for small populations – those of multicelled eukaryotic organisms – than for large ones, because survival by chance rather than by fitness is statistically more likely for small numbers. Many random mutations in a gene, and thus in the protein made from it, will harm the protein’s resistance to unfolding: the protein’s folded-up shape becomes more apt to loosen as water molecules intrude into it. This loss of shape weakens the protein’s ability to function.
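[A minimal simulation makes that first point vivid. The sketch below is my own illustration, not anything from Fernández and Lynch’s paper: it follows a single copy of a slightly harmful mutation in an idealized (Wright-Fisher) population and counts how often it drifts all the way to fixation. The selection coefficient and population sizes are arbitrary choices.]

import random

def fixation_rate(pop_size, s=-0.01, trials=500):
    # Fraction of runs in which one copy of a mutation with fitness
    # effect s (here mildly harmful) spreads to the whole population.
    fixed = 0
    for _ in range(trials):
        count = 1  # one mutant individual to start
        while 0 < count < pop_size:
            freq = count / pop_size
            # Selection nudges the sampling probability against the mutant...
            p = freq * (1 + s) / (freq * (1 + s) + (1 - freq))
            # ...but each new generation is still a finite random sample.
            count = sum(random.random() < p for _ in range(pop_size))
        fixed += (count == pop_size)
    return fixed / trials

for n in (20, 200, 1000):
    print(f"N = {n}: harmful mutation fixes in {fixation_rate(n):.1%} of runs")
# In small populations chance overwhelms selection; in large ones it rarely does.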

Such problems can be avoided if proteins stick loosely to one another so as to shelter the regions vulnerable to water. Fernández and Lynch say that these associations between proteins – a key feature of the cell biology of eukaryotes – may have therefore initially been a passive response to genetic drift. Over time, certain protein-protein interactions may be selected by evolution for useful functions, such as sending molecular signals across cell membranes.

Using protein structures reported in the Protein Data Bank, the two researchers verified that disruption of the interface between proteins and water, caused mostly by exposure of ‘sticky’ parts of the folded peptide chain [full disclosure: these are actually parts of the chain that hydrogen-bond to one another; exposure to water enables the water molecules to compete for the hydrogen bonding. Ariel Fernández has previously explored how such regions may be ‘wrapped’ in hydrophobic chain segments to keep water away], leads to a greater propensity for a protein to associate with others. They also showed that drift could account for this ‘poor wrapping’ of proteins.

On this view, genome complexity doesn’t offer intrinsic evolutionary advantages, but is a kind of knee-jerk response to the chance appearance of ‘needy proteins’ – which ends up exposing us to serious risks.

“I believe prions are indicators of this gambit gone too far”, says Fernández. “The proteins with the largest accumulation of structural defects are the prions, soluble proteins so poorly wrapped that they relinquish their functional fold and aggregate”. Prions cause disease by triggering the misfolding of other proteins.

“If genetic variability resulting from random drift keeps increasing, we as a species may end up facing more and more fitness catastrophes of the type that prions represent”, Fernández adds. “Perhaps the evolutionary cost of our complexity is too high a price to pay in the long run.”

However, Martin doubts that drift alone can account for the difference in complexity between prokaryotes (single-celled organisms without a cell nucleus) and eukaryotes. His previous work has indicated that bioenergetics also plays a strong role [2]. For example, says Martin, prokaryotes with small population sizes tend to be symbionts, which degenerate rather than becoming more complex. “Population genetics is just one aspect of the complexity issue”, he says.

References
1. Fernandez, A. & Lynch, M. Nature doi:10.1038/nature09992 (2011).
2. Lane, N. & Martin, W. Nature 467, 929-934 (2010).

Monday, May 09, 2011

Unnatural happenings

There is a smart review of Unnatural in The Age by Damon Young. I don’t just say it is smart because it is positive – he engages intelligently with the issues. This bit made me smile: “Because he's neither a religious nor scientific fundamentalist, Ball's ideas may draw flak from both.” Well, indeed.

And I recently spoke to David Lemberg about the book for a podcast on the very nice Alden Bioethics blog run out of Albany Medical Center in New York. It’s available here.