Monday, April 11, 2011

Chaos promotes prejudice


Here’s my latest news story for Nature, pre-editing.
_______________________________________________________________

A disorderly environment makes people more inclined to put others in boxes.

Messy surroundings make us more apt to stereotype people, according to a new study by a pair of social scientists in the Netherlands.

Diederik Stapel and Siegwart Lindenberg of Tilburg University asked subjects to complete questionnaires that probed their judgements about certain social groups while in everyday environments (a street and a railway station) that were either messy or clean and orderly. They found small but significant and systematic differences in the responses: there was more stereotyping in the former cases than the latter.

The researchers say that social discrimination could therefore be counteracted by diagnosing and removing signs of disorder and decay in public environments. They report their findings in Science today [1].

Psychologist David Schneider of Rice University in Houston, Texas, a specialist in stereotyping, calls this “an excellent piece of work which speaks not only to a possibly important environmental cause, but also supports a major potential theoretical explanation for some forms of prejudice.”

The influence of environment on behaviour has long been suspected by social scientists and criminologists. The ‘broken windows’ hypothesis of sociologists James Q. Wilson and George Kelling supposes that people are more likely to commit criminal and anti-social acts when they see evidence of others having done so – for example, in public places with signs of decay and neglect.

This idea motivated the famous zero-tolerance policy on graffiti on the New York subway in the late 1980s (on which Kelling acted as a consultant), which is credited with a role in improving the safety of the network. Lindenberg and his coworkers conducted experiments in Dutch urban settings in 2008 that supported an influence of the surroundings on people’s readiness to act unlawfully or antisocially [2].

But could evidence of social decay, even at the mild level of littering, also affect our unconscious discriminatory attitudes towards other people? To test that possibility, Stapel and Lindenberg devised a variety of disorderly environments in which to test these attitudes.

In their questionnaires, participants were asked for example to rate Muslims, homosexuals and Dutch people according to various positive, negative and unrelated stereotypes. For example, the respective stereotypes for homosexuals were (creative, sweet), (strange, feminine) and (impatient, intelligent).

In one experiment, passers-by in the busy Utrecht railway station were asked to participate by coming to sit in a row of chairs, for the reward of a candy bar or an apple. The researchers took advantage of a cleaners’ strike, which had left the station dirty and litter-strewn. They then returned to do the same testing after the strike was over and the station was clean.

As well as probing these responses, the experiment examined unconscious negative responses to race. All the participants were white, while one place at the end of the row of chairs was already taken by a black or white Dutch person. In the messy station, people sat on average further from the black person than the white one, while in the clean station there was no statistical difference in these distances.

In another experiment, the researchers aimed to eliminate differences in cleanliness of the environments while preserving the disorder. The participants were approached on a street in an affluent Dutch city. But in one case the street had been made more disorderly by the removal of a few paving slabs and the addition of a badly parked car and an ‘abandoned’ bicycle. Again, disorder boosted stereotyping.

Stapel and Lindenberg suspect that stereotyping may be an attempt to compensate for mess: it could be, they say, “a way to cope with chaos, a mental cleaning device” that partitions other people neatly into predefined categories.

In support of that idea, they showed participants pictures of disorderly and orderly situations, such as a bookcase with dishevelled and regularly stacked books, before asking them to complete both the stereotyping survey and another one that probed their perceived need for structure, including questions such as “I do not like situations that are uncertain”. Both stereotyping and the need for structure were higher in people viewing the disorderly pictures.

Sociologist Robert Sampson of Harvard University says that the study is “clever and well done”, but is cautious about how to interpret the results. “Disorder is not necessarily chaotic”, he says, “and is subject to different social meanings in ongoing or non-manipulated environments. There are considerable subjective variations within the same residential environment on how disorder is rated – the social context matters.”

Therefore, Sampson says, “once we get out of the lab or temporarily induced settings and consider the everyday contexts in which people live and interact, we cannot simply assume that interventions to clean up disorder will have invariant effects.” 

Schneider agrees that the implications of the work for public policy are not yet clear. “One question we’d need to answer is how long these kinds of effects last”, he says. “There is a possibility that people may quickly adapt to disorder. So I would be very wary of concluding that people who live in unclean and disordered areas are more prejudiced because of that.” Stapel acknowledges this: “People who constantly live in disorder get used to it and will not show the effects we find. Disorder in our definition is something that is unexpected.”

References
1. D. A. Stapel & S. Lindenberg, Science 332, 251-253 (2011).
2. K. Keizer, S. Lindenberg & L. Steg, Science 322, 1681 (2008).

Tuesday, April 05, 2011

Fattening up Schrödinger's cats


Here’s my latest story for Nature News.
__________________________________________________________

Huge molecules can show the wave-particle duality of quantum theory.

Researchers in Austria have made what they call the “fattest Schrödinger cats realized to date”. They have demonstrated quantum superpositions – objects in two or more states simultaneously – of molecules with up to 430 atoms each, several times larger than those used in previous experiments of this sort [1].

In the famous thought experiment conceived by Erwin Schrödinger in 1935 to illustrate the apparent paradoxes of quantum theory, a cat will be poisoned or not depending on the state of an atom, governed by quantum rules. Because the recently developed quantum theory insisted that these rules allowed for superpositions, it seemed that Schrödinger’s cat could itself be placed in a superposition of ‘live’ and ‘dead’ states.

The paradox highlights the question of how the rules of the quantum world – where objects like atoms can be in several positions at once – give way to the ‘classical’ mechanics that governs the macroscopic world of our everyday experience, in which things must be one way or the other but not both at the same time. This is called the quantum-to-classical transition.

It is now generally thought that the ‘quantumness’ is lost in a process called decoherence, where disturbances from the surrounding environment make the quantum wavefunction describing many-state superpositions appear to collapse [note to subs: we have to keep this ‘appear to’. The precise relationship between decoherence and wavefunction collapse is complicated and too tricky to get into fully here] into a well-defined and unique classical state. This decoherence tends to become more pronounced as objects get bigger and the opportunities for interacting with the environment multiply.

There is still no consensus on how Schrödinger’s thought experiment will play out if the cat-and-atom system could be perfectly protected from decoherence. Some physicists are happy to believe that in that case the cat could indeed be in a live-dead superposition. But we couldn’t see it directly because the act of looking would destroy the superposition.

One manifestation of quantum superpositions is the interference that can occur between quantum particles passing through two or more narrow slits. In the classical world the particles just pass through with their trajectories unchanged, like footballs rolling through a doorway.
But quantum particles can behave like waves, which interfere with one another as they pass through the slits, either enhancing or cancelling to produce a series of bright and dark bands. This interference of quantum particles, first seen for electrons in 1927, is effectively the result of each particle passing through more than one slit: a quantum superposition.
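
The bright and dark bands follow from simple wave addition. Here is a minimal sketch, assuming idealized point-like slits and the far-field limit – the wavelength and slit spacing are illustrative, not taken from any experiment described here:

```python
import math

def two_slit_intensity(theta, wavelength, slit_separation):
    """Idealized far-field intensity behind two narrow slits.

    The phase difference between the two paths is
    pi * d * sin(theta) / lambda; the two waves add, giving an
    intensity proportional to cos^2 of that phase.
    """
    phase = math.pi * slit_separation * math.sin(theta) / wavelength
    return math.cos(phase) ** 2

# Fringe pattern for 500 nm light through slits 10 micrometres apart:
pattern = [two_slit_intensity(math.radians(a / 100.0), 500e-9, 10e-6)
           for a in range(0, 600, 50)]
```

A bright fringe sits at the centre, where the path lengths are equal, and the first dark fringe appears where the path difference reaches half a wavelength.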

At some point as the experiment is scaled up in size, quantum behaviour (interference) should give way to classical behaviour (no interference). But how big can the particles be before that happens?

In 1999 a team at the University of Vienna in Austria demonstrated interference in a many-slit experiment using beams of 60-atom carbon molecules (C60) shaped like hollow spheres [2]. Now Markus Arndt, one of the researchers in that experiment, and his colleagues in Austria, Germany and Switzerland have shown much the same effect for considerably larger molecules tailor-made for the purpose, up to 6 nanometres (millionths of a millimetre) across and composed of up to 430 atoms. These are bigger than some small protein molecules in the body, such as insulin.
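
The relevant length scale here is the molecule’s de Broglie wavelength, λ = h/mv, which shrinks as mass grows – one reason interference becomes ever harder to see for larger objects. A rough sketch, with the heavier molecule’s mass and the beam speed chosen purely for illustration (they are not taken from the paper):

```python
H_PLANCK = 6.626e-34   # Planck constant, J s
AMU = 1.6605e-27       # atomic mass unit, kg

def de_broglie_wavelength(mass_amu, speed_m_per_s):
    """Matter wavelength lambda = h / (m v), in metres."""
    return H_PLANCK / (mass_amu * AMU * speed_m_per_s)

# C60 (720 u) at an illustrative beam speed of 200 m/s:
wl_c60 = de_broglie_wavelength(720, 200)    # a few picometres
# A ~7000 u molecule (mass assumed, not from the paper), same speed:
wl_big = de_broglie_wavelength(7000, 200)   # roughly ten times shorter
```

Even for C60 the wavelength is far smaller than the molecule itself, which is why such fine gratings are needed.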

In their experiment, the beams of molecules are passed through three sets of slits. The first of them, made from a slice of the hard material silicon nitride patterned with a grating of 90-nm-wide slits, prepares the molecular beam in a coherent state, in which the matter waves are all in step. The second, a ‘virtual grating’ made from laser light formed by mirrors into a standing wave of light and dark, causes the interference pattern. The third grating, also of silicon nitride, acts as a mask to admit parts of the interference pattern to an instrument called a mass spectrometer, which counts the number of molecules that pass through.

The researchers report in Nature Communications that this number rises and falls periodically as the outgoing beam is scanned from left to right, showing that interference, and therefore superposition, is present.

Although this might not sound like a Schrödinger cat experiment, it probes the same quantum effects. It is essentially like firing the cats themselves at the interference grating, rather than making a single cat’s fate contingent on an atomic-scale event.

Quantum physicist Martin Plenio of the University of Ulm in Germany calls the study part of an important line of research. “We have perhaps not gained deep new insights into the nature of quantum superposition from this specific experiment”, he admits, “but there is hope that with increasing refinement of the experimental technique we will eventually discover something new.”

Arndt says that such experiments might eventually enable tests of fundamental aspects of quantum theory, such as how wavefunctions are collapsed by observation. “Predictions such as that gravity might induce wavefunction collapse beyond a certain mass limit should become testable at significantly higher masses in far-future experiments”, he says.

Can living organisms – perhaps not cats, but maybe microscopic ones such as bacteria – be placed in superpositions? That has been proposed for viruses [3], the smallest of which are just a few nanometres across – although there is no consensus about whether viruses should be considered truly alive. “Tailored molecules are much easier to handle in such experiments than viruses”, says Arndt. But he adds that if various technical issues can be addressed, “I don’t see why it should not work.”

References
1. Gerlich, S. et al., Nat. Commun. online publication doi:10.1038/ncomms1263.
2. Arndt, M. et al., Nature 401, 680-682 (1999).
3. Romero-Isart, O., Juan, M. L., Quidant, R. & Cirac, J. I. New J. Phys. 12, 033105 (2010).

Monday, March 28, 2011

Who's (still) afraid of MMR?


With the fallout from the MMR scare still with us, this programme on BBC Radio 4 is a timely reminder of the issues. “Science betrayed” indeed, but by whom? The full story is societal as much as it is biomedical. Anyway, listen while you still can.

Friday, March 25, 2011

More monster myths

I have reviewed the National Theatre’s production of Frankenstein in the latest issue of Nature. Worth seeing (though if you haven’t got a ticket already, you don’t stand much chance), but I was slightly disappointed in the end, having seen some glowing reviews. There’s another perspective here.
________________________________________________________

Mary Shelley’s Frankenstein has been endlessly adapted and reinterpreted since it was first published, anonymously, in 1818. Aside from the iconic screen version by James Whale in 1931, there have been sequels, parodies (Mel Brooks’ Young Frankenstein, The Rocky Horror Picture Show), and postmodern interpolations (Brian Aldiss’s Frankenstein Unbound). Victor Frankenstein has become the archetypal mad scientist, unleashing powers he cannot control – in one recent remake, he became a female biologist experimenting on organ regeneration with stem cells. The ‘Franken’ label is attached to every new technology that appears to intervene in life, from genetic modification of crops to Craig Venter’s ‘synthetic’ microbe.

This reinvention is no recent phenomenon. Shelley’s book was little known until the first stage adaptations began in the 1820s, in which Frankenstein’s creature was already transformed into a mute, shambling brute based on the stock theatrical character of the Wild Man. This personification continued in the first film adaptation in 1910, simply called Frankenstein.

Some might lament how the original novel has been distorted and vulgarized. But literary critic Chris Baldick has a wiser perspective:
The truth of a myth… is not to be established by authorizing its earliest versions, but by considering all its versions… That series of adaptations, allusions, accretions, analogues, parodies and plain misreadings which follows up on Mary Shelley’s novel is not just a supplementary component of the myth; it is the myth.
After all, there isn’t even a definitive version of Shelley’s story. She made small but significant changes in the third edition (1831), in particular emphasizing the Faustian themes of presumption and retribution on which the early stage versions insisted.

Besides, critics still dispute what Shelley’s message was meant to be – probably she was not fully conscious of all the themes herself. Far from offering a simplistic critique of scientific hubris, the story might instead echo Shelley’s troubled family life. Her mother, the feminist and political radical Mary Wollstonecraft, died from complications after Mary’s birth, and her father William Godwin all but disowned her after she eloped to Europe with Percy Shelley in 1814. She lost her first child, a daughter, the following year, subsequently describing a dream in which the baby was reanimated. There is ample reason to believe Percy Shelley’s statement of the central moral of Frankenstein: ‘Treat a person ill, and he becomes wicked’.

If so, Nick Dear’s adaptation of the story for the National Theatre in London, directed by Danny Boyle of Trainspotting and Slumdog Millionaire fame, has returned to the essence of the tale. For it focuses on the plight of the creature, whose lone and awkward ‘birth’ begins the play. We see how this mumbling wretch, spurned as a hideous thing by Victor, is reviled by society until finding refuge with the blind peasant De Lacey. The kindly old man teaches the creature how to speak and read using Milton’s Paradise Lost, the story of Satan’s Promethean challenge to heaven.

Eventually De Lacey’s son and daughter-in-law return from the fields and drive out the creature in horror, whereupon he burns them in their cottage. These scenes are the moral core of Shelley’s novel, and in placing them so early Dear signals that this is very much the monster’s show.

In fact, perhaps too much. For while the creature is the most fully realised, most sympathetic and inventive incarnation I have seen, Victor Frankenstein is left with little to do but recoil from him and neglect all his other duties, marital, filial and moral. It is very clear from the outset who is the real monster.

In this production the two lead actors – Benedict Cumberbatch and Jonny Lee Miller – alternate the roles of Victor and his creature. This Doppelgänger theme is not a new idea: in the stage adaptation by Peggy Webling that formed the basis of Whale’s movie, the creature appeared dressed like Victor (there renamed Henry), who foreshadows the later confusion of creator and creature by saying ‘I call him by my own name – he is Frankenstein.’ It motivates Dear’s decision to leave the duo locked in mutual torment at the end: a vision more true to their relationship than that of the novel itself.

The scientific elements of the tale are skated over. Mary Shelley provided just enough hints for the informed reader to make the connection with Luigi Galvani’s recent work on electrophysiology; Dear has Frankenstein mention galvanism and electrochemistry (somewhat anachronistically), but that is as far as it goes. There is no serious attempt, therefore, to make the play a comment on the ‘Promethean ambitions’ of modern science (as Pope John Paul II called them in 2002) – a relief not because modern science is unblemished but because the alchemical trope of a solitary experimenter exceeding the bounds of God and nature is no longer the relevant vehicle for a critique.

The staging of this production is spectacular, and intelligent choices were made in the structure (if not always in the dialogue). Miller was extraordinary as the creature on the night I saw it; by all accounts Cumberbatch is equally so. Whether Dear adds anything new to the legend – as Whale and even Mel Brooks did – is debatable. But it is well to be reminded that the novel may be read not so much as a Gothic tale of monstrosity and presumption but as a comment on the consequences of how we treat one another.

Wednesday, March 23, 2011

Maths polymath scoops Abel Prize


Here’s a little news story I wrote for Nature on the Abel Prize. This award presents a notoriously challenging subject for science reporters each year, because it is always the devil of a job concisely to explain what on earth the recipient has done to deserve the award. I can’t deny that the same challenge applied here, but in spades, because Milnor has done so much. But it was a challenge I enjoyed. Given the choice, I’d have personally kept in the edited version the fact that holomorphic dynamics involves numbers in the complex plane, because it is the kind of thing experts will sniffily point out. But I can understand the fear that the reader will be exhausted by then. Ah, mathematics – what a wonderful, strange game it is.
_______________________________________________________

John Milnor wins the ‘Nobel of maths’ for his manifold works.

Awarding Albert Einstein a Nobel prize for his research on the photoelectric effect looks in retrospect like a somewhat arbitrary choice from among the galaxy of his contributions to all of physics.

In granting the 2011 Abel Prize in mathematics to John Milnor of Stony Brook University in New York, the committee of the Norwegian Academy of Science and Letters has wisely abandoned any such attempt to single out a particular achievement. The citation states merely that Milnor has made ‘pioneering discoveries in topology, geometry and algebra’: in effect a recognition that he has contributed to modern maths across the board.

In fact, Milnor’s work goes further: it also touches on dynamical systems, game theory, group theory and number theory. In awarding this equivalent of a Nobel prize, worth around $1m, the committee states that “All of Milnor’s works display marks of great research: profound insights, vivid imagination, elements of surprise, and supreme beauty.”

His breadth is unusual, says Professor Ragni Piene of the University of Oslo, the chair of the Abel Prize committee. “Though some of the fields he has worked in are related, he really has had to learn and develop new tools and new theory.”

Milnor “says he is mainly a problem solver”, adds Piene. “But in the solving process, in order to understand the problem deeply he ends up creating new theories and opening up new fields.”

Among the most surprising of Milnor’s discoveries was the existence of so-called exotic spheres, multidimensional objects with strange topological properties. In 1956 Milnor was studying the topological transformations of smooth-contoured high-dimensional shapes – that is, shapes with no sharp edges. A so-called continuous topological transformation converts one object smoothly – as though remoulding soft clay – into another, without any tears in the fabric.

He discovered that in seven dimensions there exist smooth objects that can be converted into the 7D equivalent of spheres only via intermediates that do have sharp kinks. In other words, the only way to get from one of these smooth objects to another is by making them not smooth. Kinks and corners in a surface are said to make it non-differentiable, which means that its curvature at the kinks has no well-defined value.

These counter-intuitive exotic spheres can exist in other dimensions too. With the French mathematician Michel Kervaire, Milnor calculated that there are precisely 28 distinct smooth structures on the seven-dimensional sphere. But there seems at first glance little rhyme or reason to the trend for other dimensions: there is just one smooth structure – and so no exotic spheres – in 1, 2, 3, 5 and 6 dimensions, but 992 in 11 dimensions, 1 in 12 dimensions, 16,256 in 15D, and 2 in 16D. No one has yet figured out how many there are in four dimensions. This work spawned an entire new field of mathematics, called differential topology.
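
The counts quoted above, gathered into a simple lookup (the names and layout here are my own; dimension four is deliberately absent, being an open problem):

```python
# Counts quoted in the text, per dimension of the sphere.
SPHERE_COUNTS = {
    1: 1, 2: 1, 3: 1, 5: 1, 6: 1,
    7: 28, 11: 992, 12: 1, 15: 16256, 16: 2,
}

def sphere_count(dim):
    """Return the count for a dimension, or None if unknown/unlisted."""
    return SPHERE_COUNTS.get(dim)
```

So `sphere_count(7)` gives Milnor and Kervaire’s 28, while `sphere_count(4)` returns nothing at all.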

Some of Milnor’s other achievements are recognizably related to such topological conundrums, such as his work on the relationships between different triangulations (representations as networks of triangles) of mathematical surfaces called manifolds. Topology was also central to some of Milnor’s earliest work in 1950 on the curvature of knots.

But his work on group theory is quite different. Group theory was partly invented by the nineteenth-century Norwegian mathematician Niels Henrik Abel, after whom the award is named. In one standard formulation, a group can be represented as all the non-equivalent combinations (‘words’) of a set of symbols. Milnor and the American mathematician Joseph Wolf clarified how the number of such words grows as their length increases, for a wide class of groups called solvable groups.
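
The notion of ‘growth’ can be made concrete by counting how many distinct group elements are reachable with words of bounded length. A toy comparison – not tied to the solvable-group result itself – contrasts the grid group Z², which grows polynomially, with the free group on two generators, which grows exponentially:

```python
def ball_sizes_grid(n):
    """Elements of Z^2 reachable with words of length at most k in the
    generators (+-1, 0) and (0, +-1): polynomial growth, 2k^2 + 2k + 1."""
    seen = {(0, 0)}
    frontier = {(0, 0)}
    sizes = [1]
    for _ in range(n):
        frontier = {(x + dx, y + dy)
                    for (x, y) in frontier
                    for (dx, dy) in ((1, 0), (-1, 0), (0, 1), (0, -1))} - seen
        seen |= frontier
        sizes.append(len(seen))
    return sizes

def ball_sizes_free(n):
    """Reduced words of length at most k in the free group on two
    generators: exponential growth, 2 * 3**k - 1."""
    sizes, count, sphere = [1], 1, 1
    for k in range(1, n + 1):
        sphere = 4 if k == 1 else sphere * 3  # each word extends 3 ways
        count += sphere
        sizes.append(count)
    return sizes
```

Already at word length 3 the free group has outpaced the grid (53 elements against 25), and the gap widens geometrically from there.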

More recently, Milnor, now 80, has been working in the field of holomorphic dynamics, which concerns the trajectories generated in the complex plane – the plane of numbers with real and imaginary parts – by iterating equations: the branch of maths that led to the discovery of fractal patterns such as the Mandelbrot and Julia sets.
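
The Mandelbrot set itself arises from the simplest such iteration, z → z² + c in the complex plane. A bare-bones membership test (the escape radius 2 and the iteration cap are the standard practical choices):

```python
def in_mandelbrot(c, max_iter=100):
    """Crude membership test for the Mandelbrot set: iterate
    z -> z**2 + c from z = 0 and declare escape once |z| > 2
    (any orbit leaving that disc is guaranteed to diverge)."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

# c = 0 stays put forever; c = 1 escapes in a few steps (0, 1, 2, 5, ...).
```

Points that never escape belong to the set; colouring the escapees by how quickly they leave produces the familiar fractal pictures.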

Milnor has already won just about every other key prize in mathematics, including the Fields medal (1962) and the Wolf prize (1989). But beyond his skills as a researcher, Milnor has been widely praised as a communicator. His books “have become legendary for their high quality”, according to mathematician Timothy Gowers of the University of Cambridge.

Friday, March 18, 2011

Mind music


Here’s my latest news story for Nature. Eduardo Miranda is working very much at the experimental edge of electronic music – what I’ve heard has an intriguing ethereal quality which grows on you (well, it did on me).
__________________________________________________

A pianist plays a series of notes, and the woman echoes them on a computerized music system. Then she plays a simple improvised melody over a looped backing track. It doesn’t sound like much of a musical challenge – except that the woman, a stroke victim, is paralysed but for eye, facial and slight head movements. She is making the music purely by thinking.

This is a trial of a computer-music system that interfaces directly with the user’s brain, via electrodes on the scalp that pick up the tiny electrical impulses of neurons. The device, developed by composer and computer-music specialist Eduardo Miranda of the University of Plymouth in England and computer scientists at the University of Essex, should eventually enable people with severe physical disabilities, caused for example by brain or spinal-cord injuries, to make music for recreation or therapeutic purposes.

“This is surely an interesting avenue, and might be very useful for patients”, says Rainer Goebel, a neuroscientist at the University of Maastricht in the Netherlands who works on brain-computer interfacing.

Quite aside from the pleasure that making music offers, its value in therapy – for example, its capacity to awaken atrophied mental and physical functions in neurodegenerative disease – is well attested. But people who have almost no muscle movement at all have generally been excluded from such benefits and can enjoy music only through passive listening.

The development of brain-computer interfaces (BCIs) that can enable users to control computer functions by mind alone offers new possibilities for such people. In general these interfaces rely on the user’s ability to learn how to self-induce particular mental states that can be detected by brain-scanning technologies.

Miranda and colleagues have used one of the oldest of these techniques: electroencephalography (EEG), in which electrodes on the skull pick up faint neural signals. The EEG signal can be processed quickly, allowing fast response times. The instrumentation is cheap and portable in comparison to brain-scanning techniques such as magnetic resonance imaging (MRI) and positron-emission tomography (PET), and operating it requires no expert knowledge.

Whereas previous efforts on BCIs have tended to focus on simple tasks such as moving cursors or other screen icons, Miranda’s team sought to achieve something much more complex: to enable the user to play and compose music.

Miranda says he became aware of the then-emerging field of BCIs over a decade ago while researching how to make music using brainwaves. “When I realized the potential of a musical BCI for the well-being of severely disabled people”, he says, “I couldn’t leave the idea alone. Now I can’t separate this work from my activities as a composer – they are very integrated.”

The trick is to teach the user how to associate particular brain signals with specific tasks by presenting a repeating stimulus – auditory, visual or tactile, say – and getting the user to focus on it. This elicits a distinctive, detectable pattern in the EEG signal. Miranda and colleagues show several flashing ‘buttons’ on a computer screen, each one triggering a musical event. The users ‘push’ a button just by directing their attention to it.
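
One common way to detect which flashing stimulus a user is attending to – assumed here for illustration, since the story does not spell out the team’s exact signal processing – is to look for extra EEG power at that stimulus’s flicker frequency. A toy sketch with a simulated signal:

```python
import numpy as np

def power_at_frequency(signal, freq, sample_rate):
    """Spectral power of a signal at one frequency, via the FFT."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return spectrum[np.argmin(np.abs(freqs - freq))]

# One second of simulated 'EEG' at 256 Hz: a strong 12 Hz component
# (the attended button's flicker rate) buried in a little noise.
rate = 256
t = np.arange(rate) / rate
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 12 * t) + 0.2 * rng.standard_normal(t.size)

# Which of three hypothetical buttons (7, 12 and 15 Hz) is being watched?
buttons = {f: power_at_frequency(eeg, f, rate) for f in (7, 12, 15)}
attended = max(buttons, key=buttons.get)
```

The button whose flicker frequency carries the most power wins; in this simulation that is the 12 Hz one.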

For example, a button might be used to generate a melody from a pre-selected set of notes. The intensity of the control signal – how ‘hard’ the button is pressed, if you like – can be altered by the user by varying the intensity of attention, and the result is fed back to them visually as a change in the button’s size. In this way, any one of several notes can be selected by mentally altering the intensity of ‘pressing’.
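
The intensity-to-note mapping can be pictured as a simple binning of a normalized attention level. The note names and the four-way split here are purely illustrative, not the system’s actual design:

```python
def select_note(intensity, notes=("C", "E", "G", "B")):
    """Map a normalized attention intensity in [0, 1] to one of several
    notes -- a toy version of picking a note by how 'hard' the button
    is mentally pressed."""
    if not 0.0 <= intensity <= 1.0:
        raise ValueError("intensity must lie in [0, 1]")
    index = min(int(intensity * len(notes)), len(notes) - 1)
    return notes[index]
```

A light mental ‘press’ yields the lowest note, a maximal one the highest, with the intermediate notes spread evenly between.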

With a little practice, this allows users to create a melody just as if they were selecting keys on a piano. And as with learning an instrument, say the researchers, “the more one practices the better one becomes.” They describe it in a forthcoming paper in the journal Music and Medicine [1].

The researchers trialled their system with a female patient at the Royal Hospital for Neuro-disability in London, who is suffering from locked-in syndrome, a form of almost total paralysis caused by brain lesions. During a two-hour session, she got the hang of the system and was eventually playing along with a backing track. She reported that “it was great to be in control again.”

Goebel points out that the patients here still need to be able to control their gaze, which people suffering from total locked-in syndrome cannot. In such partial cases, he says, “one can usually use gaze directly for controlling devices, instead of an EEG system”. But Miranda points out that eye-gazing alone does not permit variations in the intensity of the signal. “Eye gazing is comparable to a mouse or joystick”, he says. “Our system adds another dimension, which is the intensity of the choice. That’s crucial for our musical system.”

Miranda says that, while increasing the complexity of the musical tasks is not a priority, music therapists have suggested it would be better if the system was more like a musical instrument – for instance, with an interface that looks like a piano keyboard. He admits that it’s not easy to increase the number of buttons or keys beyond four, but is confident that “we will get there eventually”.

“The flashing thing does not need to be on a computer screen”, he adds. It could, for example, be a physical electronic keyboard with LEDs on the keys. “You could play it by staring at the keys”, he says.

References
1. Miranda, E. R., Magee, W. L., Wilson, J., Eaton, J. & Palaniappan, R. Music and Medicine (published online), doi:10.1177/1943862111399290.

Thursday, March 17, 2011

Open to the elements


Attending a planning meeting for the forthcoming Elements event at the Wellcome Institute on 8 April prompts me to advertise it. I can confidently say with no false modesty that I am among the very least of the attractions (I’ll be speaking, briefly, about mercury and arsenic in pigments). The artists’ suppliers Cornelissen will be bringing beautiful jarloads of the stuff. Nick Lane will be talking about oxygen, Andy Meharg about arsenic, and Andrew Szydlo will be demonstrating how Cornelius Drebbel made his submarine.  Andrea Sella will be presenting his mercury show. There will be an oxygen bar where you can inhale the stuff, and an iodine ‘wet play’ area. And you can apparently be cured of syphilis with an inhalation of mercury. Sort of. Andrea has arranged it with Hugh Aldersey-Williams, whose book on the elements has acted as a catalyst for the event. It will be mad and fun (and free!), and like real chemistry would be if it were allowed. If you live in or near London, get there early!

Monday, March 14, 2011

New myths of parenting

I have an Opinion piece in Saturday’s Times prompted by research being pursued by the Newcastle embryology team. No point in giving the link, as it is subscription-only. But here is the piece before editing.
_______________________________________________________________________

The idea of having three parents – a notion apparently raised by the latest developments in reproductive technology – seems ripe material both for stand-up routines and for eliciting tabloid postures of horror. Never mind that both would conveniently ignore the fact that some children already have three primary carers in parental roles – it is revealing that this work should be discussed in terms of ‘three parents’ at all.

The more careful reports of the research being considered at Newcastle University – which would create embryos with DNA that is not wholly from the mother and father – will stress that these are three genetic or biological parents. The third ‘parent’ is an egg donor. The egg will be stripped of its nucleus, where the chromosomes reside, and replaced with that from a normal IVF embryo, containing maternal and paternal genes. But the egg will retain a few donor genes – 37 to be precise – in energy-generating compartments outside the cell nucleus called mitochondria. The procedure is being considered to eliminate serious diseases caused by faulty mitochondria in the mother’s eggs.

As a result, all the genes that influence the child’s development would be those of the mother and father except for the handful of genes that operate ‘out of sight’ to drive the mitochondria. Strictly speaking this does make the egg donor a kind of genetic parent, but the better analogy is with transplant patients, who, rather than having a tiny bit of ‘foreign’ genetic material in every tissue, have entirely foreign genes in one particular tissue.

So any talk of a ‘third parent’ here plays up the alleged weirdness of the situation, not least by introducing an exciting whiff of sexual irregularity. And in making the status of parent reassuringly one of genetic entitlement rather than responsibility of care, it plays along with the prevailing notion of ‘genes’R’us’. Objections such as that raised by a spokesperson from the charity Life, that the work would raise questions as to who is the real mother, seem to place an extraordinary burden of identity on those 37 genes.

We are here reaping the harvest of modern genetics, especially the projects to read the chemical ‘code’ of human genomes. In their determination to sell this undoubtedly valuable enterprise, genetic scientists have all too often opted for the easy route of presenting the genome as the ‘book of life’, or as one leading scientist put it, ‘the set of instructions to make a human being’.

Such claims have encouraged us to equate our being with our genes to an unsupportable degree. The fact is that genes may be silenced, ignored or modified by environmental factors encountered by the developing organism. And any ‘identical’ twin (a revealing term in itself) will tell you that personal identity is not the same as genetic endowment. But the myth of genetic determinism fails on deeper grounds even than these. As physiologist Denis Noble has elegantly argued, genes are ‘instructions’ in roughly the same way that Bach’s scores, free of dynamics and ornaments, are prescriptions for music that brings tears to the eyes. (Even that is too tight an analogy, unless the performance admits improvisation.) Genes work not because they specify everything but precisely because there is so much that they do not need to specify.

If we had a better understanding of genetics – I don’t mean the public, but scientists too – we would be less likely to indulge in a gene-based materialistic view of parenthood and identity, and to confuse our bodies with our genomes. The ‘yuk factor’ response to embryos with non-maternal mitochondrial genes is a form of genetic narcissism. After all, most of the cells in our bodies are non-human: they are symbiotic bacteria in our gut, busy performing a host of functions on which our well-being depends. In any case, our genomes are patchworks of genes that no one can meaningfully claim as ‘their own’. In genetic terms we are all Frankenstein’s creatures. Whether we have a single parent or three, we just have to hope they do a better job than Victor did.

Behind all of this, however, is the deeper current of attitudes to ‘unnatural’ interventions in procreation. Here we swim in the murky waters of myth. When scientists in 2009 announced in Nature that they had achieved these ‘mitochondrial transplants’ in monkeys, an editorial acknowledged that “[an] argument raised when such research has been attempted in the past is that such a three-parent union is ‘unnatural’.” One obvious rejoinder is to point to the neonatal infant mortality rate two centuries ago when birth involved very little intervention and was therefore more ‘natural’. But the function of the word ‘unnatural’ here is not merely to point out that these things don’t happen in nature, but to enlist moral disapproval. The unnatural act is not just the opposite of the natural, but is one we are invited to deplore.

Even if we substitute instead the word ‘artificial’, the pejorative implication remains. This is an ancient prejudice. The distinctions and relative merits of ‘art’ (meaning artifice) and ‘nature’ were debated by Plato and Aristotle, and it was not until the seventeenth century that there was any serious challenge to the prevailing view that artificial objects cannot be equal to natural ones. Often this prejudice went beyond the assertion that the products of technology are inferior: there was a suggestion that technology is inherently perverting. The biologist J. B. S. Haldane put it this way in 1924: “If every physical and chemical invention is a blasphemy, every biological invention is a perversion. There is hardly one which, on first being brought to the notice of an observer… would not appear to him as indecent and unnatural.”  Five decades later, IVF proved his point again. It is still opposed in the Catholic Catechism, which complains of “the domination of technology over the origin and destiny of the human person.” Far from enabling the birth of a longed-for child, for the church this reproductive technology creates an indelible stain on the ‘origin and destiny’ of any person in whose conception it is involved.

None of this is to deny that the work being contemplated at Newcastle needs careful consideration of the ethics as well as the safety. But presenting the issues in terms of a confusion of parenthood illustrates that we are trying to make sense of biomedical developments using moral and social contexts that they have already left behind. In an age of advanced prosthetics and transplantation, tissue engineering, and rapid genomic profiling, we need to escape from the tendency to shoehorn our uniqueness into a molecular structure and look for it instead in how we inhabit the world.

Thursday, March 10, 2011

Attack of the killer mushrooms


I have reviewed for Nature the thriller Spiral by nanotech expert Paul McEuen. It is somewhat formulaic but great fun. One could quibble that the villains are East Asians, but Paul is pretty harsh (if, I suspect, scarily accurate) on the US military too.
_______________________________________________________________________

One of the more humdrum obligations I have had to fulfil in the line of duty was to read Michael Crichton’s Prey, a thriller based on the premise of nanotechnological robot swarms run amok. An ingénue in this genre, I found myself comparing the characters’ psychological implausibility with the non-sequitur quirks of figures from myth and legend. But with guns.

Crichton, of course, made millions with his formula. Whether Spiral (Dial Press, New York, 2011) will do the same for Cornell physicist Paul McEuen remains to be seen (the movie rights are already sold), but it deserves to. It is more enjoyable, more palatable, and (as though this matters) boasts impeccable science rather than the half-digested fare that Crichton occasionally seemed to mistake for the real thing.

There’s nothing here, it should be said, that bucks the thriller formula. Indeed, Spiral made me realise that these books already are movies in literary form: every scene is tailored for the screen, and you can’t help but do the casting as you read. The dialogue is based on how people speak in blockbusters, not in life, and there’s the familiar cast: the vulnerable but plucky mother, the clinically ruthless assassin, the sadistic billionaire, the kid in peril, and so on. The race against time, the apocalyptic threat. And just as these films, if done well, offer a great ride, so does Spiral. It’s more fun than Prey or Angels and Demons, and won’t make your toes curl.

The story begins at the end of the Second World War, when young Irish microbiologist Liam Connor is brought on board a US warship to witness the effects of a devastating biological weapon developed by the Japanese: a fungal infection called the Uzumaki, which induces terrible hallucinations and madness and is ultimately fatal. Connor ends up hiding away a tiny vial of the stuff, wrested from the Japanese engineer Hitoshi Kitano who was responsible for developing it in northern China.

Sixty years later Connor is an octogenarian with a Nobel prize, and still in active research at Cornell. Unknown to the authorities, he has for decades been secretly searching for the cure that he is sure will one day be needed for the Uzumaki. Aware that the nation that holds the cure also possesses a terrible weapon, he is determined to keep his work from the military. Then he is found dead at the bottom of a gorge, apparently having thrown himself over the edge to escape from a mysterious woman caught on CCTV footage. His coded last message to his colleague Jake Sterling, his granddaughter Maggie and her son Dylan, makes them the only people who can prevent a global outbreak of the killer fungus. But who is behind the fiendish scheme to release it?

You can see a lot of this coming, and as usual the climax depends on who can reach the gun fastest, but that doesn’t detract from the compulsive page-turning quality. And as far as the science goes, McEuen shows that the imagination of an inventive scientist is far more interesting than that of a writer who has merely done his homework – here he trumps not only Crichton but his namesake Ian McEwan, who peppers his narratives with cutting-edge science, most notably in his recent novel Solar. It’s a delight to watch how McEuen – a world expert in nanoelectronics – has marshalled his knowledge to kit out the technical plot devices: nanotechnology, microbiology, information technology and synthetic biology are all brought into play in a convincing, unforced manner. Devotees of the latest trends will recognize many elements, from genetically engineered oscillating fluorescence to microfluidic labs-on-chips.

I confess that my interest struggles rather more to find purchase with square-jawed, stolid heroes with names like Jake whose physical prowess and ex-army credentials are carefully established in preparation for the gutsy displays that will inevitably be required of them. But that’s the genre, and Jake is a little less tiresomely bland than the wooden leads in Dan Brown and Crichton. A more appealing hero, however, is Cornell University itself, which enjoys a rather touching love letter here from the author. But the stars of the show are, as ever, the villains: the MicroCrawlers that scrabble ominously across the cover, microelectromechanical devices that acquire a seriously bad attitude.

Next time I hope McEuen dares to push harder at the boundaries of the genre. But I certainly hope there will be a next time, if he can escape both the lab bench and the all-consuming jaws of Hollywood.

Tuesday, March 08, 2011

The aliens haven't landed


If you’ve been hearing rumours that alien life forms have been found, here’s the story, as I reported for Nature News. It’s a strange universe out there (I'm talking about NASA).
________________________________________________________________________

As shown by its latest claim of 'alien bugs', the Journal of Cosmology has at least been an entertaining diversion. But don’t mistake it for real science.

The discovery of alien life forms might reasonably be expected to create headline news. But the media response to the announcement of such a ‘discovery’ in the Journal of Cosmology [1] has been muted, and mostly dismissive. “Bugs from space? Forget it”, said Science reporter Richard Kerr, while the Los Angeles Times quoted microbiologist Rosie Redfield as saying “Move along folks. There’s nothing to see here.”

These are somewhat more presentable than the comments received by Nature, of which ‘utter nonsense’ is a polite paraphrase. But the real story is stranger than Richard Hoover’s claim to have found fossilized extraterrestrial bacteria. Who is Hoover and what is the Journal of Cosmology and why has NASA been moved to officially distance itself from the affair?

That Hoover can rightly claim to be a NASA scientist may sound impressive to the media, but most scientists know that the space agency is a morass of odd ideas squirming below its gleaming surface. This of course goes with the territory: folks who dedicate their lives to the exploration of space tend to be bold, even extravagant thinkers, many of them today the children of the science-fiction fantasies of the 1950s and 60s, and the kind of imagination that can put people on Mars is bound to put a lot of other weird stuff out there too.

Hoover is himself an engineer and astrobiologist at NASA’s Marshall Space Flight Centre in Huntsville, Alabama, and he has been pushing this claim for years. “Personally, I have a completely open mind”, says meteoriticist Ian Wright of the Open University in Milton Keynes, England. “The problem for Hoover is that no matter how many papers he writes on this subject, people will only begin to accept the findings when they are replicated by others.”

Hoover’s paper reports curious microscopic filamentary structures seen inside a number of carbon-rich meteorites, including the classic Orgueil meteorite that fell in France in 1864 and was examined by Louis Pasteur among others. These filaments have a carbon-rich coat filled with minerals, and Hoover points out that they look remarkably similar to structures formed by living and fossil cyanobacteria.

This may be so, but it doesn’t prove that the bacterial forms – if they are that – are extraterrestrial. Hoover says that because the structures are buried deep inside the meteorites, it is unlikely that they represent contamination by microorganisms on Earth. Experts don’t buy this. “Contaminants can easily get inside carbonaceous meteorites as they are relatively porous”, says Iain Gilmour of the Open University, who points to direct evidence of this for at least one other carbon-rich meteorite.

Meteoriticist Harry McSween of the University of Tennessee agrees. “All of us who have studied meteorites, especially CI chondrites [the class studied by Hoover], are aware that they have been terrestrially contaminated”, he says.

In fact, claims very similar to Hoover’s were made in the 1960s by the chemist Bartholomew Nagy, leading to a high-profile debate which left a consensus that Nagy’s ‘life-like’ structures were the result of contamination by pollen grains. Similar assertions of bacteria-like fossil forms in a Martian meteorite, made by NASA scientists in 1996 [2], have also been judged inconclusive.

If Hoover’s report is so unconvincing, why was it published? The Journal of Cosmology asserts that all its papers are peer-reviewed, but also states that “Given the controversial nature of [Hoover’s] discovery, we have invited 100 experts and have issued a general invitation to over 5000 scientists from the scientific community to review the paper and to offer their critical analysis… No other paper in the history of science has undergone such a thorough analysis.” This is a decidedly unorthodox publication strategy, not least because many of the ‘commentaries’ published so far by the journal seem more like the kind of thing one would find on fringe blogs.

Doubtless this is why NASA has been embarrassed into releasing a disclaimer about the work. “NASA cannot stand behind or support a scientific claim unless it has been peer-reviewed or thoroughly examined by other qualified experts”, it says. “NASA was unaware of the recent submission of the paper to the Journal of Cosmology or of the paper's subsequent publication.”

But the Journal of Cosmology is no ordinary journal. It has been running for just two years under the leadership of astrophysicist Rudolf Schild of the Harvard-Smithsonian Center for Astrophysics, and is a torch-bearer for the hypothesis of panspermia, according to which life on Earth was seeded by organisms brought here from other worlds. This was a favourite theory of the maverick astrophysicist Fred Hoyle and his colleague N. C. Wickramasinghe (an executive editor of the journal), who have argued that alien viruses could explain flu epidemics. Other highlights of the journal include an article titled ‘Sex on Mars’, which asks the burning question: have astronauts ever had sex, and is it safe?

A press release from the journal has now announced that it will cease publication in May, claiming to have been “killed by thieves and crooks”. The journal’s success “posed a direct threat to traditional subscription based science periodicals”, says senior executive managing director Lana Tao. “JOC was targeted by Science magazine and others who engaged in illegal, criminal, anti-competitive acts to prevent JOC from distributing news about its online editions and books.”

If JOC is no more, this is arguably a shame, since there ought to be space for such entertaining and eccentric voices. It’s true that apparently authentic journals like this might muddy the public’s distinction between real science and half-baked speculation; but judging from the latest episode, the world (apart from Fox News) is not as gullible as all that. 

References

1. Hoover, R. B. J. Cosmol. 13 [no pages] (2011).
2. McKay, D. S. et al., Science 273, 924-930 (1996).

Friday, March 04, 2011

I can see clearly now


Here’s a little piece I wrote for Nature news. To truly appreciate this stuff you need to take a look at the slideshow. There will be a great deal more on early microscopy in my next book, probably called Curiosity and scheduled for next year.
________________________________________________________________________

The first microscopes were a lot better than they are given credit for. That’s the claim of microscopist Brian Ford, based at Cambridge University and a specialist in the history and development of these instruments.

Ford says it is often suggested that the microscopes used by the earliest pioneers in the seventeenth century, such as Robert Hooke and Antony van Leeuwenhoek, gave only very blurred images of structures such as cells and micro-organisms. Hooke was the first to record cells, seen in thin slices of cork, while Leeuwenhoek described tiny ‘animalcules’, invisible to the naked eye, in rain water in 1676.

The implication is that these breakthroughs in microscopic biology involved more than a little guesswork and invention. But Ford has looked again at the capabilities of some of Leeuwenhoek’s microscopes, and says ‘the results were breathtaking’. ‘The images were comparable with those you would obtain from a modern light microscope’, he adds in an account of his experiments in Microscopy and Analysis [1].

“It's a very trustworthy and interesting article”, says Catherine Wilson, a historian of microscopy at the University of Aberdeen in Scotland. “Ford is the world’s leading expert on the topic and what he has to say here makes a good deal of sense”, she adds.

The poor impression of the seventeenth-century instruments, says Ford, is due to bad technique in modern reconstructions. In contrast to the hazy images shown in some museums and television documentaries, careful attention to such factors as lighting can produce micrographs of startling clarity using original microscopes or modern replicas.

Ford was able to make some of these improvements when he was granted access to one of Leeuwenhoek’s original microscopes owned by the Utrecht University Museum in the Netherlands. Leeuwenhoek made his own instruments, which had only a single lens made from a tiny bead of glass mounted in a metal frame. These simple microscopes were harder to make and to use than the more familiar two-lens compound microscope, but offered greater resolution.

Hooke popularized microscopy in his 1665 masterpiece Micrographia, which included stunning engravings of fleas, mites and the compound eyes of flies. The diarist Samuel Pepys judged it ‘the most ingenious book that I ever read in my life’. Ford’s findings show that Hooke was not, as some have imagined, embellishing his drawings from imagination, but should genuinely have been able to see such things as the tiny hairs on the flea’s legs.

Even Hooke was temporarily foxed, however, when he was given the duty of reproducing the results described by Leeuwenhoek, a linen merchant of Delft, in a letter to the Royal Society. It took him over a year before he could see these animalcules, whereupon he wrote that ‘I was very much surprised at this so wonderful a spectacle, having never seen any living creature comparable to these for smallness.’

‘The abilities of those pioneer microscopists were so much greater than has been recognized’, says Ford. He attributes this misconception to the fact that ‘no longer is microscopy properly taught.’

Reference
1. Ford, B. J. Microsc. Anal. March 2011 (in press).

Wednesday, March 02, 2011

Return of the mad scientist


I have a comment on the Prospect blog about the production of Frankenstein at the National Theatre, which I saw this week. To save you a click, here it is anyway. I am reviewing the play more formally for Nature. It’s flawed but worth seeing – but if you haven’t got a ticket, tough luck, as it’s sold out. However, I believe you could still come to this.

________________________________________________________________________

Do not go to see the Monstrous Drama, founded on the improper work called FRANKENSTEIN!!! Do not take your wives, do not take your daughters, do not take your families!!!

Actually, although the latest adaptation of Mary Shelley’s story at the National Theatre, scripted by Nick Dear and directed by Danny Boyle, includes nudity and a rape that would certainly not have featured in the 1823 staging that prompted this warning, there is little here that would shock most wives and daughters. Even the Grand Guignol gore in the draft script has been toned down. One scene even turns into a dance routine like some monstrous hybrid of Oliver! and The Rocky Horror Picture Show.

None of this is a bad thing. Some is very good: the staging is spectacular, the adaptation largely thoughtful and the monster – I can comment only on Jonny Lee Miller’s version in the show’s alternation of lead roles – is the most inventive and heartfelt I have seen, owing something to Caliban, Charles Laughton’s Hunchback of Notre Dame and even the Elephant Man. Some of the secondary performances creak, and some of the dialogue is throwaway, but the main problem is the title character.

Benedict Cumberbatch, who played Victor Frankenstein on the night I saw it, did all a versatile, intelligent actor of his calibre could be expected to do with the lines he was given. But about halfway through the production, the penny dropped as to why he seemed to be struggling. He is the Mad Scientist.

True, he does not cackle like Gene Wilder or shriek Colin Clive’s line from James Whale’s seminal movie – ‘Now I know what it feels like to be God!’ But that’s part of the problem: not even naked madness motivates his egotistical quest, his utter neglect of his doting fiancée, his contempt for the ‘little men with little lives’, his lack of real anguish about his young brother’s murder. From the outset it is clear that he is a stranger to human feeling and has not the slightest real interest in developing his knowledge of reanimation for ‘medical research’. Set against a creature who we see develop from its ‘birth’ and first baby steps to a state of savage grace and wisdom, all the time spurned and despised for looking no worse than a person flung through a windscreen, there is never any doubt who is the real monster.

I don’t think it makes much sense for scientists to feel indignant at this portrayal. Frankenstein has for so long been the archetype of the mad scientist that another representation as literal as this can’t elaborate on that image. And anyone who could entertain the notion that this cold, amoral individual experimenting in misanthropic solitude for nothing but personal glory bears the slightest resemblance to the modern scientist is already too biased and ignorant to argue with. This Frankenstein is a fairy-tale figure, like the wicked witch or the evil stepmother. The only harm this can do today is in dramatic terms: villains need to be either more complex or more exuberantly depraved to work as central characters. For all its virtues, Nick Dear’s adaptation in the end takes the easier option in making us love the monster. A production that tries to make us feel sympathy for Victor, a useless but confused and struggling father – now that would be an interesting challenge. 

Thursday, February 24, 2011

A metaphor too far


I have a Muse on Nature’s online news about metaphor in science; here’s the pre-edited version. In this huge and complex topic, this piece is a drop in the ocean.
____________________________________________________________________

Are scientists addicted to using metaphorical imagery at the cost of misleading the public and themselves?

Metaphors influence the way we think. In a recent paper in PLoS ONE, Stanford psychologists Paul Thibodeau and Lera Boroditsky show that how people judge the appropriate response to crime differs significantly when it is presented as a ‘beast’ or a ‘virus’ ravaging society [1]. In the former case they were more likely to call for stronger law enforcement, whereas in the latter there was more openness to solutions involving reform and understanding of root causes.

Perhaps the most striking aspect of this study is that the participants were unaware of the role the metaphorical context was playing. Instead they found ways to rationalize their decision based on apparently objective information such as statistics. “Far from being mere rhetorical flourishes”, the researchers say, “metaphors have profound influences on how we conceptualize and act with respect to important societal issues.”

To have this demonstrated and quantified is valuable – but perhaps mostly because it underlines what politicians and their advisers have never doubted. If there is a spin doctor or speechwriter who does not already recognize that metaphors sway opinion, it is a mystery how they ever got the job.

It isn’t hard to see why ‘crime as wild beast of prey’ encourages people to think about how to cage or kill it, whereas ‘crime as virus’ fosters more eagerness for ‘scientific’ understanding of causes. But too rarely are such metaphors interrogated at a deeper level.

In both the cases here, crime is presented as a (malevolent) force of nature, outside human agency. Whether beast or virus, the criminal is not like us – is not in fact human. By the same token, a ‘war on drugs’ or a ‘war on terror’ is not just an emotive image but deploys a narrative that bears little relation to reality.

In literature metaphor serves poetic ends; in politics it is a (subtly manipulative) argument by analogy. But in science, metaphor is widely considered an essential tool for understanding. So where then does this latest work leave us?

While the example of crime here imputes natural agency to human actions, science generally invokes metaphors the other way around: natural processes are described as if resulting from intention. This anthropomorphizing tendency was called the ‘pathetic fallacy’ by the nineteenth-century critic John Ruskin, though it was noted two centuries earlier by Francis Bacon.

It is an ingrained and profoundly influential habit, especially in biology [2-6], where intimations of intelligent agency seem irresistible even to those who deplore them. Most famous in this respect is Richard Dawkins’ selfish gene. Given the idea Dawkins strove to convey in his 1976 book of that title, the metaphor seems apt and understandable almost to the point of inevitability. But its problems go well beyond the fact that genes are of course not selfish in the way that people are (which is to say, they are not selfish at all).

For the selfish gene props up the whole notion of a Darwinian world as uncaring to the point of being positively nasty: an image that has sometimes provoked resistance to the sciences in general and natural selection in particular. And as physiologist Denis Noble has compellingly argued, the idea that genes are ‘selfish’ is totally unnecessary for understanding how they work, and in some ways misleading [7].

But it is no better to talk instead of the ‘cooperative gene’, which is equally value-laden and misleading. Genes are not selfish or cooperative any more than they are happy or short-tempered. The central problem here is that of scientific metaphor in general [8,9].

Books of life, junk DNA, DNA barcodes – all can distort the picture, and all have, not least because sometimes scientists themselves start to forget that these are metaphors. And when the science moves on – when we discover that the genome is nothing like a book or blueprint – the metaphors tend nonetheless to stick. The more vivid they are, the more dangerously seductive and resistant to change.

Thibodeau and Boroditsky give us new cause to be wary, for they show how unconsciously metaphors colour the way we reason. This seems likely to be as true in science – especially a science as emotive as genetics – as in social and political discourse.

Most scientists would probably agree with physiologist Robert Root-Bernstein that ‘metaphors are essential to doing and teaching science’ [10]. They might sympathize with biologist Paul Hebert’s response to criticisms of his ‘DNA barcoding’ metaphor [11]: “Why want to be so scientifically proper as to make our science tedious?” [12]

But the need for metaphor in science stands at risk of becoming dogma. Maybe we are too eager to find a neat metaphor rather than just explaining what is going on as clearly and honestly as we can. We might want to recognize that some concepts are “a reality beyond metaphor”, as David Baltimore has said of DNA [13]. At the very least, we might admit metaphor into science only after strict examination, and heed the warning of cyberneticists Arturo Rosenblueth and Norbert Wiener that “the price of metaphor is eternal vigilance” [14].

References

1. P. H. Thibodeau & L. Boroditsky, PLoS ONE 6, e16782 (2011).
2. D. Nelkin, Nat. Rev. Genet. 2, 555-559 (2001).
3. B. Nerlich, R. Elliott & B. Larson (eds), Communicating Biological Sciences (Ashgate, Farnham, 2009).
4. B. Nerlich & R. Dingwall, in Cognitive Models in Language and Thought: Ideology, Metaphors and Meanings (eds R. Dirven, R. Frank & M. Pütz), pp. 395-428 (Mouton de Gruyter, Berlin, 2003).
5. L. E. Kay, Who Wrote the Book of Life? (Stanford University Press, Stanford, 2000).
6. E. F. Keller, Refiguring Life (Columbia University Press, New York, 1996).
7. D. Noble, The Music of Life (Oxford University Press, Oxford, 2006).
8. G. Lakoff & M. Johnson, Metaphors We Live By (University of Chicago Press, Chicago, 1981).
9. T. L. Brown, Making Truth: Metaphor in Science (University of Illinois Press, Urbana, 2003).
10. R. Root-Bernstein, Am. Scient. 91(6) (2003).
11. P. Hebert, Proc. R. Soc. B Biol. Sci. 270, 313-321 (2003).
12. Quoted in ref. 3, p.161.
13. Quoted in ref. 3, p.158.
14. Quoted in R. C. Lewontin, Science 291, 1263-1264 (2001).

Thursday, February 17, 2011

Fruit flies sniff out heavy hydrogen


Here’s my latest news article for Nature. It’s worth checking out the comments on the Nature site.
______________________________________________________________________

Insects' ability to discriminate isotopes reignites debate over a controversial theory of olfaction.

Fruit flies can smell the difference between ordinary and heavy hydrogen, according to new research published today.

Efthimios Skoulakis of the Alexander Fleming Biomedical Sciences Research Centre in Vari, Greece, and his colleagues say that fruit flies show a preference for an odorant molecule containing ordinary hydrogen over the same molecule with the hydrogen replaced by heavy hydrogen (deuterium), when presented with both odorants in the two branches of a T-shaped maze.

The flies can also be conditioned to display a selective aversion to either of the forms of the odorant by electric-shock treatment, showing that they can clearly distinguish between them. The researchers report their findings in the Proceedings of the National Academy of Sciences USA [1].

Skoulakis and colleagues say that the results offer strong support to a controversial theory of how olfaction works, proposed some years ago by Luca Turin of the Massachusetts Institute of Technology, who is a co-author of the paper. According to Turin, odorants are identified by the olfactory apparatus not by their molecular shape but by their vibrations.

“This is an important paper, and offers very strong evidence in favour of the vibrational theory of olfaction”, says materials physicist Andrew Horsfield of Imperial College London.

But others are not convinced. Leslie Vosshall, a neuroscientist specializing in olfaction at the Rockefeller University in New York, considers it interesting that flies show such discrimination, but adds that “these findings by themselves do not provide strong support for any of the prevailing models of smell.”

Deuterium is an isotope of hydrogen: unlike ordinary hydrogen, its atoms contain a neutron in the nucleus as well as a proton. This makes the atoms roughly twice as heavy. The chemical properties of deuterium are much the same as those of ordinary hydrogen, but its greater mass means that when the atoms are bonded to others in a molecule, they vibrate more slowly.

In the predominant theory of olfaction, odorant molecules dock into cavities in receptor proteins lodged in the olfactory membranes. This docking depends on a match between the shape of the odorant and that of the cavity; if they fit together, this triggers a neural signal to the brain.

But Turin thinks that instead the receptor proteins ‘sense’ the vibrations of the odorant, an effect made possible by the quantum-mechanical behaviour of electrons in the molecules. Horsfield and others have shown that this process could work in theory [2], but there is no direct evidence for it in practice.

If Turin is right, deuterium-substituted odorants should smell different to those with ordinary hydrogen because they have different vibration frequencies.

There is not yet any good evidence that deuterated compounds smell different to humans [3], but subtle biases are hard to eliminate from such tests. That’s why Turin teamed up with Skoulakis to test fruit flies, which are less susceptible to biases and are known to have a good sense of smell.

When presented with the attractive (to flies) odorant acetophenone, the fruit flies showed an increasing aversion to it as more of its hydrogens were replaced by deuterium. The researchers could train the flies to associate either the deuterated or normal odorant with punishing electric shocks applied to their feet via the floor of the maze, and to avoid them accordingly.

If the vibrational mechanism of smell is correct, the researchers reasoned that flies trained to avoid deuterated odorants should display a similar aversion to compounds called nitriles, since the vibration of the nitrile chemical group has a very similar frequency to that of the bonds between deuterium and carbon. They found this was so.
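The frequency match behind this reasoning follows from simple harmonic-oscillator physics: a bond vibrates at a frequency proportional to the square root of its stiffness divided by the reduced mass of the two atoms, and swapping hydrogen for deuterium changes only the mass. A back-of-envelope sketch (using textbook stretch frequencies, not numbers from the paper itself):

```python
import math

def reduced_mass(m1, m2):
    """Reduced mass of a two-body oscillator, in atomic mass units."""
    return m1 * m2 / (m1 + m2)

# Carbon-hydrogen versus carbon-deuterium: the bond stiffness k is
# essentially unchanged by isotope substitution, so the frequency
# scales as sqrt(mu_CH / mu_CD).
mu_CH = reduced_mass(12.0, 1.0)
mu_CD = reduced_mass(12.0, 2.0)

# A typical C-H stretch sits near 2900 cm^-1 (standard textbook value).
freq_CH = 2900.0
freq_CD = freq_CH * math.sqrt(mu_CH / mu_CD)

print(f"C-H stretch:           {freq_CH:.0f} cm^-1")
print(f"Predicted C-D stretch: {freq_CD:.0f} cm^-1")
```

The predicted C–D stretch comes out near 2100 cm⁻¹, close to the nitrile (C≡N) stretch at roughly 2250 cm⁻¹, which is why nitriles make a natural control: to a vibration-sensing receptor, a deuterated bond and a nitrile group should “sound” alike.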

But Bill Hansson, a specialist in insect olfaction at the Max Planck Institute for Chemical Ecology in Jena, Germany, isn’t persuaded. He points out that, although isotopes of an element are generally chemically identical, hydrogen and deuterium are not always so, given the large (2:1) difference in their masses. After all, heavy water is toxic, and even in these odorants the substitution of deuterium changes properties such as melting and boiling points.

“If hydrogen bonds between the odorant and corresponding receptor play a major role, insects may well be able to discriminate between deuterated and non-deuterated compounds using conformational [shape-based] sensing”, he says.

Vosshall is also sceptical. “Insects use odorant receptors that are structurally and functionally distinct from these human receptors, yet this group claims that the same vibration mechanism operates in these very distinct proteins”, she says. “This idea is difficult to reconcile with the current knowledge of how these completely divergent protein types detect odors.”

Regardless of the mechanism, might humans discriminate isotopes by smell too? “Extrapolation to humans has to be treated with care”, Horsfield warns. Turin has, however, received unpublished reports of isotopic smell discrimination in dogs. “In one case at least the dogs are said to completely ignore the deuterated version of an odorant that they are trained to detect in the undeuterated version”, he says.

“Things are unlikely to work exactly in the same way for humans”, he acknowledges. But he is convinced that something analogous applies.

References 

1. Franco, M. I., Turin, L., Mershin, A. & Skoulakis, E. M. C. Proc. Natl Acad. Sci. USA details to come.
2. Brookes, J. C., Hartoutsiou, F., Horsfield, A. P. & Stoneham, A. M. Phys. Rev. Lett. 98, 038101 (2007).
3. Keller, A. & Vosshall, L. B. Nat. Neurosci. 7, 337-338 (2004).

Thursday, February 10, 2011

Talk about Unnatural

While they last: I discuss Unnatural on the Guardian books podcast and on BBC Radio 4’s Today programme last Tuesday. 

Monday, February 07, 2011

Fears for tears


Here’s my latest Crucible column for Chemistry World. Weird stuff, huh?

*****************************************
There is an early candidate for this year’s Ig Nobel prize in chemistry, one of the annual spoof awards for ‘improbable research’. The work reported in Science by Shani Gelstein of the Weizmann Institute of Science in Rehovot, Israel, and colleagues [1] has precisely the degree of risqué unlikelihood that the Ig Nobel committee clearly enjoys. It is not to descend too far into tabloid sensationalism to describe the findings thus: men say they don’t feel much like having sex after sniffing women’s tears.

To judge a piece of work worthy of an Ig Nobel is not necessarily to denigrate it, and indeed I’d argue that the research by Gelstein and colleagues raises interesting and significant questions. Several Ig Nobel laureates have investigated problems of genuine value: in one of my favourites, chemical engineers Ed Cussler and Brian Gettelfinger looked at whether the theoretical viscosity scaling relationships for drag and thrust in swimming motions are borne out experimentally by having people swim in a pool filled with syrup [2]. Others – and I believe Gelstein et al. fall into this category – look odd, even perverse, merely because odd and perverse things happen in nature.

That is plain from the context of the work, which confronts us with the astonishing fact that we do not know why we cry. Anyone tempted to sneer at the motivation of the Israeli team should be silenced by this stark truth. It is not hard to devise stories about the adaptive value of tears – one such invokes their potential to prevent dehydration of mucous membranes while weeping [3] – yet far harder to adduce any proof. The odd thing about tears as an emotional signal is that they seem to be purely symbolic, whereas Darwin imagined that such signals must have (or have had) some functional role too – the baby’s cry broadcasts its distress, say [4].

Tears are surprisingly complex structures, an investment that surely must have some payoff. They are not just salty water, but contain enzymes and other proteins, lipids and metabolites. The lipids self-organize into a surface film in which two-dimensional crystal and liquid patches interact to create resilience as the droplets deform [5]. Emotional tears of humans have a slightly different composition from mere ‘eye-watering’ tears; and noting that mouse tears contain a pheromone, Gelstein and colleagues wondered if this might be true of human emotional tears too.

To study that, they harvested ‘sad tears’ from women watching weepy films, and investigated whether men could smell any difference between these and saline solution. They couldn’t. But then the researchers showed the male subjects images of women’s faces while constantly exposing them to the vapours of the tears by attaching a tear-soaked pad beneath their nostrils, and asked the men to assess the sadness and the sexual attractiveness of the images.

As psychological testing goes, this seems to be heading into strange territory. But the results were surprising: while the tears did not influence judgements of sadness, they significantly lowered ratings of attractiveness. In related tests, the men reported lower sexual arousal after sniffing tears – an effect corroborated by measurements of psychophysiological state (such as skin conductance), testosterone levels, and even brain activity monitored by functional MRI. Importantly, the men did not know that the substance to which they were being exposed was female tears, nor had they seen the women cry.

The nature of the chemical signal in the tears presumed to be triggering these effects isn’t yet clear. But its mere existence adds an unexpected new dimension to the chemical basis of sexual interaction – which, even if the metaphor is rather archly belaboured by chemists, is already undeniable [6].

Other questions abound. What are the effects of same-sex tears, or children’s tears? Are other functions besides sexual arousal affected? In any event, the current results are an invitation for evolutionary psychologists to cook up explanations of why it is adaptive to experience lower sexual arousal when someone is crying. Does that allow us to hug them without wanting to make love to them? (Countless movies, notably Don’t Look Now, insist otherwise, as does the stereotype of the sexual predator who exploits emotional vulnerability.) It’s fine to speculate, but perhaps better to exercise restraint and regard this intriguing finding as still at the stage of being a chemical rather than an evolutionary problem.

References

1. S. Gelstein et al. Science doi: 10.1126/science.1198331 (2011).
2. B. Gettelfinger & E. L. Cussler, Am. Inst. Chem. Engin. J. 50, 2646-2647 (2004).
3. A. Montagu, Science 130, 1572-1573 (1959).
4. C. Darwin, The Expression of the Emotions in Man and Animals (John Murray, London, 1872).
5. P. G. Petrov et al., Exp. Eye Res. 84, 1140-1146 (2007).
6. G. Froböse & R. Froböse, Love and Lust: Is It More than Just Chemistry? (RSC, Cambridge, 2006).