Wednesday, June 25, 2008

Birds that boogie
[I reckon this one speaks for itself. It is on Nature News. I just hope Snowball can handle the fame.]

YouTube videos of dancing cockatoos are not flukes but the first genuine evidence of animal dancing

When Snowball, a sulphur-crested male cockatoo, was shown last year in a YouTube video apparently moving in time to pop music, he became an internet sensation. But only now has his performance been subjected to scientific scrutiny. And the conclusion is that Snowball really can dance.

Aniruddh Patel of the Neurosciences Institute in La Jolla, California, and his colleagues say that Snowball’s ability to shake his stuff is much more than a cute curiosity. It could shed light on the biological bases of rhythm perception, and might even hold implications for the use of music in treating neurodegenerative disease.

‘Music with a beat can sometimes help people with Parkinson’s disease to initiate and coordinate walking’, says Patel. ‘But we don’t know why. If nonhuman animals can synchronize to a beat, what we learn from their brains could be relevant for understanding the mechanisms behind the clinical power of rhythmic music in Parkinson’s.’

Anyone watching Snowball can see that his foot-tapping seems to be well synchronized with the musical beat. But it was possible that in the original videos he was using timing cues from people dancing off camera. His previous owner says that he and his children would encourage Snowball’s ‘dancing’ with rhythmic gestures of their own.

Genuine ‘dancing’ – the ability to perceive and move in time with a beat – would also require that Snowball adjust his movements to match different rhythmic speeds (tempi).

To examine this, Patel and his colleagues went to meet Snowball. He had been left by his previous owner at a bird shelter, Birdlovers Only Rescue Service Inc. in Schererville, Indiana, in August 2007, along with a CD containing a song to which his owner said that Snowball liked to dance: ‘Everybody’ by the Backstreet Boys.

Patel and colleagues videoed Snowball ‘dancing’ in one of his favourite spots, on the back of an armchair in the office of Birdlovers Only. They altered the tempi of the music in small steps, and studied whether Snowball stayed in synch.

This wasn’t as easy as it might sound, because Snowball didn’t ‘dance’ continuously during the music, and sometimes he didn’t get into the groove at all. So it was important to check whether the episodes of apparent synchrony could be down to pure chance.

‘On each trial he actually dances at a range of tempi’, says Patel. But the lower end of this range seemed to correlate with the beat of the music. ‘When the music tempo was slow, his tempo range included slow dancing. When the music was fast, his tempo range didn’t include these slower tempi.’

A statistical check on these variations showed that the correlation between the music’s rhythm and Snowball’s slower movements was very unlikely to have happened by chance. ‘To us, this shows that he really does have tempo sensitivity, and is not just ‘doing his own thing’ at some preferred tempo’, says Patel.
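The chance-correction step is the crux here, and a toy permutation test makes the logic concrete. The sketch below is not the authors' analysis: the tempo numbers, the 'mismatch' measure and the shuffling scheme are all placeholders of mine, meant only to show how one asks whether apparent synchrony could be down to chance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical numbers, for illustration only: the music tempo of each trial
# (beats per minute) and the tempi of Snowball's movement bouts in that trial.
music_tempi = np.array([98.0, 103.0, 109.0, 116.0, 123.0, 130.0])
bout_tempi = [rng.normal(m, 8.0, size=20) for m in music_tempi]

def mismatch(music, bouts):
    # Mean gap between each trial's music tempo and its closest bout tempo;
    # smaller means the bouts track the music more tightly.
    return np.mean([np.min(np.abs(b - m)) for m, b in zip(music, bouts)])

observed = mismatch(music_tempi, bout_tempi)

# Null hypothesis: the bouts have nothing to do with the music. Shuffle which
# trial's bouts go with which music tempo and see how often chance alone
# matches the music at least as well as the real pairing.
null = np.array([
    mismatch(music_tempi, [bout_tempi[i] for i in rng.permutation(len(bout_tempi))])
    for _ in range(5000)
])
p_value = np.mean(null <= observed)
print(f"observed mismatch {observed:.2f} bpm, permutation p = {p_value:.3f}")
```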

He says that Snowball is unlikely to be unique. Adena Schachner of Harvard University has found evidence of genuine synchrony in YouTube videos of parrots, and also in studies of perhaps the most celebrated ‘intelligent parrot’, the late Alex, trained by psychologist Irene Pepperberg [1]. Patel [2] and Schachner will both present their findings at the 10th International Conference on Music Perception and Cognition in Sapporo, Japan, in August.

Patel and his colleagues hope to explore whether Snowball’s dance moves are related to the natural sexual-display movements of cockatoos. Has he invented his own moves, or simply adapted those of his instinctive repertoire? Will he dance with a partner, and if so, will that change his style?

But the implications extend beyond the natural proclivities of birds. Patel points out that Snowball’s dancing behaviour is better than that of very young children, who will move to music but without any real synchrony to the beat [3]. ‘Snowball is better than a typical 2-4 year old, but not as good as a human adult’, he says. (Some might say the same of Snowball’s musical tastes.)

This suggests that a capacity for rhythmic synchronization is not a ‘musical’ adaptation, because animals have no genuine ‘music’. The question of whether musicality is biologically innate in humans has been highly controversial – some argue that music has served adaptive functions that create a genetic predisposition for it. But Snowball seems to be showing that an ability to dance to a beat does not stem from a propensity for music-making.

References

1. Pepperberg, I. M. Alex & Me (HarperCollins, 2008).
2. Patel, A. D. et al., Proc. 10th Int. Conf. on Music Perception and Cognition, eds M. Adachi et al. (Causal Productions, Adelaide, in press).
3. Eerola, T. et al., Proc. 9th Int. Conf. on Music Perception and Cognition, eds M. Baroni et al. (2006).

Wednesday, June 18, 2008

Fly me to the moon?

Last Monday I took part in a debate at the Royal Institution on human spaceflight: is it humanity’s boldest endeavour or one of our greatest follies? My opponent was Kevin Fong of UCL, who confirmed all my initial impressions: he is immensely personable, eloquent and charming, and presents the sanest and least hyperbolic case for human spaceflight you’re ever likely to hear. All of which was bad news for my own position, of course, but in truth this was a debate I was never going to win: a show of hands revealed an overwhelming majority in favour of sending humans into space at the outset, and that didn’t change significantly (I was gratified that I seemed to pick up a few of the swing voters). And perhaps rightly so: if Kevin was put in charge of prioritizing and publicizing human spaceflight in the west, I suspect I’d find it pretty unobjectionable too. Sadly, we have instead the likes of the NASA PR machine and the bloomin’ Mars Society. (The only bit of hype I detected from Kevin all evening was about the importance to planetary geology of the moon rocks returned by Apollo – he seemed to accept (understandably, as an anaesthetist) the absurdly overblown claims of Ian Crawford.) In any event, it was very valuable to hear the ‘best case’ argument for human spaceflight, so that I could sharpen my own views on the matter. As I said then, I’m not against it in principle (I’m more of an agnostic) – but my goodness, there’s a lot of nonsense said and done in practice, and it seems even the Royal Astronomical Society bought some of it. Here, for what it is worth, is a slightly augmented version of the talk I gave.

*****

Two weeks ago I watched the documentary In the Shadow of the Moon, and was reminded of how exciting the Apollo missions were. Like most boys growing up in the late 60s, I wanted to be an astronaut. I retain immense respect for the integrity, dedication and courage of those who pioneered human spaceflight.

So it’s not idly that I’ve come to regard human spaceflight today as a monumental waste of money. I’ve been forced to this conclusion by the stark facts of how little it has achieved and might plausibly achieve in the near future, in comparison to what can be done without it.

Having watched those grainy, monochrome pictures in 1969, and having duly built my Airfix lunar modules and moon buggies, as a teenager I then watched Carl Sagan’s TV series Cosmos at the start of the 1980s. Now, Sagan did say ‘The sky calls to us; if we do not destroy ourselves we will one day venture to the stars.’ And I suspect he is right. But he, like me, didn’t seem to be in any great hurry about that. Or rather, I think he felt that we were essentially going there already, because Sagan drew on the results then just arriving from the Voyager spacecraft, launched only a year or so before the series was made and at that time investigating Jupiter and Saturn. He also reaped the bounty of the earlier Mariner missions to Venus and Mars, which offered images that remain stunning even now. The moon landings were a fantastic human achievement, but it was the unmanned missions that I encountered through Cosmos that really opened my eyes to the richness and the strangeness of the universe. Even in Technicolor, the moon is a drab place; but here, thanks to the Mariners and Voyagers, were worlds of swirling colour, of ice sheets and volcanoes and dust storms and molten sulphur. Did I feel short-changed that we weren’t sending humans to these places? On the contrary, I think I sensed even then that humans don’t belong here; they would simply be absurd, insignificant, even unwelcome intruders.

There had been Skylab in the 1970s, of course, in Earth orbit for six years, and that seemed kind of fun but now I recall a nagging sense that I wasn’t sure quite what they were doing up there, beyond a bit of microgravitational goofing around. And then came the space shuttle, and the Challenger disaster of 1986, and I began to wonder, what exactly is the aim of all this tentative astronautics at the edge of space?

And all the while that human spaceflight was losing its way, unmanned missions were offering us jaw-dropping sights. The Magellan mission wowed us on Venus, the Galileo mission gave thrilling views of Jupiter and its moons, and the rovers Opportunity and Spirit continue to wander on Mars sending back breathtaking postcards. And most recently, the Cassini-Huygens mission to Saturn and its moon Titan has shown us images of the strangest world we’ve ever seen, with methane lakes oozing up against shores encrusted with organic material under the drizzle of a methane rain.

This has all made me look again at the arguments put forward for why humans should go into space. And I’ve yet to find one that convinces me of its value, at this stage in our technological evolution.

One of the first arguments we hear is that there are technological spinoffs. We need to be cautious about this from the outset, because if you put a huge amount of money into developing any new technology, you’re bound to get some useful things from it. Of course, it is probably impossible to quantify, and perhaps rather meaningless to ask, what we would have found if we had directed even a small fraction of the money spent on human spaceflight directly into research on the sort of products it has spun off; but the fact remains that if you want a new kind of miniature heart pump or a better alloy for making golf clubs or better thermal insulation – if you really decide that you need these things badly – then sending people into space is a peculiar way of going about it. Whatever you want to say about the ragbag of products that have had some input from human spaceflight technology, I don’t think you can call them cost-effective. We’ve also got to take care to distinguish the spinoffs of human spaceflight from those that have come from unmanned spaceflight.

What’s more, the spinoff argument has been routinely distorted. Ask many people what the major spinoffs from spaceflight are and they will say ‘Teflon’. So let me tell you: DuPont’s major Teflon plant in Virginia was producing a million pounds of it a year in 1950, and Teflon cookware was in the stores when Yuri Gagarin orbited the earth. Then people might say ‘Velcro’ – no, invented in Switzerland in 1941. Or if they’re American, they might cite the instant fruit drink Tang, which NASA simply bought off the supermarket shelf for their astronauts. When the head of NASA, Mike Griffin, referred to spinoffs in a recent speech defending human spaceflight, the first examples he reached for were these three – even though he then admitted that these didn’t come from the space program at all! You have to wonder why these spinoff myths have been allowed to persist for so long – was there really nothing better to replace them?

Then there’s the argument that you can do great science in space. Here again it is not too strong to say that some advocates routinely peddle false claims. Yes, you can do some neat experiments in space. For example, you can look at the fine details of how crystals grow, undisturbed by the convection currents that stir things up under gravity. And that also means you can grow more perfect crystals. Fine – but have we truly benefited from it, beyond clearing up a few small questions about the basic science of crystal growth? One common claim is that these improved crystals, when made from biomolecules, can offer up a more accurate picture of where all the atoms sit, so that we can design better drugs to interact with them. But I am not aware of any truly significant advance in drug development that has relied in any vital way on crystals grown in space. If I’ve overlooked something, I’d be happy to know of it, although you can’t always rely on what you read to make that judgement. In 1999, for example, it was claimed that research on an anti-flu drug had made vital use of protein crystals grown in a NASA project on board a space shuttle. NASA issued a press release with the headline ‘NASA develops flu drugs in space’. To which one of the people involved in the study replied by saying the following: ‘the crystals used in this project were grown here on Earth. One grown on Mir [the Russian space station, and nothing to do with NASA] was used in the initial stages, but it was not significantly better than the Earth-grown crystals.’

I’m confident of this much: if you ask protein crystallographers which technology has transformed their ability to determine crystal structures with great precision, it won’t cross their minds to mention microgravity. They will almost certainly cite the advent of high-intensity synchrotron X-ray sources here on Earth. Crystals grown in space are different, we’re told. Yes, American physicist Robert Park has replied, they are: ‘They cost more. Three orders of magnitude more.’

What we do learn in space that we can’t easily learn on Earth is the effect of low or zero gravity on human physiology. That’s often cited as a key scientific motivation for space stations. But wait a minute. Isn’t there a bit of circularity in the argument that the reason to put people in space is to find out what happens to them when you put them there?

One of the favourite arguments for human space exploration, particularly of the moon and Mars, is that only humans can truly go exploring. Only we can make expert judgements in an instant based on the blend of logic and intuition that one can’t program into robots. Well, there’s probably some truth in that, but it doesn’t mean that the humans have to physically be there to do it. Remote surgery has demonstrated countless times now that humans can use their skill and judgement in real time to guide robotics. NASA researchers have been calling the shots all along with the Mars rovers. This pairing of human intelligence with remote, robust robotics is now becoming recognized as the obvious way to explore extreme environments on Earth, and it surely applies in space too. It’s been estimated that, compared with unmanned missions, the safety requirements for human exploration push up launch costs by at least a factor of ten. We still lose a fair number of unmanned missions, but we can afford to, both in financial and in human terms. Besides, it’s easy to imagine ways in which robots can in fact be far more versatile explorers than humans, for example by deploying swarms of miniature robots to survey large areas. And in view of the current rate of advance in robotics and computer intelligence, who knows what will become feasible within the kind of timescale inevitably needed to even contemplate a human mission to Mars. I accept that even in 50 years’ time there may well be things humans could do on Mars that robots cannot; but I don’t think it is at all clear that those differences will in themselves be so profound as to merit the immense extra cost, effort and risk involved in putting humans there.

And now let’s come to what might soon be called the Hawking justification for human space exploration: we ‘need’ another world if we’re going to survive as a species. At a recent discussion on human exploration, NASA astronaut and former chief scientist John Grunsfeld put it this way: ‘single-planet species don’t survive.’ He admitted that he couldn’t prove it, but this is one of the most unscientific things I’ve heard said about human space exploration. How do you even begin to formulate that opinion? I have an equally unscientific, but to my mind slightly more plausible suggestion: ‘species incapable of living on a single, supremely habitable planet don’t survive.’

Quite aside from these wild speculations, one wonders how some scientists can be quite so blind to what our local planetary environment is like. They seem ready to project visions of Earth onto any other nearby world, just as Edgar Rice Burroughs did in his Mars novels. If you’ve ever flown across Siberia en route to the Far East, you know what it is like down there: there’s not a sign of human habitation for miles upon miles. Humans are incredibly adaptable to harsh environments, but there are places on Earth where we just can’t survive unaided. Well, let me tell you: compared with the Moon and Mars, Siberia is like Bognor Regis. Humans will not live autonomously here while any of us is alive, nor our children. It may be that one day we can run a moonbase, much as we have run space stations. But if the Earth goes belly up, the Moon and Mars will not save us, and to suggest otherwise is fantasy that borders on the irresponsible.

I was once offered an interesting justification for human space exploration by American planetary scientist Brian Enke. In response to a critique of mine, he said this:
‘I can’t think of a better way to devastate the space science budget in future years than to kill the goose that lays the golden eggs, the manned space program. We would destroy our greatest justification and base of support in the beltway. Why should Uncle Sam fund space science at its current levels if it gives up on manned space exploration? Our funding depends upon a tenuous mindset - a vision of a progressive future that leads somewhere.’

In other words, we scientists may not be terribly interested in human spaceflight, but it’s what the public loves, and we can’t expect their support if we take that away.

Now, I have some sympathy with this; I can see what Brian means. But I can’t see how a human space program could be honestly justified on these grounds. Scientists surely have a responsibility to explain clearly to the public what they think they can achieve, and why they regard it as worth achieving. The moment we begin to offer false promises or create cosmetic goals, we are in deep trouble. Is there any other area of science in which we divert huge resources to placating public opinion, and even if there was, should we let that happen? In any event, human spaceflight is so hideously expensive that it’s not clear, once we have indulged this act of subterfuge, that we will have much money left to do the real science anyway. That is becoming very evidently an issue for NASA now, with the diversion of funds to fulfil George Bush’s grandiose promise of a human return to the moon by 2020, not to mention the persistent vision of a manned mission to Mars. If we give the ‘beltway’ what they want (or what we think they want), will there be anything left in the pot?

In fact, the more I think, in the light of history, about this notion of assuaging the public demand for ‘vision’, the more unsettling it becomes. Let’s put it this way. In the early 1960s, your lover says ‘Why are you a good-for-nothing layabout? Just look at what the guy next door is building – why can’t you do that?’ And so you say, ‘All right my dear, I’ll build you a rocket to take us to the moon.’ Your lover brightens up instantly, saying ‘Hey, that’s fantastic. I love you after all.’ And so you get to work, and before long your lover is saying ‘Why are you spending all this damned time and money on a space rocket?’ But you say, ‘Trust me, you’ll love it.’ The grumbling doesn’t stop, but you do it, and you go to the moon, and your lover says ‘Honey, you really are fabulous. I’ll love you forever.’ Two years later, the complaining has started again: ‘So you went to the moon. Big deal. Well, you can stop now, I’m not impressed any more.’ So you stop and go back to tinkering in your garage.

The years go by, and suddenly it’s the 1990s, and your lover is discontented again. ‘What have you ever achieved?’ and so on. ‘Oh, but I took us to the moon’, you say. ‘Big deal.’ ‘Well, you could go there again.’ ‘Hmm…’ ‘All right’, you say, exasperated, ‘look, we’ll go the moon again and then to Mars.’ ‘Oh honey, that’s so wonderful, if you do that I’ll love you forever.’ And what’s this? You believe it! You really believe that two years after you’ve been to Mars, they won’t be saying ‘Oh, Mars! Get a life. What else can you do?’ What a sucker. And indeed, what else will you do? Where will you go after that, to keep them happy for a few years longer?

We’re told that space science inspires young people to become scientists. I think this is true. But how do we know that they might not be equally motivated by scientific and technological achievements on Earth? Has anyone ever tried to answer that question? Likewise, how do we compare the motivation that comes from putting people into space with that from the Mars rovers or the Huygens mission to Titan? How would young people feel about being one of the scientists who made these things possible and who were the first to see the images they obtained? Is the allure of astronautics really so much more persuasive than anything else science has to offer young people? Do we know that it is really so uniquely motivating? I don’t believe that has ever been truly put to the test.

I mentioned earlier some remarks by NASA’s head Mike Griffin about human spaceflight. These were made in the context of a speech last year about the so-called ‘real’ reasons we send people into space. Sure, he said, we can justify doing this in hard-nosed cost-benefit terms, by talking about spinoffs, importance for national security, scientific discovery and so on. Now, as I’ve said, I think all those justifications can in fact be questioned, but in any case Griffin argued that they were merely the ‘acceptable’ reasons for space exploration, the kind of arguments used in public policy making. But who, outside of those circles, talks and thinks like that, he asked. The ‘real’ reasons why humans try to fly the Atlantic and climb Everest, he said, have nothing to do with such issues; they are, in Griffin’s words, ‘intuitive and compelling but not immediately logical’, and are summed up in George Mallory’s famous phrase about why we go up mountains: ‘Because it is there’. We want to excel, we want to leave something for future generations. The real reasons, Griffin said, are old-fashioned, they are all about the American pioneer spirit.

This is what the beltway wants to hear! That’s the Columbus ideal! Yes, the real reason many people, in the US at least, will confess to an enthusiasm for human spaceflight is that it speaks of the boldness and vision that has allowed humanity to achieve wonderful things. Part of this is mere hubris – the idea that we’ll have not ‘really’ been to Mars until we’ve stamped our big, dirty feet on the place (and planted our national flag). But part is understandable and valid: science does need vision and ambition. But in terms of space travel, this trades on the illusion that space is just the next frontier, like Antarctica but a bit further away. Well, it’s not. Earth is an oasis in a desert vaster than we can imagine. I can accept the Moon as a valid and clearly viable target, and we’ve been there. I do think that one day humans will go to Mars, and I’m not unhappy about that ultimate prospect, though I see no purpose in trying to do it with our current, fumbling technologies. But what then? Space does not scale like Earth: it has dimensions in time and space that do not fit with our own. Space is not the Wild West; it is far, far stranger and harder than that.

Actually, invoking the Columbus spirit is apt, because of course Columbus’s voyage was essentially a commercial one. And this, it seems, is the direction in which space travel is now going. In 2004 a privately financed spaceplane called SpaceShipOne won the Ansari X Prize, an award of US$10 million offered for the first non-government organization to launch a reusable manned spacecraft into space twice within two weeks. SpaceShipOne was designed by aerospace engineer Burt Rutan, financed by Microsoft co-founder Paul Allen. Rutan is now developing the space vehicle that Richard Branson plans to use for his Virgin Galactic business, which will offer the first commercial space travel. The plan is that Rutan’s SpaceShipTwo will take space tourists 100 kilometres up into suborbital space at a cost of around $200,000 each. Several other companies are planning similar schemes, and space tourism looks set to happen in one way or another. Part of me deplores this notion of space as a playground for the rich. But part of me thinks that perhaps this is how human spaceflight really ought to be done, if we must do it at all: let’s admit its frivolity, marvel at the inventiveness that private enterprise can engender, and let the wasted money come from the pockets of those who want it.

I must confess that I couldn’t quite believe the pathos in one particular phrase from Mike Griffin’s speech: ‘Who can watch people assembling the greatest engineering project in the history of mankind – the International Space Station – and not wonder at the ability of people to conceive and to execute that project?’ I’m hoping Griffin doesn’t truly believe this, but I fear he does. I think most scientists would put it a little differently, something like this: ‘Who can watch people assembling the most misconceived and pointless engineering project in the history of mankind – the International Space Station – and not wonder at the ability of people to burn dollars?’ Scientists disagree about a lot of things, but there’s one hypothesis that will bring near-unanimity: the International Space Station is a waste of space.

Ronald Reagan told the United States in 1984 that the space station would take six years to build and would cost $8 billion. Sixteen years and tens of billions of dollars later, NASA enlisted the help of 15 other nations and promised that the station would be complete by 2005. The latest NASA plans say it will be finished by the end of this decade. And it had better be, because in 2010 the shuttles will be decommissioned.

It is easy to mock the ISS, with its golf-playing astronauts, its Pizza Hut deliveries, its drunken astronauts and countless malfunctions. But you have to ask yourself: why is it so easy to mock it? Perhaps because it really is risible?

Robert Park, the physicist at the University of Maryland who I mentioned earlier and who has consistently been one of the sanest voices on space exploration, summed this up very recently in a remark with which I want to leave you. He said: ‘There is a bold, adventurous NASA that explores the universe. That NASA had a magnificent week. Having traveled 423 million miles since leaving Earth, the Phoenix Mars Lander soft-landed in the Martian arctic. Its eight-foot backhoe will dig into the permafrost subsoil to see if liquid water exists. There is another NASA that goes in circles on the edge of space. That NASA is having a problem with the toilet on the ISS. I need not go into detail to explain what happens when a toilet backs up in zero gravity - it defines ugly.’

Which vision of space exploration would you rather have?

Sunday, June 15, 2008

A sound theory?

[Here, because it will soon vanish behind a subscriber wall, is my latest Muse for Nature News.]

A new theory suggests a natural basis for our preference for musical consonance. But does such a preference exist at all?

What was avant-garde yesterday is often blandly mainstream today. But this normalization doesn’t seem to have happened to experiments in atonalism in Western music. A century has passed since composer Arnold Schoenberg and his supporters rejected tonal organization, yet Schoenberg’s music is still considered by many to be ‘difficult’ at best, and a cacophony at worst.

Could this be because the dissonances characteristic of Schoenberg’s atonal compositions conflict with some fundamental human preference for consonance, embedded in the very way we perceive musical sound? That’s what his detractors have sometimes implied, and it might be inferred also from a new proposal for the origins of consonance and dissonance advanced in a paper by biomathematicians Inbal Shapira Lots and Lewi Stone of Tel Aviv University in Israel, published in the Journal of the Royal Society Interface [1].

Shapira Lots and Stone suggest that a preference for consonance may be hard-wired into the way we hear music. The reason that we prefer two simultaneous tones separated by a pitch interval of an octave or a fifth (seven semitones — the span from the notes C to G, say) rather than ‘dissonant’ intervals such as a tritone (C to F sharp, for instance) is that in the former cases, the ratio of frequencies of the two tones is a simple one: 1:2 for the octave, 2:3 for the fifth. This, the researchers argue, creates robust, synchronized firing of the neural circuits that register the tones.

One reading of this result (although it is one from which the authors hold back) is that Schoenberg’s programme was doomed from the outset because it contravenes a basic physiological mechanism that makes us crave consonance. The reality, however, is much more complicated, both in ways the authors acknowledge and in ways they do not.

Locked in harmony

Here’s the picture Shapira Lots and Stone propose. At the neural level, our response to different pitches seems to be governed by oscillators — either single neurons or small groups of them — that fire and produce an output signal when stimulated by an oscillatory input signal coming from the ear's cochlea. The frequency of the input is the acoustic frequency of the pitch that excites the cochlea, and firing happens when this matches the neural oscillator’s resonant frequency.

A harmonic interval of two simultaneous notes excites two such oscillators. What if they are coupled so that the activity of one can influence that of the other? By considering a biologically realistic form of coupling in which one oscillator can push the other towards the threshold stimulus needed to trigger firing, the researchers calculate that the two oscillators can become ‘mode-locked’ so that their firing patterns repeat with a fixed ratio of periodicities. When mode-locked, the neural responses reinforce each other, which can be deemed to provoke a stronger response to the acoustic stimulus.

Mode-locked synchronization can occur for any frequency ratios of the input signals, but it is particularly stable – the ratio of output frequencies stays constant over a particularly wide range of input frequencies – when the input signals have ratios close to small numbers, such as 1:1, 1:2, 2:3 or 3:4. These are precisely the frequency ratios of intervals deemed to be consonant: the octave, fifth, fourth (C to F), and so on. In other words, neural synchrony is especially easy to establish for these intervals.
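The point that locking is most robust at simple ratios can be illustrated with a toy model. The sketch below is emphatically not the authors' neural model; it is the textbook sine-circle map, a minimal caricature of a periodically driven oscillator, in which the mode-locked plateaus (the 'Arnold tongues') are widest at ratios such as 1:2, 2:3 and 3:4.

```python
import numpy as np

def winding_number(omega, K=0.9, transient=500, iters=3000):
    # Sine-circle map iterated on its lift:
    #   theta_{n+1} = theta_n + omega + (K / 2*pi) * sin(2*pi*theta_n)
    # The winding number (average advance per step) sits at a fixed rational
    # value over a whole interval of omega wherever the map is mode-locked.
    theta = 0.0
    for _ in range(transient):
        theta += omega + (K / (2 * np.pi)) * np.sin(2 * np.pi * theta)
    start = theta
    for _ in range(iters):
        theta += omega + (K / (2 * np.pi)) * np.sin(2 * np.pi * theta)
    return (theta - start) / iters

# Sweep the bare frequency ratio and measure how wide each locked plateau is.
omegas = np.linspace(0.0, 1.0, 2001)
w = np.array([winding_number(om) for om in omegas])

for p, q in [(1, 2), (2, 3), (3, 4), (5, 7)]:
    plateau = np.sum(np.abs(w - p / q) < 1e-3)
    print(f"{p}:{q} locked over roughly {plateau} of {len(omegas)} sweep points")
```

Running the sweep shows the 1:2 plateau dwarfing the one at 5:7, which is the qualitative pattern the consonance argument relies on.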

In fact, the stability of synchrony, judged this way, mirrors the degree of consonance for all the intervals in the major and minor scales of Western music: the major sixth (C-A), major third (C-E) and minor third (C-E flat) are all slightly less stable than the fourth, and are followed, in decreasing order of stability, by the minor sixth (C-A flat), major second (C-D), major seventh (C-B) and minor seventh (C-B flat). One could interpret this as not only rationalizing conventional Western harmony but also supporting the very choice of note frequency ratios in the Western major and minor scales. Thus, the entire scheme of Western music becomes one with a ‘rational’ basis anchored in the physiology of pitch perception.

Natural music?

This is a very old idea. Pythagoras is credited (on the basis of scant evidence) as being the first to relate musical harmony to mathematics, when he noted that ‘pleasing’ intervals correspond to simple frequency ratios. Galileo echoed this idea when he said that these commensurate ratios are ones that do not “keep the ear drum in perpetual torment”.

However, there were some serious flaws in the tuning scheme derived from Pythagoras’s ratios. For one thing, it generated new notes indefinitely whenever tunes were transposed from one key to another – in essence, Pythagorean tuning assigns a different frequency to sharps and their corresponding flats (F sharp and G flat, say), and the result is a proliferation of finely graded notes. What’s more, the major third interval, which was deemed consonant by Galileo’s time, has a frequency ratio of 64:81, which is not particularly simple at all.

The frequency ratios of the various intervals were simplified in the sixteenth century by the Italian composer Gioseffo Zarlino (he defined a major third as having a 4:5 ratio, for example), and the resulting scheme of ‘just intonation’ solved some of the problems with Pythagorean tuning. But the problem of transposition was not fully solved until the introduction of equal temperament, beginning in earnest from around the eighteenth century, which divides the octave into twelve equal pitch steps, called semitones. The differences in frequency ratio between Pythagorean, just and equal-tempered intonation are very small for some intervals, but significant for others (such as the major third). Some people claim that, once you’ve heard the older schemes, equal temperament sounds jarringly off-key.
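For concreteness, here is a quick numerical comparison of the three schemes, using the standard textbook ratios expressed as upper-note over lower-note frequency; nothing here comes from the paper itself.

```python
import math

# Frequency ratios (upper note / lower note) for two intervals in the three
# tuning schemes discussed above.
intervals = {
    "perfect fifth": {"Pythagorean": 3 / 2, "just": 3 / 2, "equal-tempered": 2 ** (7 / 12)},
    "major third": {"Pythagorean": 81 / 64, "just": 5 / 4, "equal-tempered": 2 ** (4 / 12)},
}

def cents(ratio):
    # 1200 cents per octave, i.e. 100 cents per equal-tempered semitone.
    return 1200 * math.log2(ratio)

for name, schemes in intervals.items():
    print(name)
    for scheme, ratio in schemes.items():
        print(f"  {scheme:14s} {ratio:.5f}  ({cents(ratio):6.1f} cents)")
```

The fifth barely moves between schemes (about 702 versus 700 cents), while the major third shifts by more than a fifth of a semitone, which is why that interval is the usual test case.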

In any event, the mathematical and physiological bases of consonance continued to be debated. In the eighteenth century, the French composer Jean-Philippe Rameau rooted musical harmony instead in the ‘harmonic series’ — the series of overtones, with integer multiples of the fundamental frequency, that sound in notes played on any instrument. And the German physiologist Hermann von Helmholtz argued in the nineteenth century that dissonance is the result of ‘beats’: the interference between two acoustic waves of slightly different frequency. If this difference is very small, beats are heard as a periodic rise and fall in the volume of the sound. But as the frequency difference increases, the beating gets faster, and when it exceeds about 20 hertz it instead creates an unpleasant, rattling sensation called roughness. Because real musical notes are complex mixtures of many overtones, there are several potential pairs of slightly detuned tones for any two-note chord. Helmholtz showed that beat-induced roughness is small for consonant intervals of such complex tones, but larger for dissonant intervals.
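Helmholtz's beat argument can be caricatured in a few lines. The sketch below is my own simplification: it takes six harmonics per tone, an arbitrary reference pitch near C4, and a fixed 20–120 Hz 'roughness' band, whereas real roughness models weight beats by the critical bandwidth, which varies with register.

```python
import numpy as np

def partials(fundamental, n=6):
    # First n harmonics of an idealized complex tone.
    return fundamental * np.arange(1, n + 1)

f_low = 261.63  # roughly C4, an arbitrary choice for illustration
intervals = {
    "perfect fifth (ratio 3:2)": 3 / 2,
    "tritone (ratio 2**(6/12))": 2 ** (6 / 12),
}

for name, ratio in intervals.items():
    a, b = partials(f_low), partials(f_low * ratio)
    beat_rates = np.abs(a[:, None] - b[None, :]).ravel()
    # Crude roughness proxy: beats too fast to hear as a loudness wobble but
    # still close enough in frequency to interfere.
    rough = np.sum((beat_rates > 20) & (beat_rates < 120))
    print(f"{name}: {rough} pairs of partials beating in the 20-120 Hz band")
```

Even with these crude settings, the just fifth produces no pairs of partials in the roughness band while the tritone produces several, which is the pattern Helmholtz described.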

Shapira Lots and Stone argue rightly that their explanation for consonance can explain some aspects that Helmholtz’s cannot. But the reverse is true too: modern versions of Helmholtz’s theory can account for why the perception of roughness depends on absolute as well as relative pitch frequencies, so that even allegedly consonant intervals sound gruff when played in lower registers.

Good vibrations


There are more important reasons why the new work falls short of providing a full account of consonance and dissonance. For one thing, these terms have more than a single meaning. When Shapira Lots and Stone talk of ‘musical dissonance’, they actually mean what is known in music cognition as ‘sensory dissonance’ – the sensation of roughness. Musical dissonance is something else, and a matter of mere convention. As I say, the major third interval that now seems so pleasing to us was not recognized as consonant until the Renaissance, and only the octave was deemed consonant before the ninth century. And sensory dissonance is itself a poor guide to what people will judge to be pleasing. It's not clear, for example, that the fourth is actually perceived as more consonant than the major third [2]. And the music of Ravel and Debussy is full of ‘dissonant’ sixths, major sevenths and ninths that now seem rather lush and soothing.

But fundamentally, it isn’t clear that we really do have an intrinsic systematic preference for consonance. This is commonly regarded as uncontentious, but that’s far from true. It is certainly the case, as Shapira Lots and Stone say, that the musical systems of most cultures are based around the octave, and that intervals of a fifth are widespread too. But it’s hard to generalize beyond this. The slendro scale of Indonesian gamelan music, for instance, divides the octave into five roughly equal and somewhat variable pitch steps, with none of the resulting intervals corresponding to small-number frequency ratios.

Claims that infants prefer consonant intervals over dissonant ones [3] are complicated by the possibility of cultural conditioning. Babies can hear and respond to sound even in the womb, and they have a phenomenal capacity to assimilate patterns and regularities in their environment. A sceptical reading of experiments on infants and primates might acknowledge some evidence that both the octave and the fifth are privileged, but nothing more [4]. My guess is that the ‘neural synchrony’ argument, of which Shapira Lots and Stone offer the latest instalment, is on to something, but that harmony in Western music will turn out to lean more heavily on nurture than on nature.

References

1. Shapira Lots, I. and Stone, L. J. R. Soc. Interface doi:10.1098/rsif.2008.0143
2. Krumhansl, C. L. Cognitive Foundations of Musical Pitch (Oxford University Press, 1990).
3. Schellenberg, E. G. and Trehub, S. E. Psychol. Sci. 7, 272–277 (1996).
4. Patel, A. Music, Language, and the Brain (Oxford University Press, 2008).
Beauty and function

Brian Appleyard has a nice blog about my book Universe of Stone. He says “Ball, in preferring earlier, starker Gothic to the later more decorative variety, teeters on the brink of the fallacy that has dogged architectural criticism of the last hundred years - the idea there is some necessary and rational connection between clearly expressed function and beauty.” I can see what he means, and why he may have got this impression. But my own preferences here are purely aesthetic: I find the profusion of crockets and the excesses of Flamboyant Gothic often mere clutter, as though the builders had lost faith in letting blank stone speak for itself. I do discuss in the book the Platonic notion that links beauty to intelligibility and order, an issue nicely dealt with in Umberto Eco’s book Art and Beauty in the Middle Ages. But I don’t necessarily intend to imply any advocacy of this position.

Brian’s point is a reminder, however, that I should take care not to get too snobbish and purist about English and Late Gothic vaulting, with its lunatic mosaics of tiercerons and liernes and its fluted fans. These have their own over-enthusiastic charm, and we should just sit back and enjoy it.

Tuesday, June 10, 2008

You’re not a molecule, but sometimes you’re a statistic

The editorial in the latest issue of Nature, written by me, could in its edited form (my original draft is below) seem to present a capitulation to the view of social science advocated by Steve Fuller at Warwick, who has previously been highly critical of the statistical perspective discussed in my book Critical Mass. (My response to Fuller is here.) But that’s not really how it is. The pull quote (“The goal of social science is not simply to understand how people behave in large groups but to understand what motivates individuals to behave the way they do.”) is equally true in reverse, which is what Fuller seems blind to. I’m more than happy to make explicit what statistical ‘laws’ overlook. But to deny that group behaviour matters, or that it can differ from that predicted by linear extrapolation from individuals, is to deny the ‘social’ in social science, which seems to me a far more egregious oversight.

Fair point from the editor, though: it wouldn’t actually be hard at all to improve on Mill’s words, in the sense of leavening the Victorian stodge. But I hope the editorial doesn’t now seem to be implicitly critical of the González et al. paper that motivated it, on the grounds that it focuses on the masses and not the individual. This paper does reveal information about both. That very issue, however, has provoked an absurd level of hysteria in the wake of the news story we ran. It seems some people who haven’t bothered to read the paper are concerned about privacy. Makes you wonder what they have to hide (not that anyone would be finding out in any case, given that the data were rendered anonymous). Do these people ever stop to think what is happening to the data every time they make a purchase on their credit cards?

******

“Events which in their own nature appear most capricious and uncertain and which in any individual case no attainable degree of knowledge would enable us to foresee, occur, when considerable numbers are taken into account, with a degree of regularity approaching to mathematical.” It would be hard to improve on John Stuart Mill’s words to encapsulate the regularities found in human mobility patterns on page 779 of this issue. Who would have thought that something as seemingly capricious as the matter of where we go during our daily lives could yield such lawfulness?

One of the remarkable features of this work is not the results, however, but the methodology. Social scientists have long struggled with a paucity of hard data about human activities – social networks, say, movement patterns. Self-reporting is notoriously unreliable and labour-intensive. The use, in this case, of mobile phone networks to track individuals has supplied a data set of proportions almost unheard of for such a complex aspect of behaviour: over 16 million ‘hops’ for 100,000 people. The resulting statistics show a strikingly small scatter, giving grounds for confidence in the mathematical laws they disclose.

This adds to the examples of information technologies offering tools to the social scientist that provide a degree of quantification and precision comparable to the so-called ‘hard’ sciences. Community network structures can be derived from, say, email transmissions or automated database searches of scientific collaboration. Online schemes can even enable genuinely experimental study of behaviour in large populations, complete with control groups and tunable parameters.

Making sense of these data sets may require a rather different set of skills from the conventional statistical approaches used in the social sciences, which is why it is no surprise that studies like the present one are often conducted by those trained in the physical sciences, where there is a long tradition of investigating ‘complex systems’ of interacting entities. One view might be that this lends some prescience to the suggestion of sociologist George Lundberg in 1939: “It may be that the next great developments in the social sciences will come not from professed social scientists but from people trained in other fields.” Lundberg was a positivist eager for his field to adopt the methods of the natural sciences.

The ‘physicalization’ of the social sciences needs to be regarded with some caution, however. While some social scientists aim to understand the ways people behave in large groups, others insist that ultimately the goal is not to uncover bare statistical laws and regularities but to gain insight into what motivates individuals to behave the way they do. It is not clear that universal scaling functions can offer that: however vast the data set, the inverse problem of deriving the factors that produce it remains as challenging as ever. Statistical regularities may conjure up images of Adolphe Quetelet’s homme moyen, the ‘average man’ who not only tends to deny the richness of human behaviour but even threatens to impose a stifling behavioural norm.

It would be wrong to imply that the interest of these findings is restricted to the conventional boundaries of the social sciences. Epidemiologists, for instance, have traditionally been forced to work with very simple descriptions of dispersal and contact, for example based on diffusive models, for lack of any hard evidence to the contrary. But recent work has made it very clear that the topology and quantitative details of contact networks can have a qualitative impact on the transmission of disease. There is sure also to be commercial interest in information about patterns of usage for portable electronics, while the nature of mass human movement could inform urban planning and the development of transportation networks.

But for the social sciences proper, the latest results suggest both an opportunity and a challenging question: how much of social behaviour do we capture in statistical regularities, and how much do we overlook?

Wednesday, June 04, 2008

Yes, I do read my reviews

‘When the laws of physics defy the science of storytelling’ says the headline. Whoops, I’m in for it. But not completely. Ed Lake’s review of The Sun and Moon Corrupted in the Telegraph last weekend was not a complete stinker; I think it is what one calls ‘mixed’. He calls it a ‘fine piece of pop science’, and says that I ‘manage to deliver a surprising amount of actual science.’ Uh-oh – seems he detects a Djerassi-like agenda to sneak science in through the literary back door. Then we hear about superheroes and the X-Men and a ‘Dan Brown novel with weird science in place of crank Christology’, and I sense I’ve failed to land in the right field here. Can’t exactly blame anyone else for that, but let me just say now: I really don’t care if you learn any science from this book or not. Not a jot.

I’m not about to indecorously defend myself from criticism here. Ed Lake made a considered judgement, and that’s fine. He said some nice things, and some useful things, and he did a good job of conveying the essence of the plot. (I don’t, incidentally, take ‘overripe gothicism’ as a criticism, and I’m not quite sure if he meant it to be – it’s unclear if he wanted less of that, or more.) It’s just an interesting awakening to the world of fiction reviewing, where one unfortunately can’t say ‘this particular criticism was disproved in Physical Review Letters in 1991’. One person’s meat is another person’s demon-haunted brew from the foul swamps of Transylvania.

Still, the New Humanist liked it.

Friday, May 30, 2008

Fuelling the sceptics?
[Here’s the long version of my Lab Report column in the June issue of Prospect.]

Has the Intergovernmental Panel on Climate Change (IPCC) got its numbers wrong? That’s what a recent paper in Nature seems to be saying, to the delight of climate sceptics everywhere. Whereas the IPCC report forecast a rise in global mean temperature of around 0.2-0.3 °C per decade, researchers in Germany found from a sophisticated computer model of climate that temperatures are likely to remain flat until around 2015, as they have done since about 1998.

The sceptics will argue that this shows we don’t have much of a clue about climate, and all the dire forecasts from models count for nothing. That, however, would be like saying that, because we took a wrong turn on the road from London to Edinburgh, we have no idea where Edinburgh is.

There is actually nothing in the new result that conflicts with the IPCC’s position, which has always acknowledged that the poorly understood natural variability of the climate system will superimpose its imprint on the global warming trend. The new findings are an attempt to forecast short-term, decade-scale temperature changes, rather than the longer-term changes usually considered by climate modellers. Over a decade or two, temperatures are much more susceptible to natural variations (which boosted warming in the late 1990s). The current cooling influence is due to weakening of heat-bearing ocean currents such as the Gulf Stream. This may persist for about a decade, but then the warming will resume, and by 2030 it should reconnect with the IPCC predictions.

No reason, then, to throw out all the climate models. Indeed, no climate scientist seems particularly surprised or perturbed by the findings, which simply flesh out the short-term picture. To climate sceptics, this is mere dissembling and backpedalling, although their own determination to undermine the IPCC’s predicted trend never identified anything of the sort. It’s a curious logic that uses climate modelling to discredit climate models.

Science policy maven Roger Pielke Jr of the University of Colorado, a sceptic in the proper sense, has justifiably asked how the models can be validated when they seem to predict one thing one moment and the opposite the next. But the answer is that natural variability compromises any short-term predictions – a frustrating fact of life that demands great care in framing the right questions and drawing conclusions. Certainly, we should remain wary of claims that a few hot summers, or a few more hurricanes, prove that global catastrophe is imminent, just as we should of suggestions that a few relatively cool years rubbish the IPCC’s forecasts.

****

We must be wary too of making global warming a deus ex machina that explains every environmental trend, especially if it’s bad. Droughts and storms worsened by climate change may be playing a small part in the global food crisis, but a far bigger problem comes from attempts to mitigate such change with biofuels. In 2006, a fifth of US maize was grown to make ethanol, not food. With the US providing 70 percent of global maize exports, grain prices worldwide were sure to feel the effect.

The rush towards an ill-considered biofuels market is a depressing reminder that the vicissitudes of climate science are as nothing compared with the lack of foresight in the economic system that rides on it. The passion for biofuels in the Bush administration is driven more by a wish for national energy independence than by concerns about climate, while farmers embrace them largely for profit motives. But science has played a part in condoning this shaky vision. It’s a little late now for some scientists to be explaining that of course the benefits will only be felt with next-generation biofuels, which will make much more efficient use of plant matter.

Biofuels aren’t the only reason for soaring food prices. Population rise is playing its ever baleful part, as is the increase in oil prices, which makes food costlier to produce and transport. This is a less simple equation than is often implied, because growing crops for energy introduces a new economic coupling between oil and food: escalating oil prices make it advantageous for farmers to switch to energy crops. The consequences of this new dependency in two vast sectors of the economy do not yet seem to have been carefully evaluated.

*****

Chinese geoscience blotted its record when its bold claim to be able to predict earthquakes was followed in 1976 by the devastating and unforeseen Tangshan quake that killed several hundred thousand. The death toll of the recent magnitude 7.9 quake in Sichuan province may ultimately approach a tenth of that. The basic laws of mechanics seem to preclude accurate forecasting by monitoring geological faults, no matter how closely, because the size and timing of slippage is inherently unpredictable from information available at the source. But researchers based at Edinburgh think the necessary information could be spread over a far wider area around the fault zone, in the pattern and evolution of stress in surrounding rock. They propose using small human-made seismic waves to map out these stresses, and claim this could enable the time, size and maybe location of earthquakes to be forecast days or even months in advance.

They say a stress-monitoring site consisting of three boreholes 1-2 km deep, fitted out with seismic sources and sensors, could have forecast such a big event in Sichuan even from Beijing, 1000 km away. A monitoring station’s likely price tag of several million dollars dwindles before the cost of damage inflicted by quakes this severe. Despite the notorious record of earthquake prediction, this one looks worth a shot.

Thursday, May 29, 2008

Why we should love logarithms
[More Musement from Nature News.]

The tendency of 'uneducated' people to compress the number scale for big numbers is actually an admirable way of measuring the world.


I'd never have guessed, in the days when I used to paw through my grubby book of logarithms in maths classes, that I'd come to look back with fondness on these tables of cryptic decimals. In those days the most basic of electronic calculators was the size of a laptop and about as expensive in real terms, so books of logarithms were the quickest way to multiply large numbers (see 'What are logarithms').

Of course, logarithms remain central to any advanced study of mathematics. But as they are no longer a practical arithmetic tool, one can’t now assume general familiarity with them. And so, countless popular science books contain potted guides to using exponential notation and interpreting logarithmic axes on graphs. Why do they need to do this? Because logarithmic scaling is the natural system for magnitudes of quantities in the sciences.

That's why a new claim that logarithmic mapping of numbers is the natural, intuitive scheme for humans rings true. Stanislas Dehaene of the Federative Institute of Research in Gif-sur-Yvette, France, and his co-workers report in Science [1] that both adults and children of an Amazonian tribe called the Mundurucu, who have had almost no exposure to the linear counting scale of the industrialized world, judge magnitudes on a logarithmic basis.

Down the line


The researchers presented their subjects with a computerized task in which they were asked to locate on a line the points that best signified the number of various stimuli (dots, sequences of tones or spoken words) in the ranges from 1 to 10 and from 10 to 100. One end of the line corresponded to 1, say, and the other to 10; where on this line should 6 sit? The results showed that the Amazonians had a clear tendency to apportion the divisions logarithmically, which means that successive numbers get progressively closer together as they get bigger.
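To make the task concrete, here is where an idealized linear responder and an idealized logarithmic responder would place a few numbers on the 1-to-10 line. This is a back-of-envelope sketch of the two hypotheses, not the response model fitted in the paper.

```python
import math

def linear_position(n, lo=1, hi=10):
    # Equal spacing: each unit step moves the mark by the same amount.
    return (n - lo) / (hi - lo)

def log_position(n, lo=1, hi=10):
    # Equal spacing of ratios: each equal multiplicative step moves the mark equally.
    return (math.log(n) - math.log(lo)) / (math.log(hi) - math.log(lo))

for n in [2, 3, 6, 9]:
    print(f"{n}: linear {linear_position(n):.2f}, logarithmic {log_position(n):.2f}")
```

On this reading a logarithmic responder puts 6 nearly four-fifths of the way along the line, whereas a linear responder puts it just past the middle; that gap is what the experiment is designed to detect.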

The same behaviour has previously been seen in young children from the West [2]. But adults instead use a linear scaling, in which the distance between each number is the same irrespective of their magnitude. This could be because adults are taught that is how numbers are 'really' distributed, or it could be that some intrinsic aspect of brain development creates a greater predisposition to linear scaling as we mature. To distinguish between these possibilities, Dehaene and his colleagues tested an adult population that was 'uncontaminated' by schooling.

The implication of their finding, they say, is that "the concept of a linear number line seems to be a cultural invention that fails to develop in the absence of formal education". If this study had been done in the nineteenth century (and aside from the computerized methodology, it could just as easily have been), we could feel pretty sure that it would have been accompanied by some patronizing comment about how 'primitive' people have failed to acquire the requisite mathematical sophistication.

Today's anthropology is more enlightened, and indeed Dehaene and his team have previously revealed the impressive subtlety of Mundurucu concepts of number and space, despite the culture having no words for numbers greater than five [3,4].

Everything in perspective


But in any event, the proper conclusion is surely that it is our own intuitive sense of number that is somehow awry. The notion of a decreasing distance between numbers makes perfect sense once we think about that difference in proportionate terms: 1,001 is clearly more akin to 1,000 than 2 is to 1. We can even quantify those degrees of likeness. If we space numbers along a scale such that the distances between them reflect the proportion by which they increment the previous number, then the distance of a number n from 1 is given by the harmonic series, the sum of 1 + 1/2 + 1/3 + 1/4 and so on up to 1/n. This distance is roughly proportional to the logarithm of n.
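A quick check of that last claim, using nothing more than the standard result that the harmonic number grows like the natural logarithm plus a constant (the Euler-Mascheroni constant, about 0.577):

```python
import math

def harmonic(n):
    # H_n = 1 + 1/2 + 1/3 + ... + 1/n
    return sum(1.0 / k for k in range(1, n + 1))

for n in [10, 100, 1000, 10_000]:
    print(f"n = {n:6d}   H_n = {harmonic(n):.4f}   ln(n) = {math.log(n):.4f}   "
          f"difference = {harmonic(n) - math.log(n):.4f}")
```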

This, it is often said, is why life seems to speed up as we get older: each passing year is a smaller proportion of our whole life. In perceptual terms, the clock ticks with an ever faster beat.

But wait, you might say – surely 'real' quantities are linear? A kilometre is a kilometre whether we have travelled 1 or 100 already, and it takes us the same time to traverse at constant speed. Well, yes and no. Many creatures execute random walks or the curious punctuated random walks called Lévy flights [watch out for next week's issue of Nature on this...], in which migration over each fixed increment in distance takes an ever longer time. Besides, we can usually assume that an animal capable of covering 100 kilometres could manage 101, but not necessarily that one capable of 1 kilometre could manage 2 kilometres (try it with a young child).

Yet the logarithmic character of nature goes deeper than that. For scientists, just about all magnitude scales are most meaningful when expressed logarithmically, a fact memorably demonstrated in the vision of the Universe depicted in the celebrated 1977 film Powers of Ten. The femtometre (10^-15 metres) is the scale of the atomic nucleus, the nanometre (10^-9 metres) that of molecular systems, the micrometre (10^-6 metres) the scale of the living cell, and so on. Cosmological eras demand ever finer logarithmic divisions of time as we look back towards the Big Bang. The immense variation in the size of earthquakes is tamed by the logarithmic magnitude scale, in which an increase of one unit of magnitude corresponds to a tenfold increase in the amplitude of ground shaking (and roughly a thirtyfold increase in the energy released). The same logarithmic compression is at work in the decibel scale for sound intensity and the pH scale of acidity.
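
All of these scales do the same arithmetic job: they replace an unwieldy ratio with its logarithm, so that each step on the scale corresponds to multiplying rather than adding. A toy illustration (the reference values are the conventional ones; the snippet itself is mine):

import math

def decibels(intensity, reference=1e-12):
    # Sound level in dB: ten times the base-10 log of an intensity ratio
    # (reference is roughly the threshold of hearing, in W/m^2).
    return 10 * math.log10(intensity / reference)

def pH(hydrogen_ion_concentration):
    # Acidity: minus the base-10 log of the hydrogen-ion concentration in moles per litre.
    return -math.log10(hydrogen_ion_concentration)

print(decibels(1.0))   # a ratio spanning 12 orders of magnitude becomes a manageable 120 dB
print(pH(1e-7))        # neutral water, with [H+] = 1e-7 M, comes out at pH 7.0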

Law of the land

Indeed, the relationship between earthquake magnitude and frequency is one of the best known of the ubiquitous natural power laws, in which some quantity is proportional to the nth power of another. These relationships are best depicted with logarithmic scaling: on logarithmic axes, they look linear. Power laws have been discovered not only for landslides and solar flares but also for many aspects of human culture: word-use frequency, say, or the size-frequency relationships of wars, towns and website connections.
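
The reason the straightening works is simple algebra (a generic sketch, not tied to any particular data set): if y = c x^n, then log y = log c + n log x, so on log-log axes the curve becomes a straight line whose slope is the exponent, and fitting that line recovers n.

import math

# Synthetic power-law data: y = 3 * x**2.5
xs = [1, 2, 5, 10, 20, 50, 100]
ys = [3 * x**2.5 for x in xs]

# Take logs of both variables and fit a straight line by least squares;
# the slope of that line is the power-law exponent.
lx = [math.log10(x) for x in xs]
ly = [math.log10(y) for y in ys]
mx = sum(lx) / len(lx)
my = sum(ly) / len(ly)
slope = sum((a - mx) * (b - my) for a, b in zip(lx, ly)) / sum((a - mx) ** 2 for a in lx)
print(round(slope, 3))   # 2.5 -- the exponent comes straight back out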

All these things could be understood much more readily if we could continue to use the logarithmic number scaling with which we are apparently endowed intuitively. So why do we devote so much energy to replacing it with linear scaling?

Linearity betrays an obsession with precision. That might incline us to expect an origin in engineering or surveying, but actually it isn't clear that this is true. The greater the number of units in a structure's dimension, the less that small errors matter: a temple intended to be 100 cubits long could probably accommodate 101 cubits, and in fact often did, because early surveying methods were far from perfect. And in any event, such dimensions were often determined by relative proportions rather than by absolute numbers. It seems more plausible that a linear mentality stemmed from trade: if you're paying for 100 sheep, you don't want to be given 99, and the seller wants to make sure he doesn't give you 101. And if traders want to balance their books, these exact numbers matter.

Yet logarithmic thinking doesn't go away entirely. Dehaene and his colleagues show that it remains even in Westerners for very large numbers, and it is implicit in the skill of numerical approximation. Counting that uses a base system, such as our base 10, also demands a kind of logarithmic terminology: you need a new word or symbol only for successive powers of ten (as found both in ancient Egypt and China).
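
Put numerically (a trivial illustration of my own): in a base-10 positional system the number of symbols needed to write a number grows only logarithmically with the number itself, which is the same economy those ancient power-of-ten words and signs provided.

import math

for n in (7, 42, 999, 1000000):
    print(n, len(str(n)), math.floor(math.log10(n)) + 1)
# The two counts agree: writing n in base 10 takes roughly log10(n) + 1 digits.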

All in all, there are good arguments why an ability to think logarithmically is valuable. Does a conventional education perhaps suppress it more than it should?


References

1. Dehaene, S., Izard, V., Spelke, E. & Pica, P. Science 320, 1217–1220 (2008).
2. Booth, J. L. & Siegler, R. S. Dev. Psychol. 42, 189–201 (2006).
3. Pica, P., Lemer, C., Izard, V. & Dehaene, S. Science 306, 499–503 (2004).
4. Dehaene, S., Izard, V., Pica, P. & Spelke, E. Science 311, 381–384 (2006).

Making Hay

It’s a rotten cliché of a title, but you can’t avoid the irony when the scene was pretty much like that above – I don’t know if this picture was taken this year or some previous year, but it sums up the situation at the Hay Literary Festival on Sunday and Monday this week. (Looking outside, things may not have got much better.) Strangely, this didn’t matter. One great thing about Hay is that it takes place in a complex of tents connected by covered walkways, so you can stroll around and stay dry even if it is pelting down. It was cold enough to see your breath at midday, but no one seemed to be complaining, and the crowds kept coming. These Guardian readers are hardier folk than you might think.

The mud was another matter. The authors’ car park was a lake, so there was no avoiding a trek through a reconstruction of the Somme. We came with a silver-grey car; now it has the colour and texture, if not quite the smell, of a farmyard. Note to self: take wellies next time.

Still, fun for all. Good food, no plague of cheap commercialism, and a fantastic setting even if you can’t see it through the driving rain. I was there with my family, so had limited opportunity to catch talks, but I was impressed by David King’s passionate determination to get beyond the rearranging-deckchairs approach to climate change. As David is in a position to make things happen, this is good news for us all. In particular, he advocates a massive increase in funding of research to make solar energy affordable enough to be a routine aspect of new building, enforced by legislation. David sometimes gets flak from environmental groups for not going far enough (not to mention his endorsement of nuclear power), but he is far more outspoken and committed than many, if not most, of the scientists in such positions of influence. I must admit that when David was appointed Chief Scientific Adviser, I took the simplistic view that he is a nice chap and a good scientist but that heterogeneous catalysis seemed an awfully long way from policy advising. Eat those words, lad – he’s shown exactly how a scientific adviser can make a real difference on important issues.

He was there primarily to talk about his book on climate change with Gabrielle Walker, The Hot Topic. But he and I, along with Steve Jones, sat on a discussion panel for Radio 4’s Material World, recorded in front of the Hay audience for broadcast on Thursday 29 May (listen out for the rain pelting on canvas). We talked about what happens to science when it intersects with broader culture – yes, vague huh? While my co-panellists are old hands at finding incisive responses to whatever is thrown at them, I sometimes felt that I was mouthing platitudes. No doubt the capable MW team will have edited it down to a model of eloquence.

But my main excuse for lounging in the artists’ luxurious Green Room (i.e. it had heating and a more or less dry carpet) was that I was talking about my book on Chartres cathedral, Universe of Stone – as it turned out, to an improbably large audience in the cinema tent. Perhaps they thought that ‘Universe of Stone’ was a blockbuster movie. (And maybe it should be – good title, no?) Anyway, they were very kind, and made me want to go back.

Wednesday, May 14, 2008

Me, me, me

Well, what do you expect in a blog, after all? Here, then, is some blatant advertising of forthcoming events at which I’m speaking or participating. I’ve been trying to rein back on this kind of thing, but seem to have acquired a cluster of bookings in the near future.

You’ve already missed the seminar on new materials at King’s College, London, on 12 May – a very interesting collection of people assembled by Mark Miodownik, whose Materials Library is a very fabulous thing to behold. I hope to post my talk on my website soon.

On 27 May I am talking about my book Universe of Stone at the Hay Festival. And it seems that I’ll be participating in a discussion about science books for Radio 4’s Material World, which will be recorded at Hay the previous day. The other panellists are Sir David King and Steve Jones.

On 28 May I will be chairing a public discussion on synthetic biology at the Science Museum’s Dana Centre, called ‘Making Life’.

In June I have what is looking ominously like a residency at the Royal Institution, starting with a discussion of my novel The Sun and Moon Corrupted at the newly launched RI book club on 9 June. I will be coming along to face the critics after the discussion – do come and be gentle with me.

The following Monday, 16 June, I will be attempting to persuade the RI audience why human spaceflight is seldom of any scientific worth and is best left to private entrepreneurs (see here). The counter-argument will be ably put by Kevin Fong of UCL.

Then on 10 July I’ll be talking at the RI about my book Elegant Solutions, published by the Royal Society of Chemistry, which looked at the issue of beauty in chemistry experiments (details here). This is an event organized to mark the book’s receipt of the 2007 Dingle Prize for communicating the history of science and technology from the British Society for the History of Science.

Then I’m having a holiday.

Friday, May 09, 2008


Mixed messages

Last night I drove into the traffic hell that is Canary Wharf to see a play by the marvellous Shifting Sands in a rather nice little theatre marooned on the Isle of Dogs. (I would advertise it, but this is the end of their run. I’m collaborating with Shifting Sands on a production early next year based on the life of Paracelsus and funded by the Wellcome Trust.)

I’ve never ventured into E14 by car before, for good reason. Here is a traffic system that radiates sneering contempt, confronting you with a morass of flyovers, tunnels and slip roads labelled only with signs saying things like ‘Canary Wharf Depot A’. One wrong turn and I was in a tunnel that offered no escape until it spat me back out at the Rotherhithe Tunnel.

My point is this. You emerge, dazed, anxious and disorientated, from some underground cavern to find yourself on a busy roundabout, and in the middle is the structure shown above. Traffic lights point in all directions; some beckon in green, some prohibit in red, some tantalize in amber.

‘You can’t be serious’, I muttered, and several moments passed before I twigged that indeed this is not a serious device for directing traffic, but, can you credit it, an art installation. At least, I could only assume so, but I decided to quiz the bar attendant at the theatre. She professed ignorance of the roads, but a local bloke sitting at the bar chipped in. The installation cost £140,000, he said, and he was living in a flat that overlooked it when it was first installed. ‘I’ve never seen so many accidents’, he said.

The stupidity of it is so breathtaking that it is almost a work of conceptual art itself. I try to picture the local council meeting at which the design was proposed. ‘Yes, I want to use real traffic lights. By utterly confusing and bewildering the driver, you see, it will make a comment on the complexity of everyday life.’ ‘Well, that sounds like a brilliant idea. Here’s 140 grand.’

This is all merely a slender excuse to advertise this nice preprint by Stefan Lämmer and Dirk Helbing on self-organized traffic lights that replace central control with local autonomy. A self-organized approach could in principle let traffic flow considerably more efficiently, as I discussed some time ago in a Nature article.
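
For flavour, here is a deliberately crude caricature of the idea (entirely my own toy, far simpler than Lämmer and Helbing's actual scheme): each junction decides for itself, granting green to whichever approach has the longer queue once a minimum green period has elapsed, with no central controller anywhere.

import random

def step(queues, green, green_time, min_green=5, service_rate=2):
    # Cars arrive at random on both approaches; only the green approach discharges.
    for road in queues:
        queues[road] += random.randint(0, 1)
    queues[green] = max(0, queues[green] - service_rate)
    green_time += 1
    # Local rule: switch only if the waiting queue is longer and the minimum green has passed.
    red = 'NS' if green == 'EW' else 'EW'
    if green_time >= min_green and queues[red] > queues[green]:
        return red, 0
    return green, green_time

queues, green, green_time = {'NS': 0, 'EW': 0}, 'NS', 0
for _ in range(200):
    green, green_time = step(queues, green, green_time)
print(queues)   # queues stay short because each signal adapts to local demand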

But I fear that E14 is beyond any redemption that self-organization can offer.

Sunday, May 04, 2008

When worlds collide

[This is the pre-edited version of my
latest Muse article for Nature News.]

Worries about an apocalypse unleashed by particle accelerators are not new. They have their source in old myths, which are hard to dispel.

When physicists dismiss as a myth the charge that the Large Hadron Collider (LHC) will trigger a process that might destroy the world, they are closer to the truth than they realise. In common parlance a myth has come to denote a story that isn’t true, but in fact a myth is a story that is psychologically true. A real myth is not a false story but an archetypal one. And the archetype for this current bout of scare stories is obvious: the Faust myth, in which a hubristic individual unleashes forces he or she cannot control.

The LHC is due to be switched on in July at CERN, the European centre for particle physics near Geneva. But some fear that the energies released by colliding subatomic particles will produce miniature black holes that will engulf the world. Walter Wagner, a resident of Hawaii, has even filed a lawsuit to prevent the experiments.

As high-energy physicist Joseph Kapusta points out in a new preprint [1], such dire forebodings have accompanied the advent of other particle accelerators in the past, including the Bevalac in California and the Relativistic Heavy Ion Collider (RHIC) on Long Island. In the latter case, newspapers seized on the notion of an apocalyptic event – the UK’s Sunday Times ran a story under the headline ‘The final experiment?’

The Bevalac, an amalgamation of two existing accelerators at the Lawrence Berkeley Laboratory, was created in the 1970s to investigate extremely dense states of nuclear matter – stuff made from the compact nuclei of atoms. In 1974 two physicists proposed that there might be a hitherto unseen and ultra-dense form of nuclear matter more stable than ordinary nuclei, which they rather alarmingly dubbed ‘abnormal’. If so, there was a small chance that even the tiniest lump of it could keep growing indefinitely by cannibalizing ordinary matter. Calculations implied that a speck of this pathological form of abnormal nuclear matter made in the Bevalac would sink to the centre of the Earth and then expand to swallow the planet, all in a matter of seconds.

No one, Kapusta says, expected that abnormal nuclear matter, if it existed at all, would really have this voracious character – but neither did anyone know enough about the properties of nuclear matter to rule it out absolutely. According to physicists Subal Das Gupta and Gary Westfall, who wrote about the motivations behind the Bevalac to mark its termination in 1993[2], “Meetings were held behind closed doors to decide whether or not the proposed experiments should be aborted.”

The RHIC, at the Brookhaven National Laboratory, began operating in 1999, primarily to create another predicted superdense form of matter called a quark-gluon plasma. This is thought to have been what the universe consisted of less than a millisecond after the Big Bang. Following an article about it in Scientific American, worries were raised about whether matter this dense might collapse into a mini-black hole that would, once again, grow to engulf the planet.

Physicist Frank Wilczek dismissed this idea as “incredible”, but at the same time he raised a new possibility: the creation of another super-dense, stable form of matter called a strangelet that could again be regarded as a potential Earth-eater. In a scholarly article published in 2000, Wilczek and several coworkers analysed all the putative risks posed by the RHIC, and concluded that none posed the slightest real danger[3].

But isn’t this just what we’d expect high-energy physicists to say? That objection was raised by Richard Posner, a distinguished professor of law at the University of Chicago[4]. He argued that scientific experiments that pose potentially catastrophic risks, however small, should be reviewed in advance by an independent board. He recognized that current legal training provides lawyers and judges with no expertise for making assessments about scientific phenomena “of which ordinary people have no intuitive sense whatsoever”, and asserted that such preparation is therefore urgently needed.

It seems reasonable to insist that, at the very least, such research projects commission their own expert assessment of risks, as is routinely done in some areas of bioscience. The LHC has followed the example of the RHIC in doing just that. A committee has examined the dangers posed by strangelets, black holes, and the effects of possible ‘hidden’ extra dimensions of space. In 2003 it declared that “we find no basis for any conceivable threat” from the accelerator’s high-energy collisions[5].

These scare stories are not unique to particle physics. When in the late 1960s Soviet scientists mistakenly believed they had found a new, waxy form of pure water called polywater, one scientist suggested that it could ‘seed’ the conversion of all the world’s oceans to gloop – a scenario memorably anticipated in Kurt Vonnegut’s 1963 novel Cat’s Cradle, where the culprit was instead a new form of ice. Superviruses leaked from research laboratories are a favourite source of rumour and fear – this was one suggestion for the origin of AIDS. And nanotechnology was accused of hastening doomsday thanks to one commentator’s fanciful vision of grey goo: replicating nanoscale robots that disassemble the world for raw materials from which to make copies of themselves.

In part, the appeal of these stories is simply the frisson of an eschatological tale, the currency of endless disaster movies. But it’s also significant that these are human-made apocalypses, triggered by the heedless quest for knowledge about the universe.

This is the template that became attached to the Faust legend. Initially a folk tale about an itinerant charlatan with roots that stretch back to the Bible, the Faust story was later blended with the myth of Prometheus, who paid a harsh price for daring to challenge the gods because of his thirst for knowledge. Goethe’s Faust embodied this fusion, and Mary Shelley popularized it in Frankenstein, which she explicitly subtitled ‘The Modern Prometheus’. Roslynn Haynes, a professor of English literature, has explored how the Faust myth shaped a common view of the scientist as an arrogant seeker of dangerous and powerful knowledge[6].

All this sometimes leaves scientists weary of the distrust they engender, but Kapusta points out that it is occasionally even worse than that. When Das Gupta and Westfall wrote about the concerns over abnormal nuclear matter raised by the Bevalac, they were placed on the FBI’s ‘at risk’ list of individuals thought to be potential targets of the Unabomber. Between 1978 and 1995, this former mathematician, living in a forest shack in Montana, sent bombs through the US mail to scientists and engineers he considered to be working on harmful technologies. A lawsuit by a disgruntled Hawaiian seems mild by comparison.

And yet… might there be anything in these fears? During the Manhattan Project that developed the atomic bomb, several of the scientists involved could not be entirely sure, until they saw the mushroom cloud of the Trinity test, that the explosion would not trigger runaway combustion of the Earth’s atmosphere.

The RHIC and LHC have taken far less on trust. But of course the mere acknowledgement of risk implied by commissioning studies to quantify it, along with the fact that it is rarely possible to assign such a risk a strictly zero probability, must itself fuel public concern. And it is well known to risk-perception experts that we lack the ability to rate very rare but very extreme disasters properly, even to the simple extent that we mistakenly feel safer in our cars than in an aeroplane.

That’s why Kapusta’s conclusion that “physicists must learn how to communicate their exciting discoveries to nonscientists honestly and seriously”, commendable though it is, can never provide a complete answer. We need to recognize that these fears have a mythic dimension that rational argument can never wholly dispel.


References

1. Kapusta, J. I. Preprint http://xxx.arxiv.org/abs/0804.4806
2. Das Gupta, S. & Westfall, G. D. Physics Today 46 (May 1993), 34-40.
3. Jaffe, R. L. et al., Rev. Mod. Phys. 72, 1125-1140 (2000).
4. Posner, R. A. Catastrophe: Risk and Response (Oxford University Press, Oxford, 2004).
5. Blaizot, J.-P. et al., ‘Study of potentially dangerous events during heavy-ion collisions at the LHC: Report of the LHC Safety Study Group’, CERN Report 2003-001.
6. Haynes R.D., From Faust to Strangelove: Representations of the Scientist in Western Literature (Johns Hopkins University Press, Baltimore & London, 1994).

Friday, May 02, 2008

Talking about Chartres

There’s a gallery of images and a vodcast for my book Universe of Stone now up on the Bodley Head site: you can find it here.

Thursday, April 24, 2008


Buddha in oils?
[This is the pre-edited version of my latest news story for Nature.]

Painters on the Silk Road may have been way ahead of the Europeans.

Artists working in Afghanistan were using a primitive form of oil paint hundreds of years before it became common practice in Europe, a team of scientists has claimed.

Yoko Taniguchi of the National Research Institute for Cultural Properties in Tokyo and her coworkers have analysed samples of Buddhist paintings in caves at Bamiyan in Afghanistan, made in the mid-seventh and early eighth centuries AD. They say that the paint layers contain pigments apparently bound within so-called drying oils, perhaps extracted from walnuts and poppy seeds.

But Jaap Boon, a specialist in the chemical analysis of art at the Institute for Atomic and Molecular Physics in Amsterdam, the Netherlands, cautions that this conclusion must be seen as tentative until more detailed studies have been done.

The Bamiyan caves sit behind the gigantic statues of Buddha that were destroyed by the Taliban in 2001. The paintings, showing robed Buddhas and mythical creatures, were also defaced but not obliterated. The Bamiyan caves are now a designated UNESCO World Heritage site.

The researchers removed tiny samples of the painted surface (typically less than 1 mm across) for analysis using state-of-the-art techniques. These can reveal the chemical identity of the pigments and the materials used to bind them to a layer of earthen plaster on the cave walls.

Taniguchi’s collaborators used X-ray beams produced by the European Synchrotron Radiation Facility in Grenoble, France, to figure out the composition and crystal structures of pigment particles, deposited in a series of microscopically thin layers. The synchrotron facility produces extremely bright X-ray beams, which are essential for getting enough data from such small samples.

Meanwhile, spectroscopic methods, which identify molecular structures from the way their vibrations cause light absorption, were used to identify the organic components of the paint layers. The findings are described in a paper in the Journal of Analytical Atomic Spectrometry [1].

The researchers found pigments familiar from the ancient world, such as vermilion (red mercury sulphide) and lead white (lead carbonate). These were mixed with a range of binders, including natural resins, gums, possibly animal-skin glue or egg – and oils.

Boon suggests that this variety in itself raises concerns about potential contamination – microorganisms on the rock surface, say, or the fingerprints of people touching the paintings (something encouraged in Buddhist tradition).

He says that other techniques that really pin down what the organic molecules are should be applied before jumping to conclusions. With spectroscopy alone, he says, it can be difficult to tell egg from oils, let alone animal from plant oils.

But Marine Cotte of the Centre of Research and Restoration of the French Museums in Paris, a coauthor of the study, is convinced of the conclusions. She says that oils have an unambiguous spectroscopic signature, and adds that their molecular components have been confirmed by the technique of chromatography.

Oil painting is commonly said to have been invented by the Flemish painter Jan van Eyck and his brother Hubert in the fifteenth century. But while the van Eycks seem to have refined this technique to create stunningly rich and durable colours, the practice of mixing pigments with drying oils is known to be considerably older.

It is first mentioned in the late fifth century by the Byzantine writer Aetius, and a recipe for an oil varnish (in which a drying oil is mixed with natural resins) is listed in an eighth-century Italian manuscript.

In the twelfth century, a German Benedictine monk named Theophilus describes how to make oil paints for painting doors. Oil paints are also known from this period on Norwegian churches.

Drying oils are relatively slow to dry compared with the common medieval binders of egg yolk and size from boiled animal hide, which initially led Western craftsmen to regard them as fit only for rather lowly uses.

So the use of oils in fine art as early as the seventh century is surprising – all the more so for painting on plaster-coated rock, where the translucency of oil paints would not be expected to recommend their use. ‘It doesn’t make a lot of sense to use oils’, says Boon. He says that it would be really difficult to keep the paint in good condition for a long time in an environment like this, exposed to damp, fungi and bacteria.

But Cotte says that the oils are found in deeper layers where contamination would not penetrate, while being laid over an opaque bottom or ‘ground’ layer.

It’s not clear who these artists were, the researchers say. They were probably travelling on the Silk Road between China and the Middle East, and may have been bringing with them specialist knowledge from China.

Cotte says that these studies should aid efforts to preserve the paintings. “It helps you do that if you know what is there”, she explains – this would identify the most appropriate cleaning procedures, for example.

Reference

1. Cotte, M. et al., J. Anal. At. Spectrom. (in press, 2008).

Monday, April 21, 2008

Journeys in musical space
[This is one of the most stimulating things I’ve read for some time (not my article below, published on Nature’s online news site, but the paper it discusses). The paper itself is tough going, but once Dmitri Tymoczko explained to me where it was headed, the implications it opened up are dizzying – basically, that music is an exploration of complex geometries, giving us an intuitive feel for these spaces that we probably couldn’t get from any other kind of sensory input.]

Researchers map out the geometric structure of music.

To most of us, a Mozart piano sonata is an elegant succession of notes. To composer and music theorist Dmitri Tymoczko of Princeton University and his colleagues Clifton Callender and Ian Quinn, it is a journey in multidimensional space that can be described in the language of geometry and symmetry.

In a paper in Science [1], the trio offer nothing less than a way of mapping out all of pitched music (music that is not constructed from unpitched sounds like percussion), whether it is by Monteverdi or Motörhead.

Commenting on the work, mathematician Rachel Wells Hall of Saint Joseph’s College in Philadelphia says that it opens up new directions in music theory, and could inspire composers to explore new kinds of music. It might even lead to the invention of new musical instruments, she says.

Although the work uses some fearsome maths, it is ultimately an exercise in simplification. Tymoczko and colleagues have looked for ways of representing geometrically all the equivalences that musicians recognize between different groups or sequences of notes, so that for example C-E-G and D-F#-A are both major triads, or C-E-G played in different octaves is considered basically the same chord.

By recognizing these equivalences, the immense number of possible ways of arranging notes into melodies and chord sequences can be collapsed from a multidimensional universe of permutations into much more compact spaces. The relationships between ‘musical objects’ made of small groupings of notes can then be understood in geometric terms by mapping them onto the shape of the space. Musical pieces may be seen as paths through this space.

It may sound abstract, but the idea brings together things that composers and musicologists have been trying to do in a fragmentary manner for centuries. The researchers say that all music interpretation involves throwing away some information so that particular musical structures can be grouped into classes. For example, playing ‘Somewhere Over the Rainbow’ in the key of G rather than, as originally written, the key of E flat, involves a different sequence of notes, but no one is going to say it is a different song on that account.

The Princeton researchers say there are five common kinds of transformation like this that are used in judging equivalence in music, including octave shifts, reordering of notes (for example, in inversions of chords, such as C-E-G and E-G-C), and duplications (adding a higher E to those chords, say). These equivalences can be applied individually or in combination, giving 32 different ways in which, say, two chords can be considered ‘the same’.

Such symmetries ‘fold up’ the vast space of note permutations in particular ways, Tymoczko explains. The geometric spaces that result may still be complex, but they can be analysed mathematically and are often intuitively comprehensible.
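
To make the folding concrete, here is a much-simplified sketch of the general idea (mine, not the authors' formalism): write notes as semitone numbers, then throw away octave, duplication and ordering information, so that chords differing only by those transformations land on the same point.

# Notes as semitones above C in MIDI-style numbering: middle C = 60, E = 64, G = 67, etc.
def chord_class(notes):
    # Fold out octave shifts (mod 12), duplications (set) and ordering (sorted tuple).
    return tuple(sorted({n % 12 for n in notes}))

print(chord_class([48, 52, 55]))       # C-E-G low down             -> (0, 4, 7)
print(chord_class([60, 64, 67, 72]))   # C-E-G-C, with a doubled C  -> (0, 4, 7)
print(chord_class([64, 67, 72]))       # E-G-C, the notes reordered -> (0, 4, 7)

Folding further by transposition would also identify (0, 4, 7) with (2, 6, 9), which is the step that makes C-E-G and D-F#-A count as the 'same' major triad.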

“When you’re sitting at a piano”, he says, “you’re interacting with a very complicated geometry.” In fact, composers in the early nineteenth century were already implicitly exploring such geometries through music that could not have been understood using the mathematics of the time.

In these folded-up spaces, classes of equivalent musical objects – three-note chords, say, or three-note melodies – can each be represented by a point. One point in the space that describes three-note chord types (which is cone-shaped) corresponds to major triads, such as C-E-G, another to augmented triads (major chords whose fifth is raised by a semitone), and so on.

Where does this musical taxonomy get us? The researchers show that all kinds of musical problems can be described using their geometric language. For example, it provides a way of evaluating how related different sequences of notes or chords are, and thus whether or not they can be regarded as variations of a single musical idea.

“We can identify ways chord sequences can be related that music theorists haven’t noticed before”, says Tymoczko. For example, he says the approach reveals how a chord sequence used by Claude Debussy in 'L’Après-Midi d’un Faune' is related to one used slightly earlier by Richard Wagner in the prelude to 'Tristan und Isolde' – something that isn’t obvious from conventional ways of analysing the two sequences.

Clearly, Debussy couldn’t have known of this mathematical relationship to Wagner’s work. But Tymoczko says that such connections are bound to emerge as composers explore the musical spaces. Just as a mountaineer will find that only a small number of all the possible routes between two points are actually negotiable, so musicians will have discovered empirically that their options are limited by the underlying shapes and structures of musical possibilities.

“Music theorists have tended to regard the nineteenth-century experiments in harmony as unmotivated whimsy”, says Tymoczko. But his geometric scheme suggests that they were much more rational than that, governed by rigorous rules that their new approach can now uncover.

For example, the scheme supplies a logic for analysing how so-called voice leading works in chord progressions. This describes the way in which a sequence of chords with the same numbers of notes can be broken apart into parallel melodic lines. For example, the progression C-E-G to C-F-A can be thought of as three melodic lines: the E moves to F, and the G to A, with a constant C root. Finding efficient and effective voice-leading patterns has been challenging for composers and music theorists. But in the geometric scheme, a particular step from one chord to another becomes a movement in musical space between two points separated by a well defined distance, and one can discover the best routes.
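
One simple way to put a number on 'efficient' voice leading (an illustrative metric of my own choosing, one of several reasonable distances): pair each voice of the first chord with a voice of the second so that the total movement in semitones is as small as possible.

from itertools import permutations

def voice_leading_distance(chord_a, chord_b):
    # Try every assignment of voices and keep the pairing with the least total motion.
    return min(
        sum(abs(a - b) for a, b in zip(chord_a, perm))
        for perm in permutations(chord_b)
    )

c_major = [60, 64, 67]   # C-E-G
f_major = [60, 65, 69]   # C-F-A
print(voice_leading_distance(c_major, f_major))   # 3: C holds, E moves to F, G moves to A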

This is just one of the ways in which the new theory could not only illuminate existing musical works but could point to new ways of solving problems posed in musical composition, the researchers claim.

Reference
1. Callender, C. et al. Science 320, 346-348 (2008).

Sunday, April 20, 2008

NASA loses its (science) head, Pfizer loses its case
[This is my Lab Report column for the May issue of Prospect.]

The resignation of NASA’s science chief Alan Stern in April is a symptom of all that’s wrong with the US space agency. Stern has given no official reason for his abrupt departure, which of course makes it seem all the more likely that the reason is one he’d rather not talk about. Many suspect his decision stems from a frustrating relationship with NASA’s leadership, specifically its head Mike Griffin, despite Stern’s assertion that Griffin is “the best administrator NASA has ever had”. Stern’s aim to keep projects on schedule and within budget – both persistent problems for NASA – is hard to fault, but it has sometimes caused a collision of priorities.

A highly respected planetary scientist, Stern has been seen as a true voice of science at NASA, favouring projects that actually teach us something about the universe. But increasingly, NASA seems compelled to support popular programmes that pander to the romanticised American vision of space exploration. Griffin has frozen the budget for fundamental science to fund a manned return mission to the moon – a political rather than scientific venture. Stern also tried to reduce the heavy focus of planetary missions on Mars, which has come at the expense of the outer planets.

The crunch seems to have come over Stern’s decision in March to shut down Opportunity, one of the two Mars rovers currently exploring the planet’s surface. Griffin was not informed of that decision, and when he found out, he reversed it. Whatever the demands of etiquette, Stern’s decision made sense: the rovers have been an immensely successful testament to the power of robotic exploration, but they have long fulfilled their objectives. Opportunity and Spirit can still gather useful data, but the real problem was that the public loves them: the planned shutdown became headline news and provoked objections in Congress.

The rovers are now portrayed like pets: newspapers talked about Opportunity being ‘put to sleep’ rather than switched off. This pathetic fallacy is a projection of the longing to put humans on Mars. The irony is that a populist commitment to cripplingly expensive human spaceflight projects will ultimately give the taxpayer far less value for money than the kind of missions Stern supported. For now, that kind of absurd sentimentality has deprived NASA of a highly capable head of science.

*****

When scientists submit papers for publication, they usually enter into an unwritten contract of confidentiality with the journal: the paper will not be disseminated outside the peer-review process, and the reviewers’ identities will not be disclosed to the authors.

The pharmaceutical company Pfizer has decided that this arrangement should be subordinate to its own interests. During a lawsuit last year over alleged side effects of its painkillers Celebrex and Bextra, it subpoenaed the New England Journal of Medicine (NEJM) to release the reviews and reviewers’ identities for papers published on the drugs, along with details of the journal’s internal editorial deliberations. The NEJM’s refusal has now been upheld by a federal court in Massachusetts.

Pfizer’s lawyers say that the information could help to exonerate the company in deciding to put the drugs on sale. Bextra was withdrawn in 2005 after claims that it could cause heart attacks and strokes; Celebrex remains on the market.

“The public has no interest in protecting the editorial process of a scientific journal”, the lawyers say. But the public has every interest in knowing that scientific claims will be checked out by independent experts who not only are guaranteed anonymity but also do not expose themselves to the danger of litigation. The best reviewers might otherwise decline the task rather than take that risk. A counter-argument is that information relevant to public health should not be kept confidential – but drug companies are after all under no obligation to disclose their own tests and trials.

Besides, Pfizer has not specified what it hoped to find in the documents. One interpretation is that the company is simply fishing for anything that might help its case, rather than acting on a belief that the NEJM holds some pivotal evidence. The court’s decision is the right one, but will it persuade drug companies that they cannot rewrite the rules by which science is conducted?

*****

The new head of the Human Fertilisation and Embryology Authority (HFEA), Renaissance historian Lisa Jardine, has certainly begun her role during ‘interesting times’. The impending vote on the Human Fertilisation and Embryology Bill crystallizes several moral dilemmas about today’s research and practice in these areas, and threatens to heighten the polarization they induce. Whatever positions Jardine takes are sure to upset some vocal group or other.

Perhaps this is why the appointment of someone used to taking the long view, and accustomed also to the hard knocks of public life, makes sense. Certainly, Jardine’s popularizing instincts seem right for the HFEA just now: she considers public education about fertility issues (“something people need to know about”) as important as the regulatory responsibilities. The HFEA, while not exactly an opaque bureaucracy, has seldom previously shown an explicit commitment to inform.

And now is the time to do it. So far, it seems that the kind of misinformation about the bill spread by Catholic officials and other religious groups – talk of animal-human ‘cybrid’ embryos in research as ‘of Frankenstein proportion’ – has not significantly dented a public appreciation of the benefits such research could bring. (The ‘animal’ component here is a mere shell for human genes.) But it’s never a good idea to underestimate the determination of zealots.