Friday, September 19, 2008

Opening the door to Hogwarts
[This is how I originally wrote my latest story for Nature’s online news. It is about another piece of creative thinking from this group at Shanghai Jiao Tong University. I was particularly struck by the milk-bottle effect that John Pendry told me about – I’d never thought about it before, but it’s actually quite a striking thing. (The same applies to water in a glass, but it’s more effective with milk.) John says that it is basically because, as one can show quite easily, no light ray can pass through the glass wall that does not also pass through some milk.

Incidentally, I have to suspect that John Pendry must be a candidate for some future Nobel for his work in this area, though probably not yet, as the committee would want to see metamaterials prove their worth. The same applies to Eli Yablonovitch and Sajeev John for their work on photonic crystals. Some really stimulating physics has come out of both of these ideas.

The photo, by the way, was Oliver Morton’s idea.]

Scientists show how to make a hidden portal

In a demonstration that the inventiveness of physicists is equal to anything fantasy writers can dream up, scientists in China have unveiled a blueprint for the hidden portal in King’s Cross railway station through which Harry Potter and his chums catch the train to Hogwarts.

Platform Nine and Three-Quarters already exists at King’s Cross in London, but visitors attempting the Harry Potter manoeuvre of running at the wall and trusting to faith will be in for a rude shock.

Xudong Luo and colleagues at Shanghai Jiao Tong University have figured out what’s missing. In two preprints, they describe a method for concealing an entrance so that what looks like a blank wall actually contains invisible openings [1,2].

Physicist John Pendry of Imperial College in London, whose theoretical work laid the foundations of the trick, agrees that there is a whiff of wizardry about it all. “It’s just magic”, he says.

This is the latest stunt of metamaterials, which have already delivered invisibility cloaks [3] and other weird manipulations of light. Metamaterials are structures pieced together from ‘artificial atoms’, tiny electrical devices that allow the structure to interact with light in ways that are impossible for ordinary substances.

Some metamaterials have a negative refractive index, meaning that they bend light the ‘wrong’ way. This means that an object within the metamaterial can appear to float above it. A metamaterial invisibility shield, meanwhile, bends light smoothly around an object at its centre, like water flowing around a rock in a river. The Shanghai group recently showed how the object can be revealed again with an anti-invisibility cloak [4].
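The ‘wrong-way’ bending falls straight out of Snell’s law once the refractive index is allowed to go negative. Here is a minimal numerical sketch of that point (an illustration of the textbook formula only, not a simulation from the papers discussed here):

```python
import math

def refraction_angle(theta_i_deg, n1, n2):
    """Snell's law: n1*sin(theta_i) = n2*sin(theta_t).
    A negative n2 puts the transmitted ray on the *same* side of the
    surface normal as the incident ray -- the 'wrong' way."""
    theta_i = math.radians(theta_i_deg)
    s = n1 * math.sin(theta_i) / n2
    return math.degrees(math.asin(s))

# Ordinary glass (n = 1.5): the ray bends towards the normal.
print(refraction_angle(30, 1.0, 1.5))   # ~19.5 degrees
# A negative-index material (n = -1.5): same magnitude, opposite sign.
print(refraction_angle(30, 1.0, -1.5))  # ~-19.5 degrees
```

The sign flip is the whole trick: rays refracted the wrong way can be made to converge and reconstruct an image, which is what allows a negative-index slab or shell to project an object’s appearance somewhere it is not.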

Now they have worked out in theory how to hide a doorway. The trick is to create an object that, because of its unusual interactions with light, looks bigger than it really is. A pillar made of such stuff, placed in the middle of an opening in a wall, could appear to fill the gap completely, whereas in fact there are open spaces to each side.

Pendry and his coworker S. Anantha Ramakrishna demonstrated the basic principle in 2003, when they showed that a cylinder of metamaterial could act as a magnifying lens for an object inside it [5].

“When you look at a milk bottle, you don’t see the glass”, Pendry explains. Because of the way in which the milk scatters light, “the milk seems to go right to the edge of the bottle.” He and Ramakrishna showed that with a negative-refractive index metamaterial, an object in the bottle could be magnified on the surface.

And now Luo and colleagues have shown that an even more remarkable effect is possible: the milk can appear to be outside the bottle. “It’s like a three-dimensional projector”, says Pendry. “I call it a super-milk bottle.”

The Chinese team opt for the rather more prosaic term “superscatterer”. They show that such an object could be made from a metal core surrounded by a metamaterial with a negative refractive index [1].

The researchers have calculated how light interacts with a rectangular superscatterer placed in the middle of a wide opening in a wall, and find that, for the right choice of sizes and metamaterial properties, the light bounces back just as it would if there were no opening [2].

If someone passes through the concealed opening, the researchers find, it becomes momentarily visible before disappearing again once they are on the other side.

So “platform nine and three-quarters is realizable”, the Shanghai team says. “This is terrific fun”, says Pendry. He feels that the effect is even more remarkable than the invisibility cloak, because it seems so counter-intuitive that an object can project itself into empty space.

But the calculations so far only show concealment for microwave radiation, not visible light. Pendry says that the problem in using visible-light metamaterials – which were reported last month [6,7] – is that currently they tend to absorb some light rather than scattering it all into the magnified image, making it hard to project the image a significant distance beyond the object’s surface. So openings hidden from the naked eye aren’t likely “until we get on top of these materials”, he says.


References
1. Yang, T. et al. http://arxiv.org/abs/0807.5038 (2008).
2. Luo, X. et al. http://arxiv.org/abs/0809.1823 (2008).
3. Schurig, D. et al. Science 314, 977-980 (2006).
4. Chen, H., Luo, X., Ma, H. & Chan, C. T. http://arxiv.org/abs/0807.4973 (2008).
5. Pendry, J. B. & Ramakrishna, S. A. J. Phys.: Condens. Matter 15, 6345-6364 (2003).
6. Valentine, J. et al. Nature doi:10.1038/nature07247 (2008).
7. Yao, J. et al. Science 321, 930 (2008).

Wednesday, September 17, 2008

Don't mention the 'C' word

I’m beginning to wonder whether I should be expecting the science police to come knocking on my door. After all, my latest book contains images of churches, saints, Jesus and the Virgin Mary. It discusses theology. And, goodness me, I have even taken part in a workshop organized by the Templeton Foundation. I am not sure that being an atheist will be a mitigating factor in my defence.

These dark thoughts are motivated by the fate of Michael Reiss, who has been forced to resign from his position as director of education at the Royal Society over his remarks about creationism in the classroom.

Now, Reiss isn’t blameless in all of this. Critics of his comments are right to say that the Royal Society needs to make it quite clear that creationism is not an alternative to science as a way of understanding the universe and evolution, but is plain wrong. Reiss didn’t appear to do this explicitly in his controversial talk at the British Association meeting. And his remark that “the concerns of students who do not accept the theory of evolution” should be taken “seriously and respectfully” sounds perilously close to saying that those concerns should be given serious consideration, and that one should respect the creationist point of view even while disagreeing with it. The fact is that we should feel obliged to respect only points of view that are respectable, such as religious belief per se. Creationism is not respectable, scientifically, intellectually or indeed theologically (will they tell the kids that in Religious Education?). And if you are going to title your talk “Should creationism be a part of the science curriculum?”, it is reasonable that questions should be asked if you aren’t clearly seen at some point to say “No.”

So, a substantial case for the prosecution, it might seem. But for a start, one might reasonably expect that scientists, who pride themselves on accurate observation, will read his words and not just blunder in with preconceptions. It is hard to see a case, in Reiss’s address, for suggesting that his views differ from those that the Royal Society has restated in conjunction with Reiss’s resignation: “creationism has no scientific basis and should not be part of the science curriculum. However, if a young person raises creationism in a science class, teachers should be in a position to explain why evolution is a sound scientific theory and why creationism is not, in any way, scientific.”

This, to my mind, was the thrust of Reiss’s argument. He quoted from the Department for Children, Schools and Families Guidance on Creationism, published in 2007: “Any questions about creationism and intelligent design which arise in science lessons, for example as a result of media coverage, could provide the opportunity to explain or explore why they are not considered to be scientific theories and, in the right context, why evolution is considered to be a scientific theory.” The point here is that teachers should not be afraid to tackle the issue. They need not (indeed, I feel, should not) bring it up themselves, but if pupils do, they should not shy away by saying something like “We don’t discuss that in a science class.” And there is a good chance that such things will come up. I have heard stories of the genuine perplexity of schoolchildren who have received a creationist viewpoint from parents, whose views they respect, and a conflicting viewpoint from teachers who they also believe are intent on telling them the truth. Such pupils need and deserve guidance, not offhand dismissal. You can be respectful to individuals without having to ‘respect’ the views they hold, and this seems closest to what Reiss was saying.

And there’s nothing that disconcerts teachers more than their being told they must not discuss something. Indeed, that undermines their capacity to teach, just as the current proscription on physical contact with children undermines teachers’ ability to care for them in loco parentis. A fearful teacher is not a good one.

What perhaps irked some scientists more than anything else was Reiss’s remark that “Creationism can profitably be seen not as a simple misconception that careful science teaching can correct. Rather, a student who believes in creationism can be seen as inhabiting a non-scientific worldview, a very different way of seeing the world.” This is simplistic and incomplete as it stands (Gerald Holton has written about the way that a scientific viewpoint in some areas can coexist happily with irrationalism in others), but the basic point is valid. Despite (or perhaps because of) the recent decline in the popularity of the ‘deficit model’ of understanding science, some scientists still doggedly persist in the notion that everyone would be converted to a scientific way of thinking if we could just succeed in drumming enough facts into their heads. Reiss is pointing out that the problem runs much deeper. Science education is essential, and the lack of it helps to pave the way for the kind of spread of ignorance that we can see in some parts of the developed world. But it is foolish and dangerous to imagine that science education alone can undermine an entire culture and environment that inculcates anti-scientific ideas. I suspect that some scientists were angered by Reiss’s comments here because they imply that these scientists’ views of how to ‘convert’ people to a scientific worldview are naïve.

Most troubling of all, however, are the comments from some quarters which make it clear that the real source of outrage stems from the fact that Reiss is an ordained Church of England minister. The implication seems to be that, as a religious believer, he is probably sympathetic to creationism, as if one necessarily follows from the other. That creationism is an unorthodox, indeed a cranky form of Christianity (or of other kinds of fundamentalism – Islam and Judaism have their creationists too) seems here to be ignored or denied. It’s well known that Richard Dawkins sees fundamentalism as the centre of gravity of all religions, and moderate, orthodox views as just the thin end of the wedge. But his remark that “a clergyman in charge of education for the country’s leading scientific organization” is like “a Monty Python sketch” itself has a whiff of fundamentalist intolerance. If we allow that it’s not obvious why a clergyman should have a significantly more profound belief than any other religious believer, this seems to imply that Dawkins would regard no Christian, Muslim, Hindu, Jew or so forth as fit for this job. Perhaps they should be excluded from the Royal Society altogether? Are we now to assume that no professed believer of any faith can be trusted to believe in and argue for a scientific view of the world? I do understand why some might regard these things as fundamentally incompatible, but I would slightly worry about the robustness of a mind that could not live with a little conflict and contradiction in its beliefs.

This situation has parallels to the way the Royal Society has been criticized for its involvement with the Templeton Foundation. I carry no torch for the Templeton, and indeed was on the wary lookout at the Varenna conference above for a hidden agenda. But I found none. It seems to me that the notion of exploring links between science and religion is harmless enough in itself, and it certainly has plenty of historical relevance, if nothing else. No doubt some flaky stuff comes of it, but the Templeton events that I have come across have been of high scientific quality. (I’m rather more concerned about suggestions that the Templeton has right-wing leanings, although that doesn’t seem obvious from their web site – and US rightwingers are usually quite happy to trumpet the fact.) But it seems sad that the RS’s connections with the Templeton have been lambasted not because anyone seems to have detected a dodgy agenda (I understand that the Templeton folks are explicitly unsympathetic to intelligent design, for example) but because it is a religiously based organization. Again, I thought that scientists were supposed to base their conclusions on actual evidence, not assumptions.

In regard to Reiss, I’m not going to start ranting about witch hunts (not least because that is the hallmark of the green-ink brigade). He was rather incautious, and needed to see how easily his words might be misinterpreted. But they have indeed been misinterpreted, and I don’t see that the Royal Society has done itself much of a service by ousting him, particularly as this seems to have been brought about by a knee-jerk response from scientists who are showing signs of ‘Reds (or in this case, Revs) under the bed’ paranoia.

The whole affair reminds me of the case of the Archbishop of Canterbury talking about sharia law, where the problem was not that he said anything so terrible but that he failed to be especially cautious and explicit when using trigger words that send people foaming at the mouth. But I thought scientists considered themselves more objective than that.

Thursday, September 04, 2008

Intelligence and design

Little did I realise when I became a target of criticism from Steve Fuller of Warwick University that I would be able to wear this as a badge of honour. I just thought it rather odd that someone in a department of sociology seemed so indifferent to the foundational principles of his field, preferring to regard it as a branch of psychology rather than an attempt to understand human group behaviour. I take some solace in the fact that his resistance to physics-based ideas seems to have been anticipated by George Lundberg, one of the pioneers of the field, who, in Foundations of Sociology (1939), admits with dismay that ‘The idea that the same general laws may be applicable to both ‘physical’ and societal behavior may seem fantastic and inconceivable to many people.’ I was tempted to suggest that Fuller hadn’t read Lundberg, or Robert Park, Georg Simmel, Herbert Simon and so on, but this felt like the cheap form of rhetoric that prompts authors to say of critics whose opinions they don’t like that ‘they obviously haven’t read my book’. (On the other hand, Fuller’s first assault, on Radio 4’s Today programme, came when he really hadn’t read my book, because it hadn’t been published at that point.)

Anyway, judging from the level of scholarship A. C. Grayling finds (or rather, fails to find) in Fuller’s new book Dissent over Descent, a defence of the notion of intelligent design, maybe my hesitation was generous. But of course one shouldn’t generalize. Grayling has dissected the book in the New Humanist, and we should be grateful to him for sparing us the effort, although he clearly found the task wearisome. But wait a minute – a social scientist writing about evolution? Isn’t that a little like a chemist (sic) writing about social science?

Friday, August 29, 2008

Why less is more in government
[This is the pre-edited version of my latest Muse for Nature’s online news.]

In committees and organizations, work expands to fill the time available while growth brings inefficiency. It’s worth trying to figure out why.

Arguments about the admission of new member states to the European Union have become highly charged since Russia sent tanks into Georgia, which harbours EU aspirations. But there may be another reason to view these wannabe nations cautiously, according to two recent preprints [1,2]: decision-making bodies may not be able to exceed about 20 members without detriment to their efficiency.

Already the EU, as well as its executive branch the European Commission, has 27 members, well in excess of the putative inefficiency threshold. And negotiations in Brussels have become notorious for their bureaucratic wrangling and inertia. The Treaty of Lisbon, which proposes various reforms in an attempt to streamline the EU’s workings, implicitly recognizes the overcrowding problem by proposing a reduction in the number of Commissioners to 18. But as if to prove the point, Ireland rejected it in June.

It’s not hard to pinpoint the problem with large committees. The bigger the group, the more factious it is liable to be, and it gets ever harder to reach a consensus. This has doubtless been recognized since time immemorial, but it was first stated explicitly in the 1950s by the British historian C. Northcote Parkinson. He pointed out how the executive governing bodies in Britain since the Middle Ages, called cabinets since the early seventeenth century, tended always to expand in inverse proportion to their ability to get anything done.

Parkinson showed that British councils and cabinets since 1257 seemed to go through a natural ‘life cycle’: they grew until they exceeded a membership of about 20, at which point they were replaced by a new body that eventually suffered the same fate. Parkinson proposed that this threshold be called the ‘coefficient of inefficiency’.

Stefan Thurner and colleagues at the Medical University of Vienna have attempted to put Parkinson’s anecdotal observations on a solid theoretical footing [1,2]. Cabinets are now a feature of governments worldwide, and Thurner and colleagues find that most of those from 197 countries have between 13 and 20 members. What’s more, the bigger the cabinet, the less well it seems to govern the country, as measured for example by the Human Development Index, used by the United Nations Development Programme, which takes into account such factors as life expectancy, literacy and gross domestic product.

Thurner and colleagues have tried to understand where this critical mass of 20 comes from by using a mathematical model of decision-making in small groups [1]. They assume that each member may influence the decisions of a certain number of others, so that they form a complex social network. Each adopts the majority opinion of those to whom they are connected provided that this majority exceeds a certain threshold.

For a range of model parameters, a consensus is always possible for fewer than 10 members – with the exception of 8. Above this number, consensus becomes progressively harder to achieve. And the number of ways a ‘dissensus’ may arise expands significantly beyond about 19-21, in line with Parkinson’s observations.
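The flavour of such threshold-majority dynamics can be captured in a toy simulation. The sketch below is my own illustrative construction, not the Vienna group’s actual model: the random contact network, the 60-percent threshold and the synchronous update rule are all assumed for the sake of the example.

```python
import random

def reaches_consensus(n, k=3, threshold=0.6, steps=50, seed=0):
    """Toy majority dynamics: n members each hold opinion 0 or 1 and
    listen to k randomly chosen colleagues, adopting the majority view
    only when it exceeds `threshold`. Returns True if everyone ends up
    agreeing within the allotted number of update rounds."""
    rng = random.Random(seed)
    opinions = [rng.randint(0, 1) for _ in range(n)]
    contacts = [rng.sample([j for j in range(n) if j != i], min(k, n - 1))
                for i in range(n)]
    for _ in range(steps):
        new = list(opinions)
        for i in range(n):
            votes = [opinions[j] for j in contacts[i]]
            frac = sum(votes) / len(votes)
            if frac > threshold:          # strong majority for 1
                new[i] = 1
            elif frac < 1 - threshold:    # strong majority for 0
                new[i] = 0
        if new == opinions:               # dynamics have frozen
            break
        opinions = new
    return len(set(opinions)) == 1

# Sweep the 'cabinet' size and estimate how often consensus is reached.
for size in (5, 10, 25):
    runs = [reaches_consensus(size, seed=s) for s in range(200)]
    print(size, sum(runs) / len(runs))
```

Sweeping the group size in this way shows the qualitative effect at issue: as membership grows, the number of locked-in minority clusters multiplies and full agreement becomes ever rarer, which is the behaviour the authors quantify far more carefully.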

Why are eight-member cabinets anomalous? This looks like a mere numerical quirk of the model chosen, but it’s curious that no eightfold cabinets appeared in the authors’ global survey. Historically, only one such cabinet seems to have been identified: the Committee of State of the British king Charles I, whose Parliament rebelled and eventually executed him.

Now the Austrian researchers have extended their analysis of Parkinson’s ideas to the one for which he is best known: Parkinson’s Law, which states that work expands to fill the time available [2]. This provided the title of the 1957 book in which Parkinson’s essays on governance and efficiency were collected.

Parkinson regarded his Law as a corollary of the inevitable expansion of bureaucracies. Drawing on his experience as a British civil servant, he pointed out that officials aim to expand their own mini-empires by gathering a cohort of subordinates. But these simply make work for each other, dwelling over minutiae that a person lacking such underlings would have sensibly prioritized and abbreviated. Dare I point out that Nature’s editorial staff numbered about 13 when I joined 20 years ago, and now numbers something like 33 – yet the editors are no less overworked now than we were then, even though the journal is basically the same size.

Parkinson’s explanation for this effect focused on the issue of promotion, which is in effect what happens to someone who acquires subordinates. His solution to the curse of Parkinson’s Law and the formation of over-sized, inefficient organizations is to engineer a suitable retirement strategy such that promotion remains feasible for all.

With promotion, he suggested, individuals progress from responsibility to distinction, dignity and wisdom (although finally succumbing to obstruction). Without it, the progression is instead from frustration to jealousy to resignation and oblivion, with a steady decrease in efficiency. This has become known as the ‘Prince Charles Syndrome’, after the British monarch-in-waiting who seems increasingly desperate to find a meaningful role in public life.

Thurner and colleagues have couched these ideas in mathematical terms by modelling organizations as a throughflow of staff, and they find that as long as promotion prospects can be sufficiently maintained, exponential growth can be avoided. This means adjusting the retirement age accordingly. With the right choice (which Parkinson called the ‘pension point’), the efficiency of all members can be maximized.

Of course, precise numbers in this sort of modelling should be taken with a pinch of salt. And even when they seem to generate the right qualitative trends, it doesn’t necessarily follow that they do so for the right reasons. Yet correlations like those spotted by Parkinson, and now fleshed out by Thurner and colleagues, do seem to be telling us that there are natural laws of social organization that we ignore at our peril. The secretary-general of NATO has just made positive noises about Georgia’s wish for membership. This may or may not be politically expedient; but with NATO membership currently at a bloated 26, he had better at least recognize what the consequences might be for the organization’s ability to function.

References

1. Klimek, P. et al. Preprint http://arxiv.org/abs/0804.2202 (2008).
2. Klimek, P. et al. Preprint http://arxiv.org/abs/0808.1684 (2008).

Friday, August 08, 2008

Crime and punishment in the lab
[This is the uncut version of my latest Muse article for Nature’s online news.]

Before we ask whether scientific conduct is dealt with harshly enough, we need to be clear about what punishment is meant to achieve.

Is science too soft on its miscreants? That could be read as the implication of a study published in Science, which shows that 43 percent of a small sample of scientists found guilty of misconduct remained employed subsequently in academia, and half of them continued to turn out a paper a year [1].

Scientists have been doing a lot of hand-wringing recently about misconduct in their ranks. A commentary in Nature [2] proposed that many such incidents go unreported, and suggested ways to improve that woeful state of affairs, such as adopting a ‘zero-tolerance culture’. This prompted several respondents to maintain that matters are even worse, for example because junior researchers see senior colleagues benefiting from ‘calculated, cautious dishonesty’ or because some countries lack regulatory bodies to police ethical breaches [3-5].

All this dismay is justified to the extent that misconduct potentially tarnishes the whole community, damaging the credibility of science in the eyes of the public. Whether the integrity of the scientific literature suffers seriously is less clear – the more important the false claim, the more likely it is to be uncovered quickly as others scrutinize the results or fail to reproduce them. This has been the case, for example, with the high-profile scandals and controversies over the work of Jan Hendrik Schön in nanotechnology, Hwang Woo-suk in cloning and Rusi Taleyarkhan in bench-top nuclear fusion.

But the discussion needs to move beyond these expressions of stern disapproval. For one thing, it isn’t clear what ‘zero tolerance’ should mean when misconduct is such a grey area. Everyone can agree that fabrication of data is beyond the pale; but as a study three years ago revealed [6], huge numbers of scientists routinely engage in practices that are questionable without being blatantly improper: using another’s ideas without credit, say, or overlooking others’ use of flawed data. Papers that inflate their apparent novelty by failing to acknowledge the extent of previous research are tiresomely common.

And it is remarkable how many austere calls for penalizing scientific misconduct omit any indication of what such penalties are meant to achieve. Such a situation is inconceivable in conventional criminology. Although there is no consensus on the objectives of a penal system – the relative weights that should be accorded to punishment, public protection, deterrence and rehabilitation – these are at least universally recognized as the components of the debate. In comparison, discussions of scientific misconduct seem all too often to stop at the primitive notion that it is a bad thing.

For example, the US Office of Research Integrity (ORI) provides ample explanation of its commendable procedures for handling allegations of misconduct, while the Office of Science and Technology Policy outlines the responsibilities of federal agencies and research institutions to conduct their own investigations. But where is the discussion of desired outcomes, beyond establishing the facts in a fair, efficient and transparent way?

This is why Redman and Merz’s study is useful. As they say, ‘little is known about the consequences of being found guilty of misconduct’. The common presumption, they say, is that such a verdict effectively spells the end of the perpetrator’s career.

Their conclusions, based on studies of 43 individuals deemed guilty by the ORI between 1994 and 2001, reveal a quite different picture. Of the 28 scientists Redman and Merz could trace, 10 were still working in academic positions. Those who agreed to be interviewed – just 7 of the 28 – were publishing an average 1.3 papers a year, while 19 of the 37 for which publication data were available published at least a paper a year.

Is this good or bad? Redman and Merz feel that the opportunity for redemption is important, not just from a liberal but also a pragmatic perspective. ‘The fact that some of these people retain useful scientific careers is sensible, given that they are trained as scientists’, says Merz. ‘They just slipped up in some fundamental way, and many can rebuild a scientific career or at least use the skills they developed as scientists.’ Besides, he adds, everyone they spoke to ‘paid a substantial price’. All reported financial and personal hardships, and some became physically ill.

But on another level, says Merz, these data ‘could be seen as undermining the deterrent effect of the perception that punishment is banishment, from academia, at least.’ Does the punishment fit the crime?

The scientific community has so far lacked much enthusiasm for confronting these questions – perhaps because misconduct, while a trait found in all fields of human activity, is felt to be uniquely embarrassing to an enterprise that considers itself in pursuit of objective truths. But the time has surely come to face the issue, ideally with more data to hand. In formulating civic penal policy, for example, one would like to know how the severity of sentencing affects crime rates (which might indicate the effectiveness of deterrence), and how different prison regimes (punitive versus educative, say) influence recidivism. And one needs to have a view on whether sanctions such as imprisonment are primarily for the sake of public protection or to mete out punishment.

The same sorts of considerations apply with scientific misconduct, because the result otherwise has a dangerously ad hoc flavour. Just a week ago, the South Korean national committee on bioethics rejected an application by Hwang Woo-suk to resume research on stem cells. Why? Because ‘he engaged in unethical and wrongful acts in the past’, according to one source. But that’s not a reason, it is simply a statement of fact. Does the committee fear that Hwang would do it again (despite the intense scrutiny that would be given to his every move)? Do they think he hasn’t been sufficiently punished yet? Or perhaps that approval would have raised doubts about the rigour of the country’s bioethics procedures? Each of these reasons might be defensible – but there’s no telling which, if any, applies.

One reason why it matters is that by all accounts Hwang is an extremely capable scientist. If he and others like him are to be excluded from making further contributions to their fields because of past transgressions, we need to be clear about why that is being done. We need a rational debate on the motivations and objectives of a scientific penal code.

References

1. Redman, B. K. & Merz, J. F., Science 321, 775 (2008).
2. Titus, S. L. et al., Nature 453, 980-982 (2008).
3. Bosch, X. Nature 454, 574 (2008).
4. Feder, N. & Stewart, W. W. Nature 454, 574 (2008).
5. Nussenzveig, P. A. & Funchal, Z. Nature 454, 574 (2008).
6. Martinson, B. C. et al., Nature 435, 737-738 (2005).

Tuesday, August 05, 2008

Who is Karl Neder?

These people tend to define themselves by what they don’t like, which is usually much the same: relativity, the Big Bang. Einstein. Especially Einstein, poor fellow.

In my novel The Sun and Moon Corrupted, where these words appear, I sought to convey the fact that the group of individuals whom scientists would call cranks, and who submit their ideas with tenacious insistence and persistence to journals such as Nature, have remarkably similar characteristics and obsessions. They tend to express themselves in much the same manner, exemplified in my book by the letters of the fictional Hungarian physicist Karl Neder. And their egocentricity knows no bounds.

I realised that, if I was right in this characterization, it would not be long at all before some of these people became convinced that Karl Neder is based on them. (The fact is that he is indeed loosely based on a real person, but there are reasons why I can be very confident that this person will never identify the fact.)

And so it comes to pass. The first person to cry ‘It’s me!’ seems to be one Pentcho Valev. I do not know who Valev is, but it seems I once (more than once?) had the task of rejecting a paper he submitted to Nature. I remember more than you might imagine about the decisions I made while an editor at Nature, and by no means always because the memory is pleasant. But I fear that Valev rings no bells at all. Nonetheless, says Valev, there are “Too many coincidences: Bulgaria + thermodynamics + Einstein + desperately trying to publish (in Nature) + Phillip [sic] Ball is Nature’s editor at that time and mercilessly rejects all my papers. Yes most probably I am at least part of this Karl Neder. Bravo Phillip Ball! Some may say it is unethical for you to make money by describing the plight of your victims but don't believe them: there is nothing unethical in Einstein zombie world.” (If it is any consolation, Mr Valev, the notion that this book has brought me "fortune" provokes hollow laughter.)

Ah, but this is all so unnervingly close to the terms in which Karl Neder expresses himself (which mimic those of his real-life model). In fact, Valev seems first to have identified ‘his’ voice from a quote from the book in a review in the Telegraph:
‘Actually, what [Neder] says is: "PERPETUUM MOBILE IS CONSTRUCTED BY ME!!!!!!!!!"; his voluminous correspondence being littered with blood-curdling Igorisms of this sort.’
Even I would not have dreamt up the scenario in which Mr Valev is apparently saying to himself “Blood-curdling Igorisms? But that’s exactly like me, damn it!” (Or rather, “LIKE ME!!!!!!!!!”)

Valev continues: “If Philip Ball as Nature’s editor had not fought so successfully against crazy Eastern Europe anti-relativists, those cranks could have turned gold into silver and so the very foundation of Western culture would have been destroyed” – and he quotes from a piece I wrote in which I mentioned how relativistic effects in the electron orbitals of gold atoms are responsible for its reddish tint. This is where I start to wonder if it is all some delicious hoax by the wicked Henry Gee or one of the people who read my book for the Royal Institution book club, and therefore knows that indeed it plunges headlong into alchemy and metallic transmutation in its final chapters. What are you trying to do, turn me paranoid?

Saturday, August 02, 2008

Might religion be good for your health?
[Here is the uncut version of my latest Muse for Nature news online.]

Religion is not a disease, a new study claims, but a protection against it.

Science and religion, anyone? Oh come now, don’t tell me you’re bored with the subject already. Before you answer that, let me explain that a paper in the Proceedings of the Royal Society B [1] has a new perspective on offer.

Well, perhaps not new. In fact it is far older than the authors, Corey Fincher and Randy Thornhill of the University of New Mexico, acknowledge. Their treatment of religion as a social phenomenon harks back to classic works by two of sociology’s founding fathers, Emile Durkheim and Max Weber, who, around the start of the twentieth century, offered explanations of how religions around the world have shaped and been shaped by the societies in which they are embedded.

That this approach has fallen out of fashion tells us more about our times than about its validity. The increasing focus on individualism in the Western world since Durkheim wrote that “God is society, writ large” is reflected in the current enthusiasm for what has been dubbed neurotheology: attempts to locate religious experience in brain activity and genetic predispositions for certain mental states. Such studies might ultimately tell us why some folks go to church and others don’t, but they can say rather little about how a predisposition towards religiosity crystallizes into a relatively small number of institutionalized religions – why, say, the ‘religiously inclined’ don’t simply each have a personal religion.

Similarly, the militant atheists who gnash their teeth at the sheer irrationality and arbitrariness of religious belief will be doomed forever to do so unless they accept Durkheim’s point that, rather than being some pernicious mental virus propagating through cultures, religion has social capital and thus possible adaptive value [2]. Durkheim argued that it once was, and still is in many cultures, the cement of society that maintains order. This cohesive function is as evident today in much of American society as it is in Tehran or Warsaw.

But of course there is a flipside to that. Within Durkheim’s definition of a religion as ‘a unified set of beliefs and practices which unite in one single moral community all those who adhere to them’ is a potential antagonism towards those outside that community – a potential that has become, largely unanticipated, the real spectre haunting the modern world.

It is in a sense the source of this tension that forms the central question of Fincher and Thornhill’s paper. Whereas Weber looked at the different social structures that different religions tended to promote, and Durkheim focused on ‘secular utility’ such as the benefits of social cohesion, Fincher and Thornhill propose a specific reason why religions create a propensity to exclude outsiders. In their view, the development of a religion is a strategy for avoiding disease.

The more a society disperses and mixes with other groups, the more it risks contracting new diseases. ‘There is ample evidence’, the authors say, ‘that the psychology of xenophobia and ethnocentrism is importantly related to avoidance and management of infectious disease.’

Fincher and Thornhill have previously shown that global patterns of social collectivism [3] and of language diversity [4] correlate with the diversity of infectious disease in a manner consistent with avoidance strategies: strangers can be bad for your health. Now they have found that religious diversity is also greater in parts of the world where the risk of catching something nasty from those outside your group (who are likely to have different immunity patterns) is higher.

It’s an intriguing observation. But as with all correlation studies, cause and effect are hard to untangle. Fincher and Thornhill offer the notion that new religions are actively generated as societal markers that inhibit inter-group interactions. One could equally argue, however, that a tendency to avoid contacts with other social groups prevents the spread of some cultural traits at the expense of others, and so merely preserves an intrinsic diversity.

This, indeed, is the basis of some theoretical models for how cultural exchange and transmission occurs [5]. Where opportunities for interaction are fewer, there is more likelihood that several ‘island cultures’ will coexist rather than being consumed by a dominant one.
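The flavour of such models can be conveyed in a few lines of code. What follows is my own toy sketch in the spirit of Axelrod's cultural dissemination model [5], not the implementation used in any of the studies cited here: agents on a ring each carry a vector of cultural traits, interact with a neighbour with probability equal to their current similarity, and, when they do interact, copy one trait on which they still differ. Restricting interaction to neighbours is what lets several 'island cultures' survive rather than one culture absorbing all the rest.

```python
import random

def simulate(n_agents=50, n_features=5, n_traits=10, steps=20000, seed=0):
    """Toy Axelrod-style cultural dissemination on a ring of agents."""
    rng = random.Random(seed)
    # Each agent's 'culture' is a vector of n_features trait values.
    culture = [[rng.randrange(n_traits) for _ in range(n_features)]
               for _ in range(n_agents)]
    for _ in range(steps):
        i = rng.randrange(n_agents)
        j = (i + rng.choice([-1, 1])) % n_agents  # a neighbour on the ring
        a, b = culture[i], culture[j]
        shared = sum(x == y for x, y in zip(a, b))
        # Agents interact with probability equal to their similarity,
        # then copy one trait on which they still differ.
        if 0 < shared < n_features and rng.random() < shared / n_features:
            k = rng.choice([f for f in range(n_features) if a[f] != b[f]])
            a[k] = b[k]
    # Count the distinct surviving cultures (the 'islands').
    return len({tuple(c) for c in culture})
```

Increasing the number of trait values per feature, or cutting the opportunities for interaction, tends to leave more islands standing at the end.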

And the theory of Fincher and Thornhill tells us nothing about religion per se, beyond its simple function as a way of discriminating those ‘like you’ from those who aren’t. It might as well be any other societal trait, such as style of pottery or family names. In fact, compared with such indicators, religion is a fantastically baroque and socially costly means of separating friend from foe. As recent ethnic conflicts in African nations have shown, humans are remarkably and fatefully adept at identifying the smallest signs of difference.

What we have here, then, is very far from a theory of how and why religions arise and spread. The main value of the work may instead reside in the suggestion that there are ‘hidden’ biological influences on the dynamics of cultural diversification. It is also, however, a timely reminder that religion is not so much a personal belief (deluded or virtuous, according to taste) as, in Durkheim’s words, a ‘social fact’.


References

1. Fincher, C. L. & Thornhill, R. Proc. R. Soc. B doi:10.1098/rspb.2008.0688.
2. Wilson, D. S. Darwin’s Cathedral: Evolution, Religion, and the Nature of Society (University of Chicago Press, 2002).
3. Fincher, C. L. et al., Proc. R. Soc. B 275, 1279-1285 (2008).
4. Fincher, C. L. & Thornhill, R. Oikos doi:10.1111/j.0030-1299.2008.16684.x.
5. Axelrod, R. J. Conflict Resolution 41, 203-226 (1997).

Thursday, July 17, 2008

Who says the Internet broadens your horizons?
[Here’s the long version of my latest, understandably shortened Muse for Nature News.]

A new finding that electronic journals create a narrowing of scientific scholarship illustrates the mixed blessings of online access.

It’s a rare scientist these days who does not know his or her citation index, most commonly in the form of the h-index introduced in 2005 by physicist Jorge Hirsch [1]. Proposed as a measure of the cumulative impact of one’s published works, this and related indices are being used informally to rank scientists, whether this be for drawing up lists of the most stellar performers or for assessing young researchers applying for tenure. Increasingly, careers are being weighed up through citation records.
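For readers who haven't met it, the h-index is simply the largest number h such that h of one's papers have been cited at least h times each. A minimal sketch (the function name is mine, not Hirsch's notation):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    # Rank papers from most- to least-cited; h is the last rank at which
    # the paper's citation count still matches or exceeds its rank.
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h
```

So an author whose five papers have been cited 10, 8, 5, 4 and 3 times has h = 4: four papers with at least four citations each, but not five with five.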

All this makes more pressing the question of how papers get cited in the first place: does this provide an honest measure of their worth? A study published in Science by sociologist James Evans at the University of Chicago adds a new ingredient to this volatile ferment [2]. He has shown that the increasing availability of papers and journals online, including what may be decades of back issues, is paradoxically leading to a narrowing of the number and range of papers cited. Evans suggests that this is the result of the way browsing of print journals is being replaced by focused online searches, which tend both to identify more recent papers and to quickly converge on a smaller subset of them.

The argument is that when a journal goes online, fewer people flick through the print version and so there is less chance that readers will just happen across a paper related to their work. Rather, an automated search, or following hyperlinks from other online articles, will take them directly to the most immediately relevant articles.

Evans has compiled citation data for 34 million articles from a wide range of scientific disciplines, some dating back as far as 1945. He has studied how citation patterns changed as many of the journals became available online. On average, a hypothetical journal would, by making five years of its issues available free or commercially online, suffer a drop in the number of its own articles cited from 600 to 200.

That sounds like a bad business model, but in fact there are some important qualifications here. It doesn’t necessarily mean that a journal gets cited less when it goes online, but simply that its citations get focused on fewer distinct articles. And all these changes are set against an ever-growing body of published work, which means that more and more papers are getting cited overall. The changes caused by going online are relative, set within the context of a still widening and deepening universe of citations.

All the same, this means that the trend for online access is making citation patterns narrower than they would be otherwise: fixated on fewer papers and fewer journals.

In some ways, the narrowing is not a bad thing. Online searching can deliver you more quickly to just those papers that are most immediately relevant to your own work, without having to wade through more peripheral material. This may in turn mean that the citation lists in papers are more helpful and pertinent to readers.

Online access also makes it much easier for researchers to check citation details – to look at what a reference actually said, rather than what someone else implies it said. It’s not clear how often this is actually done, however – one study, using mis-citations as a proxy, has suggested that 70-90 percent of literature citations have simply been copied from other reference lists, rather than being directly consulted [3,4]. But at the very least, easier access should reduce the chances of that.

Yet there are two reasons in particular why Evans’ findings are concerning. One is in fact a mixed blessing. With online resources, scientific consensus is reached more quickly and efficiently, because, for example, hyperlinked citations allow you to deduce rapidly which papers others are citing. Some search strategies also rely on consensual views about relevance and significance.

This might mean that less attention, time and effort get wasted down dead ends. But it also means there is more chance of missing something important. “It pushes science in the direction of a press release”, says Evans. “Unless they are picked up immediately, things will be forgotten more quickly.”

Moreover, feedback about the value judgements of others seems to lead to amplification of opinions in a way that is not necessarily linked to ‘absolute’ value [5]. It’s an example of the rich-get-richer or ‘Matthew’ effect, whereby fame becomes self-fulfilling and a few individuals get disproportionate rewards at the expense of other, perhaps equally deserving cases. While highly cited papers may indeed deserve to be, it seems the citation statistics would not look very different if these papers had simply benefited from random amplification of negligible differences in quality [6]. Again, this could happen even with old-style manual searching of journals, but online searches make it more likely.
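The rich-get-richer dynamic is easy to demonstrate in a toy simulation – my own illustrative sketch, not the model used in refs [5,6]. Give every paper identical 'quality' and hand out each new citation with probability proportional to the citations a paper has already received, and a few papers end up with the lion's share while most languish, purely through amplification of random early luck:

```python
import random

def cumulative_advantage(n_papers=100, n_citations=5000, seed=1):
    """Each new citation goes to paper i with probability proportional to
    (citations so far + 1); all papers are identical in 'quality'."""
    rng = random.Random(seed)
    counts = [0] * n_papers
    for _ in range(n_citations):
        total = n_papers + sum(counts)  # sum of (c + 1) over all papers
        r = rng.uniform(0, total)
        acc = 0.0
        # Walk the papers until the accumulated weight passes r.
        for i, c in enumerate(counts):
            acc += c + 1
            if r <= acc:
                counts[i] += 1
                break
    return sorted(counts, reverse=True)
```

The sorted citation counts this produces are strongly skewed – a long tail of rarely cited papers beneath a handful of runaway successes – even though nothing distinguishes the papers at the outset.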

The other worry is that this trend exacerbates the already lamented narrowing of researchers’ horizons. It is by scanning through the contents pages of journals that you find out what others outside your field are doing. If scientists are reading only the papers that are directly relevant to their immediate research, science as a whole will suffer, not least because its tightly drawn disciplines will cease to be fertilized by ideas from outside.

Related to this concern is the possibility of collective amnesia: the past ceases to matter in a desperate bid to keep abreast of the present. Older scientists have probably been complaining that youngsters no longer ‘read the old literature’ for as long as science journals have existed, but it seems that neglecting the history of your field is made more likely with online tools.

There’s a risk of overplaying this issue, however. It’s likely that so-called ‘ceremonial citation’, the token nod to venerable and unread papers, has been going on for a long time. And the increased availability of foundational texts online can only be a good thing. Nonetheless, Evans’ data indicate that online access is driving citations to become ‘younger’ and reducing an article’s shelf-life. This must surely increase the danger of reinventing the wheel. And there is an important difference between having decided that an old paper is not sufficiently relevant to cite, and having assumed it, or having not even known of its existence.

In many ways these trends are just an extension to the scientific research community of things that have been much debated in the broader sphere of news media, where the possibilities for personalization of content lead to a solipsistic outlook in which individuals hear only the things they want to hear. (The awful geek-speak for this – individuated news – itself makes the point, having apparently been coined in ignorance of the fact that individuation already has a different historical meaning.) Instead of reading newspapers, the fear is that people will soon read only the ‘Daily Me.’ Web journalist Steve Outing has said that “90 percent of my daughters’ media consumption is ‘individuated’. For kids today, non-individuated media is outside the norm.” We may be approaching the point where that also applies to young scientists, particularly if it is the model they have become accustomed to as children.

Ultimately, the concerns that Evans raises are thus not a necessary consequence of the mere fact of online access and archives, but stem from the cultural norms within which this material is becoming available. And it is no response – or at least, a futile one – to say that we must bring back the days when scientists would have to visit the library each week and pick up the journals. The efficiency of online searching and the availability of archives are both to be welcomed. But a laissez-faire attitude to this ‘literature market’ could have some unwelcome consequences, in particular the risk of reduced meritocracy, loss of valuable research, and increased parochialism. The paper journal may be on the way out, but we’d better make sure that the journal club doesn’t go the same way.

References
1. J. E. Hirsch, Proc. Natl Acad. Sci. USA 102, 16569-16572 (2005).
2. J. A. Evans, Science 321, 395-399 (2008).
3. M. V. Simkin & V. P. Roychowdhury, Complex Syst. 14, 269-274 (2003).
4. M. V. Simkin & V. P. Roychowdhury, Scientometrics 62, 367-384 (2005).
5. M. J. Salganik et al., Science 311, 854-856 (2006).
6. M. V. Simkin & V. P. Roychowdhury, Annals Improb. Res. 11, 24-27 (2005).

Sunday, July 13, 2008

Is music just for babies?

I’m grateful to a friend for pointing me towards a recent preposterous article on music by Terry Kealey in the Times, suggesting in essence that music is anti-intellectual, regressive and appeals to our baser instincts. Now, I have sparred with Terry before and I know that he likes to be provocative. I don’t want to seem to be rising to the bait like some quivering Verdi aficionado. But really, he shouldn’t be allowed to be so naughty without being challenged. I have to say that his article struck me as a classic case of a little knowledge being a dangerous thing.

His bizarre opening gambit seems to be that music and intelligence are somehow mutually exclusive, so that one may make way for the other. This will come as news to any neuroscientist or psychologist who has ever studied music. A large part of the argument seems to rest on the idea that perfect pitch is a sign of mental incapacity. Isn’t it particularly common in autistic people and children, he asks? Er, no, frankly. Sorry, it’s as simple as that. Terry may be thinking of the fact that children can acquire perfect pitch through learning more easily than adults can – but that’s true of many things, including language (which presumably does not make language an infantile attribute). Perfect pitch is also more common in Chinese people, but I think even a controversialist like Terry might stop short of wanting to say that this proves his point. Merely, it seems to be enhanced in speakers of tonal languages, which stands to reason.

But more to the point – and this is a bit of a giveaway – perfect pitch has nothing to do with musical ability. There is no correlation between the two. It is true that many composers had/have perfect pitch, but that’s no mystery, because as Terry points out, it can be learnt with effort, i.e. with lots of exposure to music. It is, indeed, even possible to have perfect pitch and to be simultaneously clinically tone deaf, since one involves the identification of absolute pitch in single notes and the other of pitch relationships between multiple notes.

Birds too have perfect pitch, we’re told, and so did Neanderthals (thanks to another of Stephen Mithen’s wild speculations, swallowed hook, line and sinker). And don’t birds have music too, showing that it is for bird-brains? Sorry, again no. Anyone who thinks birds have music doesn’t know what music is. Music has syntax and hierarchical patterns. Birdsong does not – it is a linear sequence of acoustic signals. I’m afraid Terry again and again disqualifies himself during the article from saying anything on the subject.

Similarly, he claims that music has only emotional value, not intellectual. So how to explain the well-documented fact that musical training improves children’s IQ? Or that music cognition uses so many different areas of the brain – not just ‘primitive emotional centres’ such as the amygdala but logic-processing centres in the frontal cortex and areas that overlap with those used for handling language syntax? This is a statement of pure prejudice that takes no account of any evidence. ‘To a scientist, music can appear as a throwback to a primeval, swampy stage of human evolution’, Terry claims. Not to any scientist I know.

Finally, we have Terry’s remark that music encourages dictatorships, because Hitler and Stalin loved it, but Churchill and Roosevelt were indifferent. I am not going to insult his intelligence by implying that this is a serious suggestion that warrants a po-faced response, but really Terry, you have to be careful about making this sort of jest in the Times. I can imagine retired colonels all over the country snorting over their toast and marmalade: ‘Good god, the chap has a point!’

I must confess that I find something rather delightful in the fact that there are still people today who will, like some latter-day Saint Bernard of Clairvaux, denounce music as inciting ‘lust and superstition’. It’s wonderful stuff in its way, although one can’t help but scent the faint reek of the attacks on jazz in the early twentieth century, which of course had far baser motivations. Plato shared these worries too – Terry fails to point out that he felt only the right music educated the soul in virtue, while the wrong music would corrupt it. The same was true of Saint Augustine, but in his case it was his very love of music that made him fearful – he was all too aware of the strong effects it could exert, for ‘better’ or ‘worse’. In Terry Kealey’s case, it seems as though all music leaves him feeling vaguely unclean and infantilized, or perhaps just cold. That’s sad, but not necessarily beyond the reach of treatment.

Saturday, July 12, 2008

Were there architectural drawings for Chartres?

Michael Lewis has given Universe of Stone a nice review in the Wall Street Journal. The reason I want to respond to the points he raises is not to score points or pick an argument, but because they touch on interesting issues.

Lewis’s complaint about the absence of much discussion of the sculpture at Chartres is understandable if one is led by the American subtitle to expect a genuine biography of the building. And it’s natural that he would have been. But as my UK subtitle indicates, this is not in fact my aim: this is really a book about the origins of Gothic and what it indicates about the intellectual currents of the twelfth-century renaissance. The Chartrain sculpture doesn’t have so much to say about that (with some notable exceptions that I do mention).

Lewis’s most serious criticism, however, concerns the question of architectural drawings in the period when Chartres was built. As he says (and as he acknowledges I say), drawings for Gothic churches certainly did exist: there are some spectacular ones for Strasbourg and Reims Cathedrals in particular. As I say in my book, ‘These are extremely detailed and executed with high technical proficiency.’ They date from around 1250 onwards.

The question is: were similar drawings used for Chartres? Lewis is in no doubt: ‘analogous drawings would certainly have existed for Chartres.’ That's a level of certainty that other historians of Gothic don't seem to share - unsurprisingly, given that we lack any evidence either way. But most importantly, I would surely and rightly have been hauled over the coals if I had committed the cardinal sin of assuming that one period in the Middle Ages stands proxy for all others. The mid-thirteenth century was a very different time from the late twelfth, in terms of building practices as in many other respects: in particular, architecture became much more professionalized in the early thirteenth century than it had been before. My guess, as it is no more than that, is that if drawings existed for Chartres – which is certainly possible, but nothing more – they would have looked more akin to those of Villard de Honnecourt, made around 1220 or so, which have none of the precision of the Strasbourg drawings. Lewis says that the sophistication of the latter speaks of a mature tradition that must have already existed for a long time. That seems reasonable, until you consider this. Suppose all cathedrals before Chartres had been destroyed. We might, with analogous reasoning to Lewis’s, then look at its flying buttresses and say ‘Well, they certainly must have had good flying buttresses in the 1130s, since these ones are so mature.’ And of course we’d be utterly wrong. (What’s more, the skills needed to make flying buttresses are considerably more demanding than those needed to make scale drawings.)

I think Lewis may have misunderstood my text in places. I never claimed that the architect of Chartres designed it all ‘in his head’. I simply said that this is what architectural historian Robert Branner claimed. (I'm not sure I'd agree with him.) Neither did I say that architectural drawings would all be simply ‘symbolic, showing codified relationships without any real attention to dimension’ – I said that this was true of medieval art and maps.

I’m grateful to Lewis for raising this as an issue, and his comments suggest that it might be good if I spell things out a little more explicitly in the paperback (which, in the US, will probably have a different subtitle!).

Friday, July 04, 2008

Behind the mask of the LHC

[Here is my latest Muse for Nature News, which, bless them, they ran at its extravagant length and complexity.]

The physics that the Large Hadron Collider will explore has tentative philosophical foundations. But that’s a good thing.


Physicists, and indeed all scientists, should rejoice that the advent of the Large Hadron Collider (LHC) has become a significant cultural event. Dubbed the ‘Big Bang machine’, the new particle accelerator at CERN — the European centre for particle physics near Geneva — should answer some of the most profound questions in fundamental physics and may open up a new chapter in our exploration of why the world is the way it is. The breathless media coverage of the impending switch-on is a reassuring sign of the public thirst for enlightenment on matters that could easily seem recondite and remote.

But there are pitfalls with this kind of jamboree. The most obvious is the temptation for hype and false promises about what the LHC will achieve, as though all the secrets of creation are about to come tumbling out of its tunnels. And it is an uneasy spectacle to see media commentators duty-bound to wax lyrical about matters they understandably don’t really grasp. Most scientists are now reasonably alert to the dangers of overselling, even if they sometimes struggle to keep them in view.

It’s also worth reminding spectators that the LHC is no model of ‘normal’ science. The scale and cost of the enterprise are much vaster than those enjoyed by most researchers, and this very fact restricts the freedom of the scientists involved to let their imaginations and intuitions roam. The key experiments are necessarily preordained and decided by committee and consensus, a world away from a small lab following its nose. This is not intrinsically a bad thing, but it is different.

There is, however, a deeper reason to think carefully about what the prospect of the LHC offers. Triumphalism can mask the fact that there are some unresolved questions about the scientific and philosophical underpinnings of the enterprise, which will not necessarily be answered by statistical analyses of the debris of particle collisions. These issues are revealingly explored in a preprint by Alexei Grinbaum, a researcher at the French Atomic Energy Commission (CEA) in Gif-sur-Yvette [1].

Under the carpet

Let’s be clear that high-energy physics is by no means alone in preferring to sweep some foundational loose ends under the carpet so that it can get on with its day-to-day business. The same is true, for example, of condensed-matter physics (which, contrary to media impressions, is what most physicists do) and quantum theory. It is a time-honoured principle of science that a theory can be useful and valid even if its foundations have no rigorous justification.

But the best reason to tease apart the weak joints in the basement of fundamental physics is not in order to expose it as a precarious edifice — which it is not — but because these issues are so interesting in themselves.

Paramount among them, says Grinbaum, is the matter of symmetry. That’s a ubiquitous word in the lexicon of high-energy physics, but it is far from easy for a lay person to see what is meant by it. At root, the word retains its everyday meaning. But what this corresponds to becomes harder to discern when, for example, symmetry is proposed to unite classes of quantum particles or fields.

Controlling the masses

It is symmetry that anchors the notion of the Higgs particle, probably the one target of the LHC that anyone with any interest in the subject will have heard of. It is easy enough to explain that ‘the Higgs particle gives other particles their mass’ (an apocryphal quote has Lenin comparing it to the Communist Party: it controls the masses). And yes, we can offer catchy analogies about celebrities accreting hordes of hangers-on as they pass through a party. But what does this actually mean? Ultimately, the Higgs mechanism is motivated by a need to explain why a symmetry that seemed once to render equivalent two fundamental forces — the electromagnetic and weak nuclear forces — has been broken, so that the two forces now have different strengths and ranges.

This — the ‘symmetry breaking’ of a previously unified ‘electroweak’ force — is what the LHC will primarily probe. The Higgs explanation for this phenomenon fits nicely into the Standard Model of particle physics — the summation of all we currently know about this branch of reality. It is the only component of the Standard Model that remains to be verified (or not).

So far, this is pretty much the story that, if pressed beyond sound bites, the LHC’s spokespeople will tell. But here’s the thing: we don’t truly know what role symmetry does and should play in physical theory.

Practically speaking, symmetry has become the cornerstone of physics. But this now tends to pass as an unexamined truth. The German mathematician Hermann Weyl, who introduced the notion of gauge symmetry (in essence, a description of how symmetry acts on local points in space) in the 1920s, claimed that “all a priori statements in physics have their origin in symmetry”. For him and his contemporaries, laws of physics have to possess certain symmetry properties — Einstein surely had something of this sort in mind when he said that “the only physical theories that we are willing to accept are the beautiful ones”. For physicist Steven Weinberg, symmetry properties “dictate the very existence” of all physical forces — if they didn’t obey symmetry principles, the Universe would find a way to forbid them.

Breaking the pattern


But is the Universe indeed some gloriously symmetrical thing, like a cosmic diamond? Evidently not. It’s a mess, not just at the level of my desk or the arbitrary patchwork of galaxy clusters, but also at the level of fundamental physics, with its proliferation of particles and forces. That’s where symmetry-breaking comes in: when a cosmic symmetry breaks, things that previously looked identical become distinct. We get, among other things, two different forces from one electroweak force.

And the Higgs particle is generally believed to hold the key to how that happened. This ‘particle’ is just a convenient, potentially detectable signature of the broader hypothesis for explaining the symmetry breaking — the ‘Higgs mechanism’. If the mechanism works, there is a particle associated with it.

But the problem with the Higgs mechanism is that it does not and cannot specify how the symmetry is broken. As a result, it does not uniquely determine the mass of the Higgs particle. Several versions of the theory offer different estimates, which vary by a factor of around 100. That’s a crucial difference in terms of how readily the LHC might observe it, if at all. Now, accounts of this search may present this situation blandly as simply a test of competing theories; but the fact is that the situation arises because of ambiguities about what symmetry-breaking actually is.

The issue goes still deeper, however. Isn’t it curious that we should seek an explanation of dissimilar entities in terms of a theory in which they are the same? Suppose you find that the world contains some red balls and some blue ones. Is it more natural to decide that there is a theory that explains red balls, and a different one that explains blue balls, or to assume that red and blue balls were once indistinguishable? As it happens, we already have very compelling reasons to believe that the electromagnetic and weak forces were once unified; but deciding to make unification a general aim of physical theories is quite another matter.

Physics Nobel laureate David Gross has pointed out the apparent paradox in that latter approach: “The search for new symmetries of nature is based on the possibility of finding mechanisms, such as spontaneous symmetry breaking, that hide the new symmetry” [2]. Grinbaum is arguing that it’s worth pausing to think about that assumption. To rely on symmetry arguments is to accept that the resulting theory will not predict the particular outcome you observe, since the symmetry may be broken in an arbitrary way. Only experiments can tell you what the result of the symmetry-breaking is.

Should we trust in beauty?

Einstein’s statement is revealing because it exposes a strand of platonic thinking in modern physics: beauty matters, and it is a vision of beauty based on order and symmetry. Pragmatically speaking, arguments that use symmetry have proved to be fantastically fertile in fundamental physics. But as Weyl’s remark shows, they are motivated only by assumptions about how things ought to be.

A sense of aesthetic beauty is now not just something that physicists discover in the world; it is, in the words of Gian Francesco Giudice, a theoretical physicist at CERN, “a powerful guiding principle for physicists as they try to construct new theories” [3]. They look for ways to build it in. This, as Grinbaum points out, “is logically unsound and heuristically doubtful”.

Grinbaum says that such aesthetic judgements give rise to ideas about the ‘naturalness’ of theories. This notion of naturalness figures in many areas of science, Giudice points out, but is generally dangerously subjective: it is ‘natural’ to us that the solar system is heliocentric, but it wasn’t at all to the ancient Greeks, or indeed to Tycho Brahe, the sixteenth-century Danish astronomer.

But Giudice explains that “a more precise form of naturalness criterion has been developed in particle physics and it is playing a fundamental role in the formulation of theoretical predictions for new phenomena to be observed at the LHC”. The details of this concept of naturalness are technical, but in essence it purports to explain why symmetry-breaking of the electroweak interaction left gravity so much weaker than the weak force (its name notwithstanding). The reasoning here leads to the prediction that production of the Higgs particle will be accompanied by a welter of other new particles not included in the Standard Model. The curious thing about this prediction is that it is motivated not by the need to make any theory work out, but simply by the desire to remove the apparent ‘unnaturalness’ of the imbalance in the strengths of the two forces. It is basically a philosophical matter of what ‘seems right’.

Workable theories


There are also fundamental questions about why physics has managed to construct all manner of workable theories — of electromagnetism, say — without having to postulate the Higgs particle at all. The simple answer is that, so long as we are talking about energies well below the furious levels at which the Higgs particle becomes apparent, and which the LHC hopes to create, it is enough to subsume the whole Higgs mechanism within the concept of mass. This involves creating what physicists call an effective field theory, in which phenomena that become explicit above a certain energy threshold remain merely implicit in the parameters of the theory. Much the same principle permits us to use Newtonian mechanics when objects’ velocities are much less than the speed of light.
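The Newtonian example can be made explicit with a standard textbook expansion (added here for illustration; it is not part of the original argument). Expanding the relativistic energy of a moving body in powers of v/c gives

```latex
E = \frac{mc^2}{\sqrt{1 - v^2/c^2}}
  = mc^2 + \frac{1}{2}mv^2 + \frac{3}{8}\,\frac{mv^4}{c^2} + \cdots
```

At everyday speeds the higher-order terms are vanishingly small, and we recover the familiar Newtonian kinetic energy ½mv². The relativistic physics hasn’t gone away; it is merely implicit — which is exactly the sense in which the Standard Model itself may be an effective theory of something deeper.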

Effective field theories thus work only up to some limiting energy. But Grinbaum points out that this is no longer just a practical simplification but a methodology: “Today physicists tend to think of all physical theories, including the Standard Model, as effective field theories with respect to new physics at higher energies.” The result is an infinite regression of such theories, and thus a renunciation of the search for a ‘final theory’ — entirely the opposite of what you might think physics is trying to do, if you judge from popular accounts (or, occasionally, from physicists’ own words).

Effective field theories are a way of not having to answer everything at once. But if they simply mount up into an infinite tower, it will be an ungainly edifice at best. As philosopher of science Stephan Hartmann at Tilburg University in the Netherlands has put it, the predictive power of such a composite theory would steadily diminish “just as the predictive power of the Ptolemaic system went down when more epicycles were added” [4].

Einstein seemed to have an intimation of this. He expressed discomfort that his theory of relativity was based not simply on known facts but on an a priori postulate about the speed of light. He seemed to sense that this made it less fundamental.

These and other foundational issues are not new to LHC physics, but by probing the limits of the Standard Model the new collider could bring them to the fore. All this suggests that it would be a shame if the results were presented simply as data points to be compared against theoretical predictions, as though to coolly assess the merits of various well-understood proposals. The really exciting fact is that the LHC should mark the end of one era — defined by the Standard Model — and the beginning of the next. And at this point, we do not even know the appropriate language to describe what will follow — whether, for example, it will be rooted in new symmetry principles (such as supersymmetry, which relates hitherto distinct particles), or extra dimensions, or something else. So let’s acknowledge and even celebrate our ignorance, which is after all the springboard of the most creative science.

References
1. Grinbaum, A. Preprint at http://www.arxiv.org/abs/0806.4268 (2008).
2. Gross, D. in Conceptual Foundations of Quantum Field Theory, ed. Cao, T. Y. (Cambridge Univ. Press, 1999).
3. Giudice, G. F. Preprint at http://www.arxiv.org/abs/0801.2562 (2008).
4. Hartmann, S. Stud. Hist. Phil. Mod. Phys. 32, 267-304 (2001).

Wednesday, June 25, 2008

Birds that boogie
[I reckon this one speaks for itself. It is on Nature News. I just hope Snowball can handle the fame.]

YouTube videos of dancing cockatoos are not flukes but the first genuine evidence of animal dancing

When Snowball, a male sulphur-crested cockatoo, was shown last year in a YouTube video apparently moving in time to pop music, he became an internet sensation. But only now has his performance been subjected to scientific scrutiny. And the conclusion is that Snowball really can dance.

Aniruddh Patel of the Neurosciences Institute in La Jolla, California, and his colleagues say that Snowball’s ability to shake his stuff is much more than a cute curiosity. It could shed light on the biological bases of rhythm perception, and might even hold implications for the use of music in treating neurodegenerative disease.

‘Music with a beat can sometimes help people with Parkinson’s disease to initiate and coordinate walking’, says Patel. ‘But we don’t know why. If nonhuman animals can synchronize to a beat, what we learn from their brains could be relevant for understanding the mechanisms behind the clinical power of rhythmic music in Parkinson’s.’

Anyone watching Snowball can see that his foot-tapping seems to be well synchronized with the musical beat. But it was possible that in the original videos he was using timing cues from people dancing off camera. His previous owner says that he and his children would encourage Snowball’s ‘dancing’ with rhythmic gestures of their own.

Genuine ‘dancing’ – the ability to perceive and move in time with a beat – would also require that Snowball adjust his movements to match different rhythmic speeds (tempi).

To examine this, Patel and his colleagues went to meet Snowball. He had been left by his previous owner at a bird shelter, Birdlovers Only Rescue Service Inc. in Schererville, Indiana, in August 2007, along with a CD containing a song to which his owner said that Snowball liked to dance: ‘Everybody’ by the Backstreet Boys.

Patel and colleagues videoed Snowball ‘dancing’ in one of his favourite spots, on the back of an armchair in the office of Birdlovers Only. They altered the tempi of the music in small steps, and studied whether Snowball stayed in synch.

This wasn’t as easy as it might sound, because Snowball didn’t ‘dance’ continuously during the music, and sometimes he didn’t get into the groove at all. So it was important to check whether the episodes of apparent synchrony could be down to pure chance.

‘On each trial he actually dances at a range of tempi’, says Patel. But the lower end of this range seemed to correlate with the beat of the music. ‘When the music tempo was slow, his tempo range included slow dancing. When the music was fast, his tempo range didn’t include these slower tempi.’

A statistical check on these variations showed that the correlation between the music’s rhythm and Snowball’s slower movements was very unlikely to have happened by chance. ‘To us, this shows that he really does have tempo sensitivity, and is not just ‘doing his own thing’ at some preferred tempo’, says Patel.
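The paper itself isn’t quoted here, but the logic of such a chance check can be sketched as a toy permutation test: keep the bird’s dance bouts fixed, shuffle which song tempo goes with which trial, and ask how often a random pairing tracks the beat as well as the real one did. Everything below — the 5% tempo tolerance, the scoring rule, the synthetic tempi — is invented for illustration, not taken from Patel’s study.

```python
import random

def tempo_match(dance_tempi, music_tempo, tol=0.05):
    """Fraction of a trial's dance tempi within `tol` (relative) of the music tempo."""
    hits = sum(1 for t in dance_tempi if abs(t - music_tempo) / music_tempo <= tol)
    return hits / len(dance_tempi)

def permutation_test(bouts, music_tempi, n_perm=2000, seed=1):
    """bouts[i] lists the dance tempi seen in trial i; music_tempi[i] is that
    trial's music tempo. Shuffling the trial labels asks: how often would a
    chance pairing of dance bouts with songs track the beat this well?"""
    rng = random.Random(seed)

    def score(pairs):
        return sum(tempo_match(b, m) for b, m in pairs) / len(pairs)

    observed = score(list(zip(bouts, music_tempi)))
    exceed = 0
    for _ in range(n_perm):
        shuffled = music_tempi[:]
        rng.shuffle(shuffled)
        if score(list(zip(bouts, shuffled))) >= observed:
            exceed += 1
    p_value = (exceed + 1) / (n_perm + 1)  # add-one smoothing avoids p = 0
    return observed, p_value

# Toy data: in each trial the bird dances at the song's tempo (and 2% off it).
music = [1.8, 2.0, 2.2, 2.4, 2.6]          # beats per second, hypothetical
bouts = [[m, m * 1.02] for m in music]
obs, p = permutation_test(bouts, music)
```

With dance tempi that genuinely track the music, shuffled pairings almost never score as well, so the p-value comes out small; a bird simply ‘doing his own thing’ at one preferred tempo would score the same under any shuffle, and the test would find no synchrony.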

He says that Snowball is unlikely to be unique. Adena Schachner of Harvard University has also found evidence of genuine synchrony in YouTube videos of parrots, and in studies of perhaps the most celebrated ‘intelligent parrot’, the late Alex, trained by psychologist Irene Pepperberg [1]. Patel [2] and Schachner will both present their findings at the 10th International Conference on Music Perception and Cognition in Sapporo, Japan, in August.

Patel and his colleagues hope to explore whether Snowball’s dance moves are related to the natural sexual-display movements of cockatoos. Has he invented his own moves, or simply adapted those of his instinctive repertoire? Will he dance with a partner, and if so, will that change his style?

But the implications extend beyond the natural proclivities of birds. Patel points out that Snowball’s dancing behaviour is better than that of very young children, who will move to music but without any real synchrony to the beat [3]. ‘Snowball is better than a typical 2-4 year old, but not as good as a human adult’, he says. (Some might say the same of Snowball’s musical tastes.)

This suggests that a capacity for rhythmic synchronization is not a ‘musical’ adaptation, because animals have no genuine ‘music’. The question of whether musicality is biologically innate in humans has been highly controversial – some argue that music has served adaptive functions that create a genetic predisposition for it. But Snowball seems to be showing that an ability to dance to a beat does not stem from a propensity for music-making.

References

1. Pepperberg, I. M. Alex & Me (HarperCollins, 2008).
2. Patel, A. D. et al., Proc. 10th Int. Conf. on Music Perception and Cognition, eds M. Adachi et al. (Causal Productions, Adelaide, in press).
3. Eerola, T. et al., Proc. 9th Int. Conf. on Music Perception and Cognition, eds M. Baroni et al. (2006).

Wednesday, June 18, 2008

Fly me to the moon?

Last Monday I took part in a debate at the Royal Institution on human spaceflight: is it humanity’s boldest endeavour or one of our greatest follies? My opponent was Kevin Fong of UCL, who confirmed all my initial impressions: he is immensely personable, eloquent and charming, and presents the sanest and least hyperbolic case for human spaceflight you’re ever likely to hear. All of which was bad news for my own position, of course, but in truth this was a debate I was never going to win: a show of hands revealed an overwhelming majority in favour of sending humans into space at the outset, and that didn’t change significantly (I was gratified that I seemed to pick up a few of the swing voters). And perhaps rightly so: if Kevin was put in charge of prioritizing and publicizing human spaceflight in the west, I suspect I’d find it pretty unobjectionable too. Sadly, we have instead the likes of the NASA PR machine and the bloomin’ Mars Society. (The only bit of hype I detected from Kevin all evening was about the importance to planetary geology of the moon rocks returned by Apollo – he seemed to accept (understandably, as an anaesthetist) the absurdly overblown claims of Ian Crawford.) In any event, it was very valuable to hear the ‘best case’ argument for human spaceflight, so that I could sharpen my own views on the matter. As I said then, I’m not against it in principle (I’m more of an agnostic) – but my goodness, there’s a lot of nonsense said and done in practice, and it seems even the Royal Astronomical Society bought some of it. Here, for what it is worth, is a slightly augmented version of the talk I gave.

*****

Two weeks ago I watched the documentary In the Shadow of the Moon, and was reminded of how exciting the Apollo missions were. Like most boys growing up in the late 60s, I wanted to be an astronaut. I retain immense respect for the integrity, dedication and courage of those who pioneered human spaceflight.

So it’s not idly that I’ve come to regard human spaceflight today as a monumental waste of money. I’ve been forced to this conclusion by the stark facts of how little it has achieved and might plausibly achieve in the near future, in comparison to what can be done without it.

Having watched those grainy, monochrome pictures in 1969, and having duly built my Airfix lunar modules and moon buggies, as a teenager I then watched Carl Sagan’s TV series Cosmos at the start of the 1980s. Now, Sagan did say ‘The sky calls to us; if we do not destroy ourselves we will one day venture to the stars.’ And I suspect he is right. But he, like me, didn’t seem to be in any great hurry about that. Or rather, I think he felt that we were essentially going there already, because Sagan drew on the results then just arriving from the Voyager spacecraft, launched only a year or so before the series was made and at that time investigating Jupiter and Saturn. He also reaped the bounty of the earlier Mariner missions to Venus and Mars, which offered images that remain stunning even now. The moon landings were a fantastic human achievement, but it was the unmanned missions that I encountered through Cosmos that really opened my eyes to the richness and the strangeness of the universe. Even in Technicolor, the moon is a drab place; but here, thanks to the Mariners and Voyagers, were worlds of swirling colour, of ice sheets and volcanoes and dust storms and molten sulphur. Did I feel short-changed that we weren’t sending humans to these places? On the contrary, I think I sensed even then that humans don’t belong here; they would simply be absurd, insignificant, even unwelcome intruders.

There had been Skylab in the 1970s, of course, in Earth orbit for six years, and that seemed kind of fun but now I recall a nagging sense that I wasn’t sure quite what they were doing up there, beyond a bit of microgravitational goofing around. And then came the space shuttle, and the Challenger disaster of 1986, and I began to wonder, what exactly is the aim of all this tentative astronautics at the edge of space?

And all the while that human spaceflight was losing its way, unmanned missions were offering us jaw-dropping sights. The Magellan mission wowed us on Venus, the Galileo mission gave thrilling views of Jupiter and its moons, and the rovers Opportunity and Spirit continue to wander on Mars sending back breathtaking postcards. And most recently, the Cassini-Huygens mission to Saturn and its moon Titan has shown us images of the strangest world we’ve ever seen, with methane lakes oozing up against shores encrusted with organic material under the drizzle of a methane rain.

This has all made me look again at the arguments put forward for why humans should go into space. And I’ve yet to find one that convinces me of its value, at this stage in our technological evolution.

One of the first arguments we hear is that there are technological spinoffs. We need to be cautious about this from the outset, because if you put a huge amount of money into developing any new technology, you’re bound to get some useful things from it. Of course, it is probably impossible to quantify, and perhaps rather meaningless to ask, what we would have found if we had directed even a small fraction of the money spent on human spaceflight directly into research on the sort of products it has spun off; but the fact remains that if you want a new kind of miniature heart pump or a better alloy for making golf clubs or better thermal insulation – if you really decide that you need these things badly – then sending people into space is a peculiar way of going about it. Whatever you want to say about the ragbag of products that have had some input from human spaceflight technology, I don’t think you can call them cost-effective. We’ve also got to take care to distinguish the spinoffs of human spaceflight from those that have come from unmanned missions.

What’s more, the spinoff argument has been routinely distorted. Ask many people what the major spinoffs from spaceflight are, and they will say ‘Teflon’. So let me tell you: DuPont’s major Teflon plant in Virginia was producing a million pounds of it a year in 1950, and Teflon cookware was in the stores when Yuri Gagarin orbited the earth. Then people might say ‘Velcro’ – no, invented in Switzerland in 1941. Or if they’re American, they might cite the instant fruit drink Tang, which NASA simply bought off the supermarket shelf for their astronauts. When the head of NASA, Mike Griffin, referred to spinoffs in a recent speech defending human spaceflight, the first examples he reached for were these three – even though he then admitted that these didn’t come from the space program at all! You have to wonder why these spinoff myths have been allowed to persist for so long – was there really nothing better to replace them?

Then there’s the argument that you can do great science in space. Here again it is not too strong to say that some advocates routinely peddle false claims. Yes, you can do some neat experiments in space. For example, you can look at the fine details of how crystals grow, undisturbed by the convection currents that stir things up under gravity. And that also means you can grow more perfect crystals. Fine – but have we truly benefited from it, beyond clearing up a few small questions about the basic science of crystal growth? One common claim is that these improved crystals, when made from biomolecules, can offer up a more accurate picture of where all the atoms sit, so that we can design better drugs to interact with them. But I am not aware of any truly significant advance in drug development that has relied in any vital way on crystals grown in space. If I’ve overlooked something, I’d be happy to know of it, although you can’t always rely on what you read to make that judgement. In 1999, for example, it was claimed that research on an anti-flu drug had made vital use of protein crystals grown in a NASA project on board a space shuttle. NASA issued a press release with the headline ‘NASA develops flu drugs in space’. To which one of the people involved in the study replied by saying the following: ‘the crystals used in this project were grown here on Earth. One grown on Mir [the Russian space station, and nothing to do with NASA] was used in the initial stages, but it was not significantly better than the Earth-grown crystals.’

I’m confident of this much: if you ask protein crystallographers which technology has transformed their ability to determine crystal structures with great precision, it won’t cross their minds to mention microgravity. They will almost certainly cite the advent of high-intensity synchrotron X-ray sources here on Earth. Crystals grown in space are different, we’re told. Yes, American physicist Robert Park has replied, they are: ‘They cost more. Three orders of magnitude more.’

What we do learn in space that we can’t easily learn on Earth is the effect of low or zero gravity on human physiology. That’s often cited as a key scientific motivation for space stations. But wait a minute. Isn’t there a bit of circularity in the argument that the reason to put people in space is to find out what happens to them when you put them there?

One of the favourite arguments for human space exploration, particularly of the moon and Mars, is that only humans can truly go exploring. Only we can make expert judgements in an instant based on the blend of logic and intuition that one can’t program into robots. Well, there’s probably some truth in that, but it doesn’t mean that the humans have to physically be there to do it. Remote surgery has demonstrated countless times now that humans can use their skill and judgement in real time to guide robotics. NASA researchers have been calling the shots all along the way with the Mars rovers. This pairing of human intelligence with remote, robust robotics is now becoming recognized as the obvious way to explore extreme environments on Earth, and it surely applies in space too. It’s been estimated that, compared with unmanned missions, the safety requirements for human exploration push up launch costs by at least a factor of ten. We still lose a fair number of unmanned missions, but we can afford to, both in financial and in human terms. Besides, it’s easy to imagine ways in which robots can in fact be far more versatile explorers than humans, for example by deploying swarms of miniature robots to survey large areas. And in view of the current rate of advance in robotics and computer intelligence, who knows what will become feasible within the kind of timescale inevitably needed to even contemplate a human mission to Mars. I accept that even in 50 years’ time there may well be things humans could do on Mars that robots cannot; but I don’t think it is at all clear that those differences will in themselves be so profound as to merit the immense extra cost, effort and risk involved in putting humans there.

And now let’s come to what might soon be called the Hawking justification for human space exploration: we ‘need’ another world if we’re going to survive as a species. At a recent discussion on human exploration, NASA astronaut and former chief scientist John Grunsfeld put it this way: ‘single-planet species don’t survive.’ He admitted that he couldn’t prove it, but this is one of the most unscientific things I’ve heard said about human space exploration. How do you even begin to formulate that opinion? I have an equally unscientific, but to my mind slightly more plausible suggestion: ‘species incapable of living on a single, supremely habitable planet don’t survive.’

Quite aside from these wild speculations, one wonders how some scientists can be quite so blind to what our local planetary environment is like. They seem ready to project visions of Earth onto any other nearby world, just as Edgar Rice Burroughs did in his Mars novels. If you’ve ever flown across Siberia en route to the Far East, you know what it is like down there: there’s not a sign of human habitation for miles upon miles. Humans are incredibly adaptable to harsh environments, but there are places on Earth where we just can’t survive unaided. Well, let me tell you: compared with the Moon and Mars, Siberia is like Bognor Regis. Humans will not live autonomously here while any of us is alive, nor our children. It may be that one day we can run a moonbase, much as we have run space stations. But if the Earth goes belly up, the Moon and Mars will not save us, and to suggest otherwise is fantasy that borders on the irresponsible.

I was once offered an interesting justification for human space exploration by American planetary scientist Brian Enke. In response to a critique of mine, he said this:
‘I can’t think of a better way to devastate the space science budget in future years than to kill the goose that lays the golden eggs, the manned space program. We would destroy our greatest justification and base of support in the beltway. Why should Uncle Sam fund space science at its current levels if it gives up on manned space exploration? Our funding depends upon a tenuous mindset - a vision of a progressive future that leads somewhere.’

In other words, we scientists may not be terribly interested in human spaceflight, but it’s what the public loves, and we can’t expect their support if we take that away.

Now, I have some sympathy with this; I can see what Brian means. But I can’t see how a human space program could be honestly justified on these grounds. Scientists surely have a responsibility to explain clearly to the public what they think they can achieve, and why they regard it as worth achieving. The moment we begin to offer false promises or create cosmetic goals, we are in deep trouble. Is there any other area of science in which we divert huge resources to placating public opinion, and even if there were, should we let that happen? In any event, human spaceflight is so hideously expensive that it’s not clear, once we have indulged this act of subterfuge, that we will have much money left to do the real science anyway. That is becoming very evidently an issue for NASA now, with the diversion of funds to fulfil George Bush’s grandiose promise of a human return to the moon by 2020, not to mention the persistent vision of a manned mission to Mars. If we give the ‘beltway’ what they want (or what we think they want), will there be anything left in the pot?

In fact, the more I think, in the light of history, about this notion of assuaging the public demand for ‘vision’, the more unsettling it becomes. Let’s put it this way. In the early 1960s, your lover says ‘Why are you a good-for-nothing layabout? Just look at what the guy next door is building – why can’t you do that?’ And so you say, ‘All right my dear, I’ll build you a rocket to take us to the moon.’ Your lover brightens up instantly, saying ‘Hey, that’s fantastic. I love you after all.’ And so you get to work, and before long your lover is saying ‘Why are you spending all this damned time and money on a space rocket?’ But you say, ‘Trust me, you’ll love it.’ The grumbling doesn’t stop, but you do it, and you go to the moon, and your lover says ‘Honey, you really are fabulous. I’ll love you forever.’ Two years later, the complaining has started again: ‘So you went to the moon. Big deal. Well, you can stop now, I’m not impressed any more.’ So you stop and go back to tinkering in your garage.

The years go by, and suddenly it’s the 1990s, and your lover is discontented again. ‘What have you ever achieved?’ and so on. ‘Oh, but I took us to the moon’, you say. ‘Big deal.’ ‘Well, you could go there again.’ ‘Hmm…’ ‘All right’, you say, exasperated, ‘look, we’ll go the moon again and then to Mars.’ ‘Oh honey, that’s so wonderful, if you do that I’ll love you forever.’ And what’s this? You believe it! You really believe that two years after you’ve been to Mars, they won’t be saying ‘Oh, Mars! Get a life. What else can you do?’ What a sucker. And indeed, what else will you do? Where will you go after that, to keep them happy for a few years longer?

We’re told that space science inspires young people to become scientists. I think this is true. But how do we know that they might not be equally motivated by scientific and technological achievements on Earth? Has anyone ever tried to answer that question? Likewise, how do we compare the motivation that comes from putting people into space with that from the Mars rovers or the Huygens mission to Titan? How would young people feel about being one of the scientists who made these things possible and who were the first to see the images they obtained? Is the allure of astronautics really so much more persuasive than anything else science has to offer young people? Do we know that it is really so uniquely motivating? I don’t believe that has ever been truly put to the test.

I mentioned earlier some remarks by NASA’s head Mike Griffin about human spaceflight. These were made in the context of a speech last year about the so-called ‘real’ reasons we send people into space. Sure, he said, we can justify doing this in hard-nosed cost-benefit terms, by talking about spinoffs, importance for national security, scientific discovery and so on. Now, as I’ve said, I think all those justifications can in fact be questioned, but in any case Griffin argued that they were merely the ‘acceptable’ reasons for space exploration, the kind of arguments used in public policy making. But who, outside of those circles, talks and thinks like that, he asked. The ‘real’ reasons why humans try to fly the Atlantic and climb Everest, he said, have nothing to do with such issues; they are, in Griffin’s words, ‘intuitive and compelling but not immediately logical’, and are summed up in George Mallory’s famous phrase about why we go up mountains: ‘Because it is there’. We want to excel, we want to leave something for future generations. The real reasons, Griffin said, are old-fashioned, they are all about the American pioneer spirit.

This is what the beltway wants to hear! That’s the Columbus ideal! Yes, the real reason many people, in the US at least, will confess to an enthusiasm for human spaceflight is that it speaks of the boldness and vision that has allowed humanity to achieve wonderful things. Part of this is mere hubris – the idea that we’ll have not ‘really’ been to Mars until we’ve stamped our big, dirty feet on the place (and planted our national flag). But part is understandable and valid: science does need vision and ambition. But in terms of space travel, this trades on the illusion that space is just the next frontier, like Antarctica but a bit further away. Well, it’s not. Earth is an oasis in a desert vaster than we can imagine. I can accept the Moon as a valid and clearly viable target, and we’ve been there. I do think that one day humans will go to Mars, and I’m not unhappy about that ultimate prospect, though I see no purpose in trying to do it with our current, fumbling technologies. But what then? Space does not scale like Earth: it has dimensions in time and space that do not fit with our own. Space is not the Wild West; it is far, far stranger and harder than that.

Actually, invoking the Columbus spirit is apt, because of course Columbus’s voyage was essentially a commercial one. And this, it seems, is the direction in which space travel is now going. In 2004 a privately financed spaceplane called SpaceShipOne won the Ansari X Prize, an award of US$10 million offered for the first non-government organization to launch a reusable manned spacecraft into space twice within two weeks. SpaceShipOne was designed by aerospace engineer Burt Rutan, financed by Microsoft co-founder Paul Allen. Rutan is now developing the space vehicle that Richard Branson plans to use for his Virgin Galactic business, which will offer the first commercial space travel. The plan is that Rutan’s SpaceShipTwo will take space tourists 100 kilometres up into suborbital space at a cost of around $200,000 each. Several other companies are planning similar schemes, and space tourism looks set to happen in one way or another. Part of me deplores this notion of space as a playground for the rich. But part of me thinks that perhaps this is how human spaceflight really ought to be done, if we must do it at all: let’s admit its frivolity, marvel at the inventiveness that private enterprise can engender, and let the wasted money come from the pockets of those who want it.

I must confess that I couldn’t quite believe the pathos in one particular phrase from Mike Griffin’s speech: ‘Who can watch people assembling the greatest engineering project in the history of mankind – the International Space Station – and not wonder at the ability of people to conceive and to execute that project?’ I’m hoping Griffin doesn’t truly believe this, but I fear he does. I think most scientists would put it a little differently, something like this: ‘Who can watch people assembling the most misconceived and pointless engineering project in the history of mankind – the International Space Station – and not wonder at the ability of people to burn dollars?’ Scientists disagree about a lot of things, but there’s one hypothesis that will bring near-unanimity: the International Space Station is a waste of space.

Ronald Reagan told the United States in 1984 that the space station would take six years to build and would cost $8 billion. Sixteen years and tens of billions of dollars later, NASA enlisted the help of 15 other nations and promised that the station would be complete by 2005. The latest NASA plans say it will be finished by the end of this decade. And it had better be, because in 2010 the shuttles will be decommissioned.

It is easy to mock the ISS, with its golf-playing astronauts, its Pizza Hut deliveries, its drunken astronauts and countless malfunctions. But you have to ask yourself: why is it so easy to mock it? Perhaps because it really is risible?

Robert Park, the physicist at the University of Maryland who I mentioned earlier and who has consistently been one of the sanest voices on space exploration, summed this up very recently in a remark with which I want to leave you. He said: ‘There is a bold, adventurous NASA that explores the universe. That NASA had a magnificent week. Having traveled 423 million miles since leaving Earth, the Phoenix Mars Lander soft-landed in the Martian arctic. Its eight-foot backhoe will dig into the permafrost subsoil to see if liquid water exists. There is another NASA that goes in circles on the edge of space. That NASA is having a problem with the toilet on the ISS. I need not go into detail to explain what happens when a toilet backs up in zero gravity - it defines ugly.’

Which vision of space exploration would you rather have?