Friday, April 20, 2007

Physicists start saying farewell to reality
Quantum mechanics just got even stranger
[This is my pre-edited story for Nature News on a paper published this week, which even this reserved Englishman must acknowledge to be deeply cool.]

There’s only one way to describe the experiment performed by physicist Anton Zeilinger and his colleagues: it’s unreal, dude.

Measuring the quantum properties of pairs of light particles (photons) pumped out by a laser has convinced Zeilinger that “we have to give up the idea of realism to a far greater extent than most physicists believe today.”

By realism, what he means is the idea that objects have specific features and properties: that a ball is red, that a book contains the works of Shakespeare, that custard tastes of vanilla.

For everyday objects like these, realism isn’t a problem. But for objects governed by the laws of quantum mechanics, such as photons or subatomic particles, it may make no sense to think of them as having well defined characteristics. Instead, what we see may depend on how we look.

Realism in this sense has been under threat ever since the advent of quantum mechanics in the early twentieth century. This seemed to show that, in the quantum world, objects are defined only fuzzily, so that all we can do is to assign probabilities to their possessing particular characteristics.

Albert Einstein, one of the chief architects of quantum theory, could not believe that the world was really so indeterminate. He supposed that there was a deeper level of reality yet to be uncovered: so-called ‘hidden variables’ that specified any object’s properties precisely.

Allied to this assault on reality was the apparent prediction of what Einstein called ‘spooky action at a distance’: disturbing one particle could instantaneously determine the properties of another particle, no matter how far away it is. Such interdependent particles are said to be entangled, and this action at a distance would violate the principle of locality: the idea that only local events govern local behaviour.

In the 1960s the Irish physicist John Bell showed how to put locality and realism to the test. He deduced that, if both hold, a certain combination of experimentally measurable correlations between entangled quantum particles such as photons must stay within a fixed bound, now known as Bell’s inequality. The experiments were carried out in the ensuing two decades, and they showed that Bell’s inequality is violated.
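
To see how such a test works, here is a minimal numerical sketch (my illustration, not the experiments' actual analysis), using the standard CHSH form of Bell's inequality and the textbook quantum prediction for polarization correlations:

```python
import numpy as np

# Quantum prediction for the polarization correlation of an entangled photon
# pair measured with analysers at angles a and b (standard textbook formula;
# the factor of 2 appears because polarization repeats every 180 degrees).
def E(a, b):
    return -np.cos(2 * (a - b))

# CHSH combination: any local-realistic (hidden-variable) description
# requires |S| <= 2, while quantum mechanics can reach 2*sqrt(2).
a, a2 = np.radians(0), np.radians(45)        # Alice's two analyser settings
b, b2 = np.radians(22.5), np.radians(67.5)   # Bob's two analyser settings

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(f"|S| = {abs(S):.3f}  (local realism demands |S| <= 2)")
```

The printed value, about 2.83, exceeds the local-realist bound of 2; that excess is precisely what the experiments measured.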

This means that either realism or locality, or both, fails to apply in the quantum world. But which of these cases is it? That’s what Zeilinger, based at the University of Vienna, and his colleagues have set out to test [1].

They have devised another inequality, comparable to Bell’s, that should hold if quantum mechanics is non-local but ‘realistic’. “It’s known that you can save realism if you kick out locality”, Zeilinger says.

The experiment involves making pairs of entangled photons and measuring a quantum property of each of them called the polarization. But whereas the tests of Bell’s inequality measured the so-called ‘linear’ polarization – crudely, whether the photons’ electromagnetic fields oscillate along one axis or another, say horizontal or vertical – Zeilinger’s experiment looks at a different sort of polarization, called elliptical polarization, for one of the photons.

If the quantum world could be described by the class of non-local realistic models they consider, quantities derived from these polarization measurements should obey the new inequality. But Zeilinger and colleagues found that it was violated.

This doesn’t rule out all possible non-local realistic models, but it does exclude an important subset of them. Specifically, it shows that if you have a group of photons all with independent polarizations, then you can’t ascribe specific polarizations to each. It’s rather like saying that in a car park it is meaningless to imagine that particular cars are blue, white or silver.

If the quantum world is not realistic in this sense, then how does it behave? Zeilinger says that some of the alternative non-realist possibilities are truly weird. For example, it may make no sense to imagine ‘counterfactual definiteness’: what would happen if we’d made a different measurement. “We do this all the time in daily life”, says Zeilinger – for example, imagining what would happen if we’d tried to cross the road when that truck was coming.

Or we might need to allow the possibility of present actions affecting the past, as though choosing to read a letter or not affects what it says.

Zeilinger hopes his work will stimulate others to test such possibilities. “I’m sure our paper is not the end of the road”, he says. “But we have a little more evidence that the world is really strange.”

Reference
1. Gröblacher, S. et al. Nature 446, 871-875 (2007).

Tuesday, April 17, 2007


Tales of the expected

[This is the pre-edited version of my latest Muse article for Nature online news.]

A recent claim of water on an extrasolar planet raises broader questions about how science news is reported.

“Scientists discover just what they expected” is not, for obvious reasons, a headline you see very often. But it could serve for probably a good half of the stories reported in the public media, and would certainly have been apt for the recent reports of water on a planet outside our solar system.

The story is this: astronomer Travis Barman of the Lowell Observatory in Flagstaff, Arizona, has claimed to find a fingerprint of water vapour in the light from a Sun-like star 150 light years away as it passes through the atmosphere of the star’s planet HD 209458b [T. Barman, Astrophys. J. in press (2007); see the paper here].

The claim is tentative and may be premature. But more to the point, at face value it confirms precisely what was expected for HD 209458b. Earlier observations of this Jupiter-sized planet had failed to see signs of water – but if it were truly absent, something would be seriously wrong with our understanding of planetary formation.

The potential interest of the story is that water is widely considered by planetary scientists to be a prerequisite for life. But if it’s necessary, it is almost certainly not sufficient. There is water on most of the other planets in our solar system, as well as on several of their moons and indeed in the atmosphere of the Sun itself. But as yet there is no sign of life on any of them.

The most significant rider is that to support life as we know it, water must be in the liquid state, not ice or vapour. That may be the case on Jupiter’s moons Europa and Callisto, as it surely once was (and may still be, sporadically) on Mars. But in fact we don’t even know for sure that water is a necessary condition for life: there is no reason to think, apart from our unique experience of terrestrial life, that other liquid solvents could not sustain living systems.

All of this makes Barman’s discovery – which he reported with such impeccable restraint that it could easily have gone unnoticed – intriguing, but very modestly so. Yet it has been presented as revelatory. “There may be water beyond our solar system after all”, exclaimed the New York Times. “First sign of water found on an alien world”, said New Scientist (nice to know that, in defiance of interplanetary xenophobia, Martians are no longer aliens).

As science writers are dismayingly prone to saying sniffily “oh, we knew that already”, I’m hesitant to make too much of this. It’s tricky to maintain a perspective on science stories without killing their excitement. But the plain fact is that there is water in the universe almost everywhere we look – certainly, it is a major component of the vast molecular clouds from which stars and planets condense.

And so it should be, given that its component atoms hydrogen and oxygen are respectively the most abundant and the third most common in the cosmos. Relatively speaking, ours is a ‘wet’ universe (though yes, liquid water is perhaps rather rare).

The truth is that scientists work awfully hard to verify what lazier types might be happy to take as proven. Few doubted that Arthur Eddington would see, in his observations of a solar eclipse in 1919, the bending of light predicted by Einstein’s theory of general relativity. But it would seem churlish in the extreme to begrudge the headlines that discovery generated.

Similarly, it would be unfair to suggest that we should greet the inevitable sighting of the Higgs boson (the so-called ‘God’ particle thought to give other particles their mass) with a shrug of the shoulders, once it turns up at the billion-dollar particle accelerator constructed at CERN in Geneva.

These painstaking experiments are conducted not so that their ‘success’ produces startling front-page news but because they test how well, or how poorly, we understand the universe. Both relativity and quantum mechanics emerged partly out of a failure to find the expected.

In the end, the interest of science news so often resides not in discovery but in context: not in what the experiment found, but in why we looked. Barman’s result, if true, tells us nothing we did not know before, except that we did not know it. Which is why it is still worth knowing.

Wednesday, April 04, 2007


Violin makers miss the best cuts
[This is the pre-edited version of my latest article for Nature’s online news. For more on the subject, I recommend Ulrike Wegst’s article “Wood for Sound” in the American Journal of Botany 93, 1439 (2006).]

Traditional techniques fail to select wood for its sound


Despite their reputation as master craftspeople, violin makers don’t choose the best materials. According to research by a team based in Austria, they tend to pick their wood more for its looks than for its acoustic qualities.

Christoph Buksnowitz of the University of Natural Resources and Applied Life Sciences in Vienna and his coworkers tested wood selected by renowned violin makers (luthiers) to see how beneficial it was to the violin’s sound. They found that the luthiers were generally unable to identify the woods that performed best in laboratory acoustic tests [C. Buksnowitz et al. J. Acoust. Soc. Am. 121, 2384 - 2395 (2007)].

That was admittedly a tall order, since the luthiers had to make their selections just by visual and tactile inspection, without measuring instruments. But this is normal practice in the trade: the instrument-makers tend to depend on rules of thumb and subjective impressions when deciding which pieces of wood to use. “Some violin makers develop their instruments in very high-tech ways, but most seem to go by design criteria optimized over centuries of trial and error”, says materials scientist Ulrike Wegst of the Max Planck Institute for Metals Research in Stuttgart, Germany.

Selecting wood for musical instruments has been made a fine art over the centuries. For a violin, different types of wood are traditionally employed for the different parts of the instrument: ebony and rosewood for the fingerboard, maple for the bridge, and spruce for the soundboard of the body. The latter amplifies the resonance of the strings, and accounts for much of an instrument’s tonal qualities.

Buksnowitz and colleagues selected 84 samples of instrument-quality Norway spruce, one of the favourite woods for violin soundboards. They presented these to 14 top Austrian violin makers in the form of boards measuring 40 by 15 cm. The luthiers were asked to grade the woods according to acoustics, appearance, and overall suitability for making violins.

The luthiers had to rely on their senses and experience, using traditional techniques such as tapping the wood to assess its sound; the researchers then conducted detailed laboratory tests of the strength, hardness and acoustic properties of each sample.

Comparing the professional and scientific ratings, the researchers found that there was no relation between the gradings of the instrument-makers and the properties that would give the wood a good sound. Even testing the wood’s acoustics by knocking is a poor guide when the wood is still in the form of a plank.
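
For the record, this sort of comparison is typically quantified with a rank correlation. Here is a minimal sketch with invented gradings (not the study's data or its actual statistics):

```python
from scipy.stats import spearmanr

# Invented example: eight wood samples ranked by a luthier (by look and feel)
# and by laboratory acoustic measurements. These numbers are made up.
luthier_rank = [3, 1, 4, 2, 5, 6, 8, 7]
lab_rank     = [5, 4, 1, 7, 2, 8, 3, 6]

rho, p = spearmanr(luthier_rank, lab_rank)
print(f"Spearman rank correlation: rho = {rho:.2f} (p = {p:.2f})")
# A rho near zero, as the study reported for acoustic quality, means the
# luthiers' grading carries little information about the measured sound.
```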

The assessments, they concluded, were being made primarily on visual characteristics such as colour and grain. That’s not as superficial as it might seem; some important properties, such as density, do correlate with features that can be seen by eye. “Visual qualities can tell us a lot about the performance of a piece of wood”, says Buksnowitz.

He stresses that the inability of violin makers to identify the best wood shouldn’t be seen as a sign of incompetence. “I admire their handiwork and have an honest respect for their skills”, he says. “It is still the talent of the violin maker that creates a master’s violin.”

Indeed, it is a testament to these skills that a luthier can make a first-class instrument from less than perfect wood. They can shape and pare it to meet the customer’s needs, fitting the intrinsic properties of the wood to the taste of the musician. “There are instrument-makers who would say they can build a good instrument from any piece of wood”, Buksnowitz says. “The experienced maker can allow for imperfections in the material and compensate for them”, Wegst agrees.

But Buksnowitz points out that the most highly skilled makers, such as Amati and Stradivari, are not limited by their technique, and so their only hope of making even better instruments is to find better wood.

At the other end of the scale, when violins are mass-produced and little skill enters the process at all, then again the wood could be the determining factor in how good the instrument sounds.

Instrument-makers themselves recognize that there is no general consensus on what is meant by ‘quality’. They agree that they need a more objective way of assessing this, the researchers say. “We want to cooperate with craftsmen to identify the driving factors behind this vague term”, says Buksnowitz.

Wegst agrees that this would be valuable. “As in wine-making, a more systematic approach could make instrument-making more predictable”, she says.

Thursday, March 29, 2007

Prospect - a response

David Whitehouse, once a science reporter for the BBC, has responded to my denunciation of ‘climate sceptics’ in Prospect. Here are his comments – I don’t find them very compelling, but you can make up your own mind:

"Philip Ball veers into inconsistent personal opinion in the global warming debate. He says the latest IPCC report comes as close to blaming humans for global warming as scientists are likely to. True, its summary replaced “likely to be caused by humans” with “very likely”, but that is hardly a great stride towards certainty, especially when deeper in the report is says that it is only “likely” that current global temperatures are the highest they’ve been in the past 1,300 years.
As for “sceptics” saying false and silly things, Ball should look to the alarmist reports about global warming so common in the media. These “climate extremists” are obviously saying false, silly things, as even scientists who adhere to the consensus have begun to notice. And it’s data, not economics, that will be the future battleground. The current period of warming began in 1975, yet the very data the IPCC uses shows that since 2002 there has been no upward trend. If this trend does not re-establish itself with force, and soon, we will shortly be able to judge who has been silliest.”

The first point kind of defeats itself: by implying that the IPCC’s move towards a stronger statement is rather modest, Whitehouse illustrates my point, which is that the IPCC is (rightly) inherently conservative (see my last entry below) and so this is about as committed a position as we could expect to get. If they had jumped ahead of the science and claimed 100% certainty, you can guess who’d be the first to criticize them for it.

Then Whitehouse points out that climate extremists say silly and false things too. Indeed they do. The Royal Society, which Whitehouse has falsely accused of trying to suppress research that casts doubt on anthropogenic climate change, has spent a lot of time and energy criticizing groups who do that, such as Greenpeace. I condemn climate alarmism too. Yes, the Independent has been guilty of that – and is balanced out by the scepticism of the right-wing press, such as the Daily Telegraph. But Whitehouse’s point seems to be essentially that the sceptics’ false and silly statements are justified by those of their opponents. I suspect that philosophers have got a name for this piece of sophistry. Personally, I would rather that everyone tried harder not to say false and silly things.

I don’t know whether Whitehouse’s next comment, about the ‘current warming’ beginning in 1975, is false and/or silly, or just misinformed. But if it’s the latter, that would be surprising for a science journalist. There was a warming trend throughout the 20th century, which was interrupted between 1940 and 1970. It has been well established that this interruption is reproduced in climate models that take account of the changes in atmospheric aerosol levels (caused by human activities): aerosols, which have a cooling influence, temporarily masked the warming. So the warming due to CO2 was continuous for at least a century, but was modified for part of that time by aerosols. The trend since 1975 was thus not the start of anything new. This is not obscure knowledge, and one can only wonder why sceptics continue to ignore it.

As for the comment that the warming has levelled off since 2002: well, the sceptics make a huge deal of how variable the climate system is when they want to imply that the current warming may be just a natural fluctuation, but clearly they like to cherry-pick their variations. They argue that the variability is too great to see a trend reliably over many decades, but now here’s Whitehouse arguing for a ‘trend’ over a few years. Just look at the graphs and tell me whether the period from 2002 to 2006 can possibly be attributed to variability or to a change in trend. Can you judge? As any climatologist will tell you, it is utterly meaningless to judge such things on the basis of a few years. Equally, we can’t attach too much significance, in terms of assessing trends, to the fact that the last Northern Hemisphere winter was the warmest since records began. (Did Whitehouse forget to mention that?) But that fact hardly suggests that we’re starting to see the end of global warming.
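
The statistical point can be made with a toy calculation (synthetic numbers, not real temperature data): superimpose realistic year-to-year noise on a steady warming trend, and the slope fitted over a few years is essentially arbitrary, while the multi-decade slope is robust.

```python
import numpy as np

# Toy series: a steady 0.18 C/decade warming trend plus interannual noise
# of about 0.1 C. The numbers are invented for illustration only.
rng = np.random.default_rng(3)
years = np.arange(1975, 2007)
temps = 0.018 * (years - 1975) + rng.normal(0.0, 0.1, years.size)

def slope_per_decade(y, t):
    return 10 * np.polyfit(y, t, 1)[0]   # least-squares slope, C per decade

print(f"1975-2006 fitted trend: {slope_per_decade(years, temps):+.2f} C/decade")
recent = years >= 2002
print(f"2002-2006 fitted trend: "
      f"{slope_per_decade(years[recent], temps[recent]):+.2f} C/decade")
# Change the random seed and the five-year slope swings wildly, even turning
# negative; the 32-year slope stays close to the true trend every time.
```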

“Who has been silliest” – OK, this is a rhetorical flourish, but writers should pick their rhetoric carefully. If the current consensus on a warming trend generated by human activity proves to be wrong, or counteracted by some unforeseen negative feedback, that will not make the scientists silly. It will mean simply that they formed the best judgement based on the data available. Yes, there are other possible explanations, but at this point none of them looks anywhere near as compelling, or even likely.

My real point is that it would be refreshing if, just once, a climate sceptic came up with an argument that gave me pause and forced me to go and look at the literature and see if it was right. But their arguments are always so easily refuted with information that I can take straight off the very narrow shelves of my knowledge about climate change. That’s the tiresome thing. I suppose this may sound immodest, but truly my intention is just the opposite: if I, as a jobbing science writer, can so readily see why these arguments are wrong or why they omit crucial factors – or at the very least, why the climate community would reject them – then why do these sceptics, all of them smart people, not see this too? I am trying hard to resist the suspicion of intellectual dishonesty; but how much resistance am I expected to sustain?

When it’s right to be reticent

[This is the pre-edited version of my latest article for muse@nature.com]

The caution of climate scientists is commendable even if caution is out of fashion.

Jim Hansen is no stranger to controversy. Ever since the 1980s he has been much more outspoken about the existence and perils of human-induced climate change than the majority of his scientific colleagues. A climate modeller at NASA’s Goddard Institute for Space Studies in New York, Hansen has flawless credentials to speak about climate change – and his readiness to do so has led to accusations of political interference and censorship (see here).

But his views haven’t only ruffled political feathers – they have dismayed other scientists too, who are uncomfortable with what they see as Hansen’s impatience with science’s inherent caution.

So in some ways, Hansen’s latest foray will surprise no one. In a preprint submitted for publication, he claims that “scientific reticence” is seriously underselling the potential danger that climate change poses – specifically, that it “is inhibiting communication of a threat of potentially large sea level rise.” Because disintegration of polar ice sheets is poorly understood, it is very difficult for scientists to make a reliable estimate of the likely future changes in sea level. As a result, Hansen charges, they have put figures on those aspects of sea-level rise they can estimate with some confidence, but have refrained from doing so for this key ingredient of the problem, giving the impression that the probable changes will be much smaller than those Hansen considers likely.

The responsibility for pronouncing on such issues falls primarily on the Intergovernmental Panel on Climate Change (IPCC), which Hansen regards as conservative. This, he admits, contributes to IPCC’s authority and is “probably a necessary characteristic, given that the IPCC document is produced as a consensus among most nations in the world and represents the views of thousands of scientists.” The most recent IPCC report has been characterised as the most strongly worded yet, but its conclusions apparently still required much negotiation and compromise.

And yet Hansen believes that “Given the reticence that IPCC necessarily exhibits, there need to be supplementary mechanisms” for communicating the latest scientific knowledge to the public and policy makers. He calls for a panel of leading scientists to “hear evidence and issue a prompt plain-written report” on the dangers – which clearly he envisages as a much more forceful statement about impending climate catastrophe and the need for immediate action to “get on a fundamentally different energy and greenhouse gas emissions path”.

This is a strange proposal, however. Basically, Hansen is calling on the scientific community to collect their scientific thoughts and then to speak out unscientifically – which is to say, without the caveats and caution that are the stock-in-trade of good science. However, Hansen points out that in fact scientists do this all the time – when they are talking among themselves. He recalls how, challenged by a lawyer acting on behalf of US automobile manufacturers to name a single glaciologist who agreed with his view that ice-sheet break-up would cause sea-level rise of more than a metre by 2100, he could not do so. Even though he had heard plenty of such scientists express deep concerns to this effect in private exchanges, none had said anything definitive in public.

Why wouldn’t they do that, if it’s really what they thought? Hansen posits what he calls a “John Mercer effect”. In 1978 Mercer, a glaciologist at Ohio State University, suggested [1] that anthropogenic global warming could cause the West Antarctic ice sheet to disintegrate and sea level to surge by over 5 m within 50 years. Mercer’s paper was disputed by other scientists, who were generally portrayed as the sober and authoritative counterbalance to Mercer’s “alarmism”.

“It seemed to me”, says Hansen, “that the scientists preaching caution and downplaying the dangers of climate change fared better in receipt of research funding.” This reticence, he suggests, is encouraged and rewarded both professionally and financially.

Hansen says he experienced this himself in the early days of climate-change research. He was one of the first to point out, in a paper coauthored in 1981, that rising levels of atmospheric carbon dioxide could be linked to a warming trend throughout the twentieth century [2]. At that time the trend itself wasn’t so clear – the globe was only just emerging from a three-decade cooling spell, now known to be caused by atmospheric aerosol particles that temporarily outweighed the greenhouse-gas contributions.

But by 1989 Hansen was prepared to state with confidence that we could already see the effects of human-induced greenhouse warming in action. His colleagues felt this was jumping the gun – that it was still too early to rule out natural climate variability.

This history is instructive in the face of common claims from ‘climate sceptics’ that climate scientists play up the threat of global warming in order to secure funding. Anyone who witnessed (as I did) the slow and meticulous process that brought climate scientists from this position in the late 1980s to what is effectively a consensus today that human-induced climate change is almost certainly now evident will recognise the nonsense of the sceptics’ claim. The dogged reluctance to commit to that view in the late 1980s [3] looks rather remarkable now; but it was correct, and the community can regard its restraint with pride.

Yet it also means that Hansen was in a sense right back then. Such retrospective vindication, however, is not in itself justification. He could just as easily have been wrong. His views may have been based on sound intuition, but the science wasn’t yet there to support them.

All the same, Hansen is right to say that “scientific reticence” poses problems. He points out that, because the climate system is nonlinear (and in particular, because there are positive feedbacks to ice-sheet melting), excessive caution could end up sounding the alarm too late. Possibly it already has.

The question is what to do about that. But the real issue here is not that scientists are “reticent” – it is that the public, politicians and leaders are not accustomed to reasoning and debating as scientists do. It is within the very grain of science – Popper’s legacy, of course – that it advances by self-doubt. The contemporary culture, on the other hand (and probably it has never been very different), favours dogmatic, absolute statements, unencumbered with caveats. If they prove to be wrong, no matter – another equally definitive statement will blot out memory of the last one. Thus you can say something such as HIV does not cause AIDS, or there is no such thing as society, and still be taken seriously years later as a commentator on current affairs.

The moment it abandons its caution and claims false certainty, science loses its credibility; indeed, it ceases to be true science. This is not to say that scientists should commit to nothing for fear of being proved wrong. Nor is it by any means a call for scientists to step back from making pronouncements that guide public policy – if anything, they should do more of that. But when they are talking about scientific issues, scientists cannot afford to abandon their (public) reticence. It is as individuals, not as community spokespeople, that they should feel free, as Hansen rightly does, to voice views, intuitions and beliefs that reach beyond the strict confines that science permits.

References
1. Mercer, J. Nature 271, 321-325 (1978).
2. Hansen, J. et al. Science 213, 957-966 (1981).
3. Kerr, R. Science 244, 1041-1043 (1989).

Friday, March 16, 2007

More noise from the markets

Those wacky economic analysts are at it again. Since I enjoy Paul Mason’s cheeky-chappie appearances as the business correspondent on BBC2’s Newsnight, and because I am told he is indeed a nice chap, I don’t wish to cast aspersions. But his article on the world economy in New Statesman last week (12 March, p.16) showed the kind of thing that passes as routine in the world of quotidian economics. “When the world’s most powerful people gathered amid the snows of Davos in late January, there was a tangible warm glow being given off by the economic cycle… Six weeks later, the financial markets are in turmoil and what was first shrugged off as a ‘correction’ is being seriously monitored as a potential crash.”

OK, so the forecasts were wrong again. Big news. And so the ‘cycle’ somehow stopped ‘cycling’ (or, as economists would say, the cycle changed earlier than expected, which their ‘cycles’, uniquely in science, are permitted to do). Big news again. But get this as the ‘explanation’ offered by the head of strategy at the consulting firm Accenture: “People had undervalued risk, assuming that because the economy is benign there’s not going to be volatility.” I love it. These impressive words – “undervaluing risk”, overlooking “volatility” – translate to something simple: “people forgot that the economy fluctuates”. People thought that because things were good, they were going to stay good.

Now, the idea that market traders were unrealistically optimistic is not especially shaming for them. This is just Keynes’ old “animal spirits” at work, as ever they are. But what a weird situation it causes when analysts are called upon to explain the consequences. These savants, whose salaries would make your eyes water, sagely pronounce, “ah yes, well the market did something unexpected because traders guessed wrong. They imagined that the market was not going to fluctuate, though it always does.” Ah, thanks for clearing that one up.

At root, this transmutation of the bleeding obvious into lucrative analysis stems yet again from the fact that market agents behave in a way that we all recognize as thoroughly human and natural, but which is not permitted in traditional economics. So to those who monitor and interpret the economy, it looks like wisdom of the highest order.

Tuesday, March 13, 2007


Can you tell true art from fake?

Well, find out. Mikhail Simkin at UCLA (whose work on 'false citations' in the scientific literature is highly revealing about the laxity that exists in checking sources) has put a test online in which you are invited to distinguish between some paintings by Modernist 'greats' such as Klee, Mondrian and Malevich, and "ridiculous fakes" that Simkin has mocked up. So far, over fifty thousand people have taken the test, and Simkin has now revealed the results. Surprise: on average, people identify about 8 out of 12 pictures correctly. In other words, they do better than random guessing, but not by much.

What does that mean? The cynic would say that it shows that 'modern' art is mostly a matter of the Emperor's new clothes: detach the great names and we often can't tell if we're looking at genius or doodling. That, of course, is a very old story.

But it would also be a simplistic one. Actually, I was surprised by the choices Simkin made for the test. Several of the images are obviously computer-generated. And most if not all of the true 'great works' would be recognized by anyone with a reasonable knowledge of 20th century art. I got one wrong, suspecting a 'fake' to be 'real'. But this didn't mean I was particularly impressed by the fake. Nor am I all that impressed by some of the 'reals'.

And it seems Simkin has a curiously old-fashioned notion of 'modern art', appearing to equate it with Modernist painting that is mostly almost a century old. Why not try the same thing with, I don't know, Hirst or Ofili or Gary Hume (if you insist on making art = painting in the first place)? You might find the same results, but at least they'd feel a bit more relevant.

Besides, are you really judging a Malevich by looking at a small and rather low-quality image on a computer screen?

The key point, though, is that underlying Simkin's test seems to be the notion that 'real art' would be instantly identifiable because it would show great skill, which would somehow render it timeless and universal. I'm not going to rehearse the case against that reactionary position, except to say that the galleries are full of paintings from previous ages rendered with consummate skill that seem to us now to be dull, irrelevant, pointless and conservative (which isn't to say that they are – although they might be – but only that times have moved on). Besides, the quality of art isn't something that is decided by democratic vote. Sorry about that seemingly elitist notion, but it has to be true. If it wasn't, artists might as well give up and abandon the stage to people who paint pretty watercolours.

It is true that the pomposity of the art world needs pricking, and often. Contemporary art often now seems to be awarded greatness by media cravenness, self-promotion, and the vagaries of the Matthew principle (the rich get richer). There's a great deal of silliness about, mostly thanks to the sad infatuation with celebrity that Western culture is passing through (well, I'm an optimist). But replacing critical judgement with vox pop ballots seems likely to merely pander to that, not to challenge it.

All the same, Simkin's paper is great fun to read. I only hope it triggers discussion rather than sneering.

Friday, March 09, 2007

If addiction's the problem, prohibition's not the answer

[This is the pre-edited version of my latest muse article for Nature's online news.]

China's ban on new internet cafés raises questions about its online culture

The decision by China to freeze the opening of any new Internet cafés for a year from this July has inevitably been interpreted as a further attempt by the Chinese authorities to control and censor access to politically sensitive information.

China defends the ban on the grounds of protecting susceptible teenagers from becoming addicted to games, chatrooms and online porn. Yu Wen, deputy of the National People's Congress, has been quoted as saying "It is common to see students from primary and middle schools lingering in internet bars overnight, puffing on cigarettes and engrossed in online games."

The restriction on internet cafés will certainly assist the Chinese government's programme of web censorship (although there are already more than 110,000 of these places in China). But to suggest that the move is merely a cynical attempt to dress up state interference as welfare would be to overlook another reason why it should be challenged.

It’s quite possible that the government is genuinely alarmed at the fact that, according to a recent report by the Chinese Academy of Sciences, teenagers in China are becoming addicted to the internet younger and in greater numbers than in other countries. The report claimed that 13 percent of users played or chatted online for more than 38 hours a week – longer than the typical working week of European adults.

Sure, you can try to address this situation (which is disturbing if the figures are right) by limiting users' access to their drug. But anyone involved in treating addictive behaviour knows that you'll solve little unless you get to the cause.

Why is the cyberworld so attractive to Chinese teenagers? It doesn't take much insight to see a link between repression in daily life and the liberation (partly but not entirely illusory) offered online.

Yet it would be simplistic to ascribe the desire to escape online to the political oppression that certainly exists in Chinese society. After all, there are more oppressive places in the world. Indeed, it is arguably the liberalization of Chinese society that adds to the factors contributing to its internet habit.

There is in fact a nexus of such factors that might be expected to prime young people in China for addiction to the net: among them, the increase in wealth and leisure and the emergence of a middle class, the replacement of a demonized West with a glamorized one (both are dangerous), the conservatism and expectations of a strongly filial tradition, the loneliness of a generation lacking siblings because of China's one-child policy, and the allure and status of new technology in a rapidly modernizing society.

Stephanie Wang, a specialist on Chinese internet regulation at the Berkman Center for Internet and Society at Harvard Law School, suggests that the problems of internet use by young people may also simply be more visible in China than in the West, where it tends to happen behind the closed doors of teenagers’ bedrooms rather than in public cybercafés. Wang adds that the online demographic in Asia is more biased towards young people, and probably more male-dominated.

The Chinese government hardly helps its cause by justifying internet control with puritanical rhetoric: talk of "information purifiers", "online poison" and the need for a "healthy online culture" all too readily suggests the prurient mixture of horror and fascination that characterizes the attitude of many repressive regimes to more liberal cultures. But let's not forget that much the same was once said in the West about the corrupting influence of rock'n'roll.

And anyway, surely youth has always needed an addiction. In a culture where alcohol abuse is rare, drug use carries terrifyingly draconian penalties, sexuality is repressed and pop culture is sanitized, getting your kicks online might seem your only option. As teenage vices go, it is pretty mild.

As with all new technologies, from television to cell phones, the antisocial behaviour they can elicit is all too easily blamed on the technology itself. That's far safer than examining the latent social traits that the technology has made apparent. In this regard, China is perhaps only reacting as other cultures have done previously.

So rather than adding more bricks to its Great Firewall, or fretting about youngsters chain-smoking their way through the mean streets of Grand Theft Auto, China might benefit from thinking about why it has the addiction-prone youth cyberculture that it claims to have.

Wednesday, February 28, 2007



Roll on the robots


[This is the pre-edited version of my Materials Witness column for the April issue of Nature Materials.]

Spirit, the redoubtable Martian rover, has spent the past year driving on just five of its six wheels. In February the rover’s handling team said it had perfected the art of manoeuvring with one wheel missing, but the malfunction raises the question of whether there are better ways for robots to get around. Walking robots are becoming more efficient thanks to a better understanding of the ‘passive’ mechanics of human locomotion; but a single tumble might put such a robot out of action permanently in remote or extraterrestrial environments.

So a recent survey of rolling robots provided by Rhodri Armour and Julian Vincent of the University of Bath (J. Bionic Eng. 3, 195-208; 2006) is timely. They point out that spherical robots have several advantages: for example, they’ll never ‘fall over’, their mechanics can be enclosed entirely in a protective hard shell, and they can move in any direction and cope with collisions and with uneven or soft surfaces.

But how do you make a sphere roll from the inside? Several answers have been explored in designs for spherical robots. One developed at the Politecnico di Bari in Italy aims to use an ingenious internal driver, basically a sprung rod with wheels at each end. It’s a tricky design to master, and so far only a cylindrical prototype exists. Other designs include spheres with ‘cars’ inside (the treadwheel principle), pairs of hemispherical wheels, moving internal ballast masses – the Roball made at the Université de Sherbrooke in Québec, and the Rotundus of Uppsala University in Sweden – and gyroscopic rollers like Carnegie Mellon’s Gyrover.

But Armour and Vincent suggest that one of the best designs is that in which masses inside the sphere can be moved independently along radial arms to shift the centre of gravity in any direction. The Spherobot under development at Michigan State University, and the August robot designed in Iran use this method, as does the wheel-shaped robot made at Ritsumeikan University in Kyoto, which is a deformable rubber hoop with ‘smart’ spokes that can crawl up a shallow incline and even jump into the air.
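
The principle is simple mechanics. As a back-of-envelope sketch (with invented numbers, not the specifications of any of these robots): swing an internal ballast mass away from the vertical, and gravity supplies a torque about the sphere's centre that drives the roll.

```python
import math

# Invented example: a 1 kg ballast on a radial arm inside a spherical shell.
m_ballast = 1.0   # kg
arm = 0.10        # m, ballast offset from the sphere's centre
g = 9.81          # m/s^2

# Torque about the sphere's centre when the ballast arm is tilted by
# theta from the vertical: tau = m * g * arm * sin(theta).
for theta_deg in (10, 30, 60, 90):
    tau = m_ballast * g * arm * math.sin(math.radians(theta_deg))
    print(f"arm at {theta_deg:2d} degrees: torque = {tau:.2f} N*m")
```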

Although rolling robots clearly have a lot going for them, it might give us pause for thought that nature seems very rarely to employ rolling. There are a few organisms that make ‘intentional’ use of passive rolling, being able to adopt spherical shapes that are blown by the wind or carried along by gravity: tumbleweed is perhaps the most familiar example, but the Namib golden wheel spider cartwheels down sand dunes to escape wasps, and woodlice, when attacked, curl into balls and roll away. Active rollers are rarer still: Armour and Vincent can identify only the caterpillar of the Mother-of-Pearl moth and a species of shrimp, both of which perform somersaults.

Is this nature’s way of telling us that rolling has limited value for motion? That might be jumping to conclusions; after all, wheels are equally scarce in nature, but they serve engineering splendidly.

Tuesday, February 27, 2007

Science on Stage: two views

Carl Djerassi has struck back at Kirsten Shepherd-Barr’s rather stinging critique of his plays in a review of Kirsten’s book Science on Stage in Physics Today. I think his comments are a little unfair; Carl has his own agenda of using theatre to smuggle some science into culture, which is a defensible aim but doesn’t acknowledge that the first question must be: is this good theatre? Or as Kirsten asks, does it have ‘theatricality’? Here is my own take on her book, published in the July issue of Nature Physics last year.

Science on Stage: From Doctor Faustus to Copenhagen
Kirsten Shepherd-Barr
Princeton University Press, 2006
Cloth $29.95
ISBN 0-691-12150-8
264 pages

Over the past decade or so, science has been on stage as never before. Michael Frayn’s Copenhagen (1998), which dramatized the wartime meeting between Werner Heisenberg and Niels Bohr, is perhaps the most celebrated example; but Tom Stoppard had been exploring scientific themes for some time in Hapgood (1988) and Arcadia (1993), while Margaret Edson’s Wit (1998) and David Auburn’s Proof (2001) were both Pulitzer prize-winning Broadway hits, the latter now also a Hollywood movie. There are plenty of other examples.

While this ‘culturization’ of science has largely been welcomed by scientists – it certainly suggests that theatre has a more sophisticated relationship with science than that typified by the ‘mad scientist’ of cinematic tradition – there has been a curious lack of insightful discussion of the trend. Faced with ‘difficult’ scientific concepts, theatre critics tend to seek recourse in bland clichés about ‘mind-boggling ideas’. Scientists, meanwhile, all too often betray an artistic conservatism by revealing that their idea of theatre is an entertaining night out watching a bunch of actors behind a proscenium arch.

Thank goodness, then, for Kirsten Shepherd-Barr’s book. It represents the first sustained, serious attempt that I have seen to engage with the questions posed by science in theatre. In particular, while there has been plenty of vague talk about pedagogical opportunities, about Snow’s two cultures and about whether the ‘facts are right’, Shepherd-Barr explores what matters most about ‘science plays’: how they work (or not) as theatre.

Despite the book’s subtitle, it does not really try to offer a comprehensive historical account of science in theatre. All the same, one can hardly approach the topic without acknowledging several landmark plays of the past that have had a strong scientific content. It is arguably stretching the point to include Marlowe’s Dr Faustus (c.1594), despite its alchemical content, since this retelling of a popular folk legend is largely a morality tale which can be understood fully only in the context of its times. But while that is equally true of Ben Jonson’s The Alchemist (c.1610), both plays are important in terms of the archetypes they helped establish for the dramatic scientist: as arrogant Promethean man and as wily charlatan. There are echoes of both in the doctors of Ibsen’s plays, for example.

More significant for the modern trend is Bertolt Brecht’s Life of Galileo (1938/45), a far more nuanced look at the moral dilemmas that scientists face. Like Copenhagen, Galileo has drawn criticism from some scientists and science historians over the issue of historical accuracy. Some of these criticisms simply betray an infantile need to sustain Galileo as the heroic champion of rationalism in the face of church dogma. That is bad history too, but then, scientists are notorious (or should be) for their lack of real interest in history, as opposed to anecdote. Here Shepherd-Barr is admirably clear and patient, explaining that Copenhagen “takes history simply as material for creating theatre that does what art in general does: poses questions.”

Yet this is something scientists and historians seem to feel uncomfortable about. Writing about Copenhagen, historian Robert Marc Friedman has said “regardless of the playwright's intentions and even extreme care in creating his characters, audiences may leave the theatre with a wide range of impressions. In the case of the London production of Copenhagen on the evening that I attended, members of the audience with whom I spoke came away believing Bohr to be no better morally than Heisenberg; perhaps even less sympathetic. I am not sure, however, that this was the playwright's intention… I felt uncomfortable.” There is something chillingly Stalinist about this view of theatre and art. Should we also worry whether we have correctly divined the playwright’s “intentions” in Hamlet or King Lear?

Shepherd-Barr negotiates admirably around these lacunae between the worlds of science and art. Perhaps her key insight is that the most successful science plays are those that don’t just talk about their themes but embody them, as when the action of Arcadia reveals the thermodynamic unidirectionality of time. But most importantly, she reminds us that theatre is primarily not about words or ideas, but performance. That’s why theatre is so much stronger and more exciting a vehicle for dealing with scientific themes than film (which almost always does it miserably) or even literature. Good theatre, whatever its topic, doesn’t just engage but involves its audience: it is an experiment in which the presence of the observer is critical. Brecht pointed that out; but it is perhaps in theatre’s experimental forms, such as those pioneered by Jacques Lecoq and Peter Brook (who staged Oliver Sacks’s The Man Who Mistook His Wife for a Hat in 1991) and exemplified in John Barrow and Luca Ronconi’s Infinities and Theatre de Complicite’s Mnemonic, that we see how much richer it can be than the remote, ponderous literalness of film. What could be more scientific-spirited than this experimental approach? When science has given us such extraordinary new perspectives on the world, surely theatre should be able to do more than simply show us people talking about it.

Don’t censor the state climatologists

Aware that I will no doubt be dismissed as the yes-man of the ‘climate-change consensus’ for my critique of climate sceptics in Prospect (see below), I want to say that I am dismayed at the news that two US state climatologists are being given some heat for disagreeing with the idea that global warming is predominantly anthropogenic. First, it seems that state climatologists have many concerns, of which global climate change is just one (and a relatively minor one at that). But more importantly, it is absurd to expect any scientist to determine their position by fiat so that it is aligned with state policy or any other political position.

The matter is quite simple: if the feeling is that a scientist’s position on an issue undermines their credentials as a scientist, they should not be given this kind of status in the first place. If it is true that, as Mike Hopkins says in his Nature story (and Mike gets things right), “Oregon governor Ted Kulongoski said that he wants to strip Oregon's climatologist George Taylor of his title for not agreeing that global warming is predominantly caused by humans”, then Kulongoski is wrong. The only reason Taylor ought to be stripped of his title would be if he had been shown to be a demonstrably bad climatologist. The same goes for Pat Michaels at Virginia. As it happens, my impression of Michaels is that he is no longer able to be very objective on the issue of climate change – in other words, he doesn’t seem to be very trustworthy as a scientist on that score. But I’m prepared to believe that he says what he does in good faith, and of course he should be allowed to argue his case.

Trying to force these two guys to fall in line with the state position is simply going to fan the conspiracy theorists’ flames (I’m awaiting Benny Peiser’s inevitable take on this). But even if these paranoid sceptics did not exist, the demands would be wrong.

The more voices, the better the result in Wiki world

[Here's the pre-edited version of my latest article for news@nature…]

The secret to the quality of Wikipedia entries is lots of edits by lots of people

Why is Wikipedia so good? While the debate about just how good it is has been heated, the free online encyclopaedia offers a better standard of information than we might have any right to expect from a resource that absolutely anyone can write and edit.

Three groups of researchers claim now to have untangled the process by which many Wikipedia entries achieve an impressive accuracy [1-3]. They say that the best articles are those that are highly edited by many different contributors.

Listening to lots of voices rather than a few doesn't always guarantee the success that Wikipedia enjoys – just think of all those rotten movies written by committee. Collaborative product design in commerce and industry also often generates indifferent results. So why does Wiki work where others have failed?

Wikipedia was created by Jimmy Wales in January 2001, since when it has grown exponentially, both in the number of users and in information content. In 2005, a study of its content by Nature [4] concluded that the entries were of a comparable standard to those generated by experts for the Encyclopaedia Britannica (a claim that the EB quickly challenged).

The idea behind Wikipedia is encapsulated in writer James Surowiecki's influential book The Wisdom of Crowds [5]: the aggregate knowledge of a wide enough group of people will always be superior to that of any single expert. In this sense, Wikipedia challenges the traditional notion that an elite of experts knows best. This democratic, open-access philosophy has been widely imitated, particularly in online resources.

At face value, it might seem obvious that the wider the community you consult, the better your information will be – that simply increases your chances of finding a real expert on Mozart or mud wrestling. But how do you know that the real experts will be motivated to contribute, and that their voices will not be drowned out or edited over by other less-informed ones?

The crucial question, say Dennis Wilkinson and Bernardo Huberman of Hewlett Packard's research laboratories in Palo Alto, California, is: how do the really good articles get to be that way? The idea behind Wikipedia is that entries are iterated to near-perfection by a succession of edits. But do edits by a (largely) unregulated crowd really make an entry better?

Right now there are around 6.4 million articles on Wikipedia, generated by over 250 million edits from 5.77 million contributors. Wilkinson and Huberman have studied the editing statistics, and say that they don't simply follow the statistical pattern expected from a random process in which each edit is made independently of the others [1].

Instead, there are an abnormally high number of very highly edited entries. The researchers say this is just what is expected if the number of new edits to an article is proportional to the number of previous edits. In other words, edits attract more edits. The disproportionately highly edited articles, the researchers say, are those that deal with very topical issues.
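
The mechanism is easy to simulate. Here is a toy sketch of the rich-get-richer process ('edits attract edits'); it is my illustration of the idea, not the model the researchers actually fitted:

```python
import random

# Each new edit lands on an article with probability proportional to the
# article's current edit count - a classic preferential-attachment urn.
random.seed(0)
n_articles, n_edits = 1000, 100_000
counts = [1] * n_articles            # start every article with one edit
urn = list(range(n_articles))        # one 'ticket' per existing edit

for _ in range(n_edits):
    art = random.choice(urn)         # proportional-to-count sampling
    counts[art] += 1
    urn.append(art)

counts.sort(reverse=True)
print("most edited:", counts[:5])    # a few articles hoard the attention
print("least edited:", counts[-5:])  # a long tail of barely touched entries
# Independent random editing would cluster every count near 100; the urn
# model instead produces a fat tail of very highly edited articles.
```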

And does this increased attention make them better? Yes, it does. Although the quality of an entry is not easy to assess automatically, Wilkinson and Huberman assume that those articles selected as the 'best' by the Wikipedia user community are indeed in some sense superior. These, they say, are more highly edited, and by a greater number of users, than less visible entries.

Who is making these edits, though? Some have claimed that Wikipedia articles don't truly draw on the collective wisdom of its users, but are put together mostly by a small, select elite, including the system's administrators. Wales himself has admitted that he spends "a lot of time listening to four or five hundred" top users.

Aniket Kittur of the University of California at Los Angeles and coworkers have set out to discover who really does the editing [2]. They have looked at 4.7 million pages from the English-language Wikipedia, subjected to a total of about 58 million revisions, to see who was making the changes, and how.

The results were striking. In effect, the Wiki community has mutated since 2001 from an oligarchy to a democracy. The percentage of edits made by the Wikipedia 'elite' of administrators increased steadily up to 2004, when it reached around 50 per cent. But since then it has steadily declined, and is now just 10 per cent (and falling).

Even though the edits made by this elite are generally more substantial than those made by the 'masses', their overall influence has clearly waned. Wikipedia is now dominated by users who are much more numerous than the elite but individually less active. Kittur and colleagues compare this to the rise of a powerful bourgeoisie within an oligarchic society.

This diversification of contributors is beneficial, Ofer Arazy and coworkers at the University of Alberta in Canada have found [3]. They say that, of the 42 Wikipedia entries assessed in the 2005 Nature study, the number of errors decreased as the number of different editors increased.

The main lesson for tapping effectively into the 'wisdom of the crowd', then, is that the crowd should be diverse: represented by many different views and interests. In fact, in 2004 Lu Hong and Scott Page of the University of Michigan showed that a problem-solving team selected at random from a diverse collection of individuals will usually perform better than a team made up of those who individually perform best – because the latter tend to be too similar, and so draw on too narrow a range of options [6]. For crowds, wisdom depends on variety.
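
Hong and Page's result can be reproduced in miniature. The sketch below follows the spirit of their ring-landscape model (the parameters and team sizes are my own choices, not theirs): agents are greedy hill-climbers distinguished by their step-size heuristics, and teams work in relay.

```python
import random
from itertools import permutations

random.seed(7)
N, L, K = 2000, 12, 3            # ring size; step sizes 1..L; K steps per agent
values = [random.random() for _ in range(N)]

def climb(start, heuristic):
    # Greedy search on the ring: keep taking the first improving step.
    pos, improved = start, True
    while improved:
        improved = False
        for step in heuristic:
            nxt = (pos + step) % N
            if values[nxt] > values[pos]:
                pos, improved = nxt, True
    return pos

starts = range(0, N, 20)         # common sample of starting points

def ability(h):                  # an agent's average final value, working alone
    return sum(values[climb(s, h)] for s in starts) / len(starts)

def team_score(team):            # agents take turns until no one can improve
    total = 0.0
    for s in starts:
        pos, improved = s, True
        while improved:
            improved = False
            for h in team:
                new = climb(pos, h)
                if values[new] > values[pos]:
                    pos, improved = new, True
        total += values[pos]
    return total / len(starts)

agents = list(permutations(range(1, L + 1), K))   # all 1320 possible heuristics
ranked = sorted(agents, key=ability, reverse=True)

print(f"best-10 team:   {team_score(ranked[:10]):.4f}")
print(f"random-10 team: {team_score(random.sample(agents, 10)):.4f}")
# The random team usually matches or beats the 'experts': the top individual
# performers share similar step sizes and get stuck on the same local peaks.
```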

References
1. Wilkinson, D. M. & Huberman, B. A. preprint http://xxx.arxiv.org/abs/cs.DL/0702140 (2007).
2. Kittur, A. et al. preprint (2007).
3. Arazy, O. et al. Paper presented at 16th Workshop on Information Technologies and Systems, Milwaukee, 9-10 December 2006.
4. Giles, J. Nature 438, 900-901 (2005).
5. Surowiecki, J. The Wisdom of Crowds (Random House, 2004).
6. Hong, L. & Page, S. E. Proc. Natl Acad. Sci. USA 101, 16385-16389 (2004).

Friday, February 23, 2007


The secret of Islamic patterns
This is the pre-edited version of my latest piece for news@nature. The online version acquired some small errors that may or may not be put right. But what a great paper!

Muslim artists may have used a sophisticated tiling scheme to design their geometric decorations

The complex geometrical designs used for decoration by Islamic artists in the Middle Ages, as seen in buildings such as the Alhambra palace in southern Spain, were planned using a sophisticated tiling system that enabled them to make patterns not known in the West until 20 years ago, two physicists have claimed.

By studying many Islamic designs, Peter Lu of Harvard University in Cambridge, Massachusetts, and Paul Steinhardt of Princeton University in New Jersey have concluded that they were put together not using a compass and ruler, as previously assumed, but by tessellating a small number of different tiles with complex shapes.

The researchers think that this technique was developed around the start of the thirteenth century, and that by the fifteenth century it had become advanced enough to generate complex patterns now known as quasiperiodic. These were 'discovered' in the 1970s by the British mathematical physicist Roger Penrose, and were later found to account for puzzling materials called quasicrystals. Discovered in 1984 in metal alloys, quasicrystals initially foxed scientists because they seemed to break the geometric rules that govern regular (crystalline) packing of atoms.

The findings provide a further illustration of how advanced Islamic mathematics was in comparison with the medieval West. From around the eleventh century, much of the understanding of science and maths in the Christian West came from Islamic sources. Arabic and Persian scholars preserved the learning of the ancient Greeks, such as Aristotle, Ptolemy and Euclid, in translations and commentaries.

The Muslim writers also made original contributions to these fields. Western scholars learnt Arabic and travelled to the East to make Latin translations of the Islamic books. Among the mathematical innovations of the Islamic world were algebra and algorithms (both words derived from Arabic) and the numerals now known as 'Arabic' (although these were derived in turn from Indian notation).

The mathematical complexity of Islamic decoration has long been admired. The artists used such motifs because representational art was discouraged by the Koran. “The buildings decorated this way were among the most monumental structures in the society, combining both political and religious functions”, says Lu. “There was a great interest, then, in using these structures to broadcast the power and sophistication of the controlling elite, and therefore to make the ornament and decoration equally monumental.”

Lu and Steinhardt now propose that these designs were created in a previously unsuspected way. They say that the patterns known as girih, consisting of geometric polygon and star shapes interlaced with zigzagging lines, were produced from just a handful of tile shapes, ranging from pentagons and decagons (regular ten-sided polygons) to bow-ties, which can be pieced together in many different ways. The two physicists show how these tiles could themselves be drawn using compass constructions known to medieval Islamic mathematicians.

Some scrolls written by Islamic artists to explain their design methods show tiles with these shapes explicitly, confirming that they were used as 'conceptual building blocks' in making the design. Lu says that they’ve found no evidence that the tiles were actually made as physical objects. “But we speculate they were”, he adds, “so as to be used as templates in laying out the actual tiling on the side of a building.”

Lu and Steinhardt say that designing this way was simpler and faster than starting with the zigzag lines themselves: packing them together in different regular arrays automatically generates the complex patterns. “Once you have the tiles, you can make complicated patterns, even quasicrystalline ones, by following a few simple rules”, says Lu.

The researchers have shown that many patterns on Islamic buildings can be built up from the girih tiles. The resulting patterns are usually periodic – they repeat again and again, and so can be perfectly superimposed on themselves when shifted by a particular distance – but this regularity can be hard to spot, compared, say, with that of a hexagonal honeycomb pattern.

The patterns also contain many shapes, such as polygons with 5, 10 and 12 sides, that cannot themselves be packed together periodically without leaving gaps. Because of this, scientists long believed it was impossible for crystals to show five-, ten- or twelvefold symmetries – symmetries in which rotating the structure by a fifth, a tenth or a twelfth of a full circle superimposes it on itself.
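The formal statement is the so-called crystallographic restriction: a rotation that maps a periodic lattice onto itself must, written as a matrix in lattice coordinates, have an integer trace, so a rotation by 2π/n is allowed only when 2cos(2π/n) is an integer. A few lines of Python (my illustration, not from the paper) make the point:

```python
import math

# Crystallographic restriction: rotation by 2*pi/n can preserve some
# periodic lattice only if 2*cos(2*pi/n) is an integer (the trace of
# the rotation matrix must be an integer in lattice coordinates).
for n in (2, 3, 4, 5, 6, 8, 10, 12):
    trace = 2 * math.cos(2 * math.pi / n)
    verdict = "allowed" if abs(trace - round(trace)) < 1e-9 else "forbidden"
    print(f"{n:2d}-fold rotation: 2cos(2pi/{n}) = {trace:+.3f} -> {verdict}")
```

Only two-, three-, four- and sixfold rotations survive the test, which is why the five-, ten- and twelvefold motifs of the girih patterns cannot come from any conventional crystal-like repeat.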

So when 'crystals' that appeared to have these symmetries were discovered in 1984, they seemed to violate the basic rules of geometry. But it became clear that these quasicrystals aren't perfectly periodic. In the same year, Steinhardt pointed out how patterns with the same geometric properties as quasicrystals could be constructed from the tiling scheme devised by Penrose.
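For the curious, here is a minimal sketch of the standard 'subdivision' construction of a Penrose rhombus tiling – a well-known recipe, not Lu and Steinhardt's girih method. Two triangle types, each half of one of the two Penrose rhombi, are repeatedly split into scaled-down copies of both, and the pattern that emerges never exactly repeats:

```python
import cmath, math

PHI = (1 + 5 ** 0.5) / 2   # the golden ratio

def subdivide(triangles):
    """One inflation step: each half-rhombus triangle splits into smaller ones."""
    out = []
    for kind, a, b, c in triangles:     # vertices stored as complex numbers
        if kind == 0:                   # half of a 'thin' rhombus
            p = a + (b - a) / PHI
            out += [(0, c, p, b), (1, p, c, a)]
        else:                           # half of a 'thick' rhombus
            q = b + (a - b) / PHI
            r = b + (c - b) / PHI
            out += [(1, r, c, a), (1, q, r, b), (0, r, q, a)]
    return out

# Seed: a wheel of ten thin half-rhombi around the origin.
triangles = []
for i in range(10):
    b = cmath.rect(1, (2 * i - 1) * math.pi / 10)
    c = cmath.rect(1, (2 * i + 1) * math.pi / 10)
    if i % 2 == 0:
        b, c = c, b                     # alternate mirror images so halves pair up
    triangles.append((0, 0j, b, c))

for _ in range(6):                      # six subdivision rounds
    triangles = subdivide(triangles)
print(len(triangles), "triangles")      # plot the vertices to see the tiling
```

The girih tiles achieve something comparable with pentagons, decagons and bow-ties rather than rhombi.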

Steinhardt and Lu say that, while there is no sign that the Islamic artists knew of the Penrose tiling, their girih tiling method provides an alternative way to make the same quasicrystalline patterns. The researchers say that a design on the Darb-i-Imam shrine in Isfahan, Iran, made in 1453, is virtually equivalent to a Penrose tiling. One of the mesmerizing features of this pattern is that, like a true quasicrystal, it looks regular but never repeats exactly.

“I’d conjecture that this was quite deliberate”, says Lu. “They wanted to extend the pattern without it repeating. While they were not likely aware of the mathematical properties and consequences of the construction rule they devised, they did end up with something that would lead to what we understand today to be a quasicrystal.”

Reference
Lu, P. J. & Steinhardt, P. J. Science 315, 1106-1110 (2007).

Postscript
I have received some comments from Roger Penrose on this work, sadly too late for inclusion in the Nature piece but which provide some valuable perspective on the discovery. This is what he says:
"The patterns are fascinating, and very beautiful, and it is remarkable how much these ancient architects were able to anticipate concerning 5-fold quasi-symmetric organization. But, as Steinhardt (and, in effect, Lu) have confirmed directly with me, the Islamic patterns are not the same as my patterns (on several counts: different basic shapes, no matching rules, no evidence that they used anything like a "Penrose pattern" to guide them, the hierarchical structure indicated by their subdivision of large shapes into smaller ones is not strictly followed, and would not, in any case, enable the patterns to map precisely to a "Penrose tiling"). I do, however, regard this work of Steinhardt and Lu as a most intriguing and significant discovery, and one wonders what more the ancient Islamic designers may have known about such things. I should perhaps add that the great Astronomer Johannes Kepler, in his Harmonice Mundi (vol.2), published in 1619, had independently produced a regular pentagon tiling that is much closer to my own tilings than anything that I have seen so far in this admittedly wonderful Islamic work."

Peter Lu, incidentally, has indicated that he agrees with everything that Penrose says here. The relationship between the Darb-i-Imam pattern and a Penrose tiling is subtle - much more so, it seems, than media reports of this work have tended to imply.

Tuesday, February 13, 2007

When research goes PEAR-shaped

I’ve got a muse@nature.com column up today about the closure of the lab at Princeton that was investigating paranormal phenomena. Inevitably these things have to be chopped and changed before they appear, but here’s the pre-edited version. I feel scientists have no need to get too heavy about this kind of thing – if nothing else, it could serve as an interesting discussion point for students learning about how science is, and should be, done. To judge from the descriptions I’ve read of the PEAR lab and its ethos, we could probably do with a bit more of that in the scientific community. But why, oh why, do these people feel the need to come up with a ‘theory’ that is just a tangle of words? It is, in the time-honoured phrase, not even wrong. Sometimes you can’t help feeling that quantum theory has a lot to answer for.

There should be room for a bit of fringe science – but it's liable to suck you in.

It can't do a great deal for your self-esteem when media interest in your research project seems to catch fire only in response to the project's demise. But Robert Jahn and Brenda Dunne of the Princeton Engineering Anomalies Research (PEAR) laboratory probably aren't too bothered by that. For the attention generated by the closure of the PEAR lab – or rather, by the suggestion in the New York Times that this removes a source of ongoing embarrassment to the university – can surely only enhance the profile of Jahn and Dunne's longer-term vision of exploring "consciousness-related anomalies".

What "anomalies", exactly? With meticulous care, Jahn and Dunne avoid describing the phenomena they've studied using the more familiar words: telekinesis and telepathy. They have been studying people's ability to control machines and to transmit images from remote locations using only the power of the human mind. According to your perspective, that choice of language is a way of either promoting the paranormal by stealth or avoiding knee-jerk criticism.

The affair has inevitably ignited debates about the limits of academic freedom and responsibility. The NY Times quotes physicist Robert Park, a noted debunker of pseudo-science, as saying "It’s been an embarrassment to science, and I think an embarrassment for Princeton", while physicist Will Happer at Princeton says "I don’t believe in anything [Jahn] is doing, but I support his right to do it."

The university itself is trying to keep out of the fray. While stressing that the work done at PEAR was, like most other research at the university, privately funded, Princeton spokeswoman Cass Cliatt says that the lab's closure "was not a university decision". She adds that "the work at the lab was always understood by the university to be a personal interest of Professor Jahn's." Jahn, now an emeritus professor, was formerly dean of the engineering school and is an expert on electric propulsion.

Jahn and Dunne, a developmental psychologist, confirm that the decision was theirs. "We have accomplished what we originally set out to do 28 years ago, namely to determine whether these effects are real and to identify their major correlates", they say. With Jahn about to retire, "it is time for the next generation of scholars to take over." They hope that their work will be continued through the International Consciousness Research Laboratories, a network established in 1996 and now boasting members from 20 countries.

Some will surely share Park's view that this sort of thing gives science a bad name. But they'd be wrong to let the matter rest there, because PEAR's research reveals some interesting things about the practice and sociology of science.

The PEAR project offers a glimpse of what scientists can expect if they decide to dabble in what is conventionally termed the paranormal. Reasonable scientists cannot rule out the possibility of telekinesis, telepathy and other such 'anomalies' of the mind, simply because there are still such huge gaps in our understanding of consciousness and the brain. But most will say, again reasonably enough, that because all previous attempts to study these putative phenomena have failed to establish anything like a consistent, reproducible and unequivocal body of data, the chances of doing any serious science on the subject are minimal. As John Webster said of witchcraft in the seventeenth century, "There is no greater folly than to be very inquisitive and laborious to find out the causes of such a phenomenon as never had any existence."

In short, they regard effects like these as examples of what American chemist Irving Langmuir famously called pathological science. Experience teaches us that these things, from N-rays to cold fusion and homeopathy, are will-o'-the-wisps: too elusive for fruitful research, and probably imaginary if not downright fraudulent.

At least, this is the standard positivist position. But perhaps a stronger reason why scientists usually steer clear of such things is that it would be professional suicide not to. In a paper called 'The PEAR Proposition' [1], published in the Journal of Scientific Exploration (a journal produced by the Society for Scientific Exploration, of which Jahn and Dunne are both officers), the PEAR duo describe the hostility they experienced at Princeton when the lab was set up. They found "covert ridicule,… grudging concession of academic freedom, and… uneasiness in public discussion of the subject." Most scientists find this sort of work not outrageous but simply embarrassing.

Predictably, Jahn and Dunne found it virtually impossible to publish their findings. Their papers, many of which reported the effects of subjects' mental and emotional states on a computerized random number generator, were returned with the comment that they treated an "inappropriate topic". One journal editor said that he would consider the text only when the authors were able to transmit it telepathically.
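To see why such work lives or dies on statistics (what follows is my illustration, not PEAR's actual protocol or data): the question in a random-number-generator experiment is simply whether the hit rate strays further from chance than sampling noise allows, which is a one-line calculation.

```python
import math

def z_score(hits, trials, p=0.5):
    """Normal-approximation z-score for deviation from chance level p."""
    mean = trials * p
    sd = math.sqrt(trials * p * (1 - p))
    return (hits - mean) / sd

# Invented numbers: 50,520 'hits' in 100,000 binary trials.
print(round(z_score(50_520, 100_000), 2))   # ~3.29 sigma
```

A per-trial effect that small needs enormous samples to surface at all – exactly the territory where selection effects, optional stopping and 'good runs' can masquerade as signal.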

It is no wonder, then, that those from the academic community who swim in these murky waters are older and already established in their mainstream disciplines. The 'leaders emeritus' of the Society for Scientific Exploration are Peter Sturrock and Laurence Fredrick, emeritus professors at Stanford and Virginia respectively, both with secure reputations in space physics. Not only have such people earned themselves a bit of academic slack (as well as the ability to attract funding) but they cannot simply be cold-shouldered in the way that younger researchers would be. For the same reason, Nobel laureate physicist Brian Josephson has been permitted for years to pursue research on 'mind-matter unification' at Cambridge University amid what one senses to be a mixture of unease and resignation from his colleagues.

'The PEAR Proposition' contains many poignant notes. It shows how awkwardly the habits of academia sit with discussion of the everyday world of human interactions – an unavoidable issue in this line of work. The authors' talk of the "superficial jocularities" of their lab celebrations and the "spontaneous repartee therein" evokes a deeply uncool avuncularity, while Jahn and Dunne hardly do justice to their evidently relaxed working relationship by saying that it "constituted a virtual complementarity of strategic judgment that has triangulated our operational implementation in a particularly productive fashion." It's hard to doubt that the PEAR lab, with its artwork on the walls, its parties and its stuffed animals, was a lot more fun than most research labs. That the attempts to capture this atmosphere in the language of academese are so stilted says a lot about how routinely successful this language is in stripping the research literature of its humanity.

But in the end, this fascinating document undermines itself. When Jahn and Dunne talk about "the tendency of the desired effects to hide within the underlying random data-substructures", and the way their volunteers would often produce "better scores" in their first series of tests, they echo the way that other researchers of pathological science, such as cold fusion and the 'memory of water', betrayed their lack of objectivity with talk of "good runs" and "bad runs".

And perhaps that is the real worry in looking for marginal and unreliable phenomena. Jahn and Dunne are commendably honest about the "bemusing" and "capricious" nature of their measurements, but that only adds to the impression that they came to see themselves as engaged in a battle of wits with nature, who did her darnedest to hide the truth of the matter.

It would be a poorer world that castigates and shuns any researcher who dabbles in unorthodox or even positively weird ideas. But the PEAR experience should be sobering reading for anyone thinking of doing that: it suggests that these things suck you in. You start off with random number generators and unimpeachable experimental technique, and before long you are talking about "an ongoing two-way exchange between a primordial Source and an organizing Consciousness." You have been warned.

Reference
1. Jahn, R. G. & Dunne, B. J. J. Sci. Explor. 19, 195-245 (2005).

Friday, February 09, 2007

Sceptical of the sceptics

Here’s the pre-edited version of my March Lab Report column for Prospect. In the course of writing it, I found it necessary to look at some of what has been written and said by the well-known climate-change sceptics, such as those named in the article. This has been interesting. No, let me rephrase that. By a monumental effort of will, I have suppressed the fury, frustration, stupefaction and despair that their comments are apt to induce, and found a precarious way to treat them as ‘interesting’. What I mean by that is that these remarks, coming from people who are undoubtedly smart, are so ill-informed, illogical, prejudiced and emotional that it makes little sense to approach them without trying to get some perspective on what the real issues are. The comments here by Melanie Phillips are a case in point – they are so dripping with furious contempt and scorn that there can be little doubt this touches on something rather personal to her. I suspect that in many of these cases, the issue is that warnings of climate change threaten to compromise a libertarian approach to life, because they imply that there are some freedoms we enjoy now that might have to be curtailed in the future. But I’m guessing, and frankly I don’t find it a very appealing prospect to try to analyse these people.

It would be a quixotic task to try to point out all the errors in the climate-sceptic rants – that would take too long, it would achieve little, and it would be rather boring. What is most striking, however, is that very often these errors are so elementary that they show that these people actually have no interest in trying to understand climate science, or science in general, but just want to find flaws and parade them. That is why the climate-sceptic position is rather repetitive, even obsessive: you just know that they are going to reel out the ‘hockey stick’ argument, even though, first, the criticism of Michael Mann’s work is still very contentious, and second, and most significantly, it is laughable nonsense to imply that the whole notion of global warming rests on Mann’s ‘hockey stick’. Indeed, the sceptics’ arguments always depend on the notion that we assess global warming, and the anthropogenic contribution to it, by looking at global mean surface temperatures. It must be well over ten years ago now that scientists were explaining that the tell-tale sign of human influence is to be found in the fingerprint of regional differences in the warming trend (and the fingerprint is indeed there).

All the same, I cannot resist pointing out just a few of the idiocies in some of the sceptics’ arguments. This from Phillips has nothing to do with climate change, but tells us at once that this is not someone with more than a cartoon knowledge of the history of science. In lambasting scientists who have found higher than expected methane emissions from plants, she says:
“No doubt Galileo had the same problem when all medieval parchments agreed that the sun went round the earth; or Christopher Columbus, when all navigational maps agreed that the earth was flat.”
Yes, and newspapers print this stuff.
“People say ‘the ice caps are melting’. Well, some are; but others are growing.”
Hmm… aside from the north and south polar ice caps, where are these ‘others’?
“People say ‘the seas are rising’. Well, some are, but others are falling; and where they are rising, the cause often lies in the movement of land rather than any effects of climate change.”
Plain wrong, as simple as that.
“The earth’s climate is influenced by a vastly complex series of factors which interact with each other in literally millions of ways. Computer models, which have created global warming theory, simply cannot deal with all these factors. If over-simplified material is fed into the computers, over-simplified conclusions come out at the other end.”
Melanie Phillips has decided that computer models do not do a good job of modelling the climate system? She is an expert on this? She discounts the endless model-verification checks that climate modellers run? On what grounds? Will the Daily Mail let her print any statement she likes? (Apparently it had no qualms in permitting her to say that most of the Earth’s atmosphere is water vapour.)

Nigel Lawson is an interesting case, not least because he used to control the UK’s purse strings, and so you’d like to hope this is a man with a clear head for facts. But if his reasoning on the economy was like his reasoning on climate change, that’s a truly scary thought. Here we have a marshalling of the ‘facts’ that is so selective and so distorted that you wonder just what passes for normal debate in Westminster. Oh, and the occasional lie, such as that the Royal Society tried “to prevent the funding of climate scientists who do not share its alarmist view”. (They did nothing of the sort; Bob Ward of the RS asked ExxonMobil when it intended to honour its promise to stop funding lobby groups who promote disinformation about climate change. There was no suggestion of stopping any funds to scientists.) Lawson’s comment that “the new priests are scientists (well rewarded with research grants for their pains) rather than clerics of the established religions” is about as close as I’ve seen a sceptic come to aping the stock phrases of cranks everywhere, but is also revealing in its implication that Lawson seems to find the idea of experts who know more than him offensive – a common affliction of the privileged and well educated non-scientist.

Alright, enough. I’ll start despairing again if I’m not careful. Here’s the column.
*****************************************************************

The latest report by the Intergovernmental Panel on Climate Change has come as near to blaming global warming on human activities as any scientists are likely to come, while adding that its extent and consequences may be worse than we thought. The IPCC has previously been so (properly) tentative that even climate-change sceptics will have a hard time casting them as scaremongers. So where does this leave the sceptics now?

Many politicians and scientists are hoping they will now shut up. But that’s to make the mistake of thinking this is an argument over scientific evidence.

Consider this, for instance: “As most of you have heard many times, the consensus of climate scientists believes in global warming. Historically, the claim of consensus has been the first refuge of scoundrels; it is a way to avoid debate by claiming that the matter is already settled. Whenever you hear the consensus of scientists agrees on something or other, reach for your wallet, because you're being had. Let's be clear: the work of science has nothing whatever to do with consensus. Consensus is the business of politics. In science, consensus is irrelevant.”

This is from Michael Crichton – for the author of Jurassic Park has been giving high-level speeches about the ‘myth’ of climate change, and has even been summoned as an ‘expert witness’ on the matter by the US Senate. We need only concede that the Earth orbits the Sun and that humans are a product of Darwinian evolution to see that Crichton is not so much indulging in sophistry as merely saying something stupid. But because he is a smart fellow, stupidity can't account for it.

That's really the issue in tackling climate-change sceptics. There is no mystery about the way that some critics of the IPCC’s conclusions are simply protecting vested interests – ExxonMobil’s funding of groups that peddle climate-change disinformation, or the US government's extensive interference in federally funded climate science needs no more complex explanation than that. But this isn't true 'scepticism' – it is merely denial motivated by self-interest.

The real sceptics – strange bedfellows such as David Bellamy, Nigel Lawson, Melanie Phillips, a handful of real scientists, and Crichton – are a different phenomenon. For them there is a personal agenda involved. It’s less obvious what that might be than in, say, the comparable case of the ‘sceptics’ who denied the link between HIV and AIDS in the early 1990s. But what is immediately evident to the trained ear is that the sceptics’ denials carry the classic hallmarks of the crank – a belief that one's own reasoning betters that of professionals (even though the errors are usually elementary), a victim mentality, an instant change of tack when convincingly refuted, and (always a giveaway) a historically naive invocation of Galileo’s persecution. Of course, some of them simply tell outright lies too.

Bjorn Lomborg is a slightly different matter, since his objections focus less on denying climate change and more on denying the need to do anything about it. Nonetheless, although the economic arguments are complex, Lomborg's rhetoric – for example, suggesting that because climate change is less pressing than, say, AIDS, we should ignore it – is simplistic to a degree that again does not equate with his evident intelligence.

Economics is indeed going to be the future battleground. Yes, the argument goes, so climate change is happening, but that doesn’t mean we have to do anything to prevent it. Far better to adapt to it. This line has been pushed by heavier hitters than Lomborg, such as the eminent economists William Nordhaus at Yale and Partha Dasgupta at Cambridge, who reject the economic analysis of Nicholas Stern. The argument has some force in purely economic terms – which is perhaps not the foremost consideration if you live in coastal Bangladesh or on the Marshall Islands – but it will take a lot of either faith or foolishness to let economics alone guide us into uncharted waters where we cannot rule out mass famine, decimation of biodiversity and unforeseen positive feedbacks that accelerate the warming. That’s not what economics is for.
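Much of that disagreement, incidentally, comes down to the discount rate applied to future damages. The numbers below are purely illustrative – my sketch, not anyone's published model – but they show how completely that one parameter controls the answer:

```python
# Present value of a $1-trillion climate damage incurred 100 years from
# now, under two illustrative discount rates (roughly the low, Stern-like
# and higher, Nordhaus-like ends of the debate; exact figures vary).
damage, years = 1e12, 100
for rate in (0.014, 0.055):
    present_value = damage / (1 + rate) ** years
    print(f"discount rate {rate:.1%}: worth ${present_value / 1e9:,.0f} bn today")
```

At the lower rate, a future catastrophe is worth spending hundreds of billions today to avert; at the higher rate, only a few billion. The science is identical in both cases.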

Yet economists are right to say that we need informed rather than knee-jerk responses, and that these will surely involve compromises rather than dreaming of arresting the current trends. By turning now to economics, however, the celebrity sceptics will only betray their agenda. It’s time to seek more reasoned voices of caution.

*****

How long before we witness the rise of the bird flu sceptic? (Matthew Parris has already staked his claim.) They could be right in one sense – according to Albert Osterhaus, chairman of the European Scientific Working Group on Influenza (ESWI), “Isolated outbreaks of avian influenza in Europe are a problem in terms of economy, animal welfare and biodiversity, but the threat to public health will probably be manageable.” But they’ll almost certainly be wrong in another. The H5N1 virus is all too often portrayed as a bolt from the blue, like a bit of really rotten luck. In truth it’s illustrative of a fact of life in the viral world, where, to put it bluntly, shit happens. Last November, leading US virologists Robert Webster and Elena Govorkova stated baldly that “there is no question that there will be another influenza pandemic some day.” The ESWI agrees, and warns that Europe is ill prepared for it. Even if H5N1 doesn’t get us (by mutating into a form readily transmitted between humans), another virus will. Flu viruses are legion, and unavoidable. Here, at least, is one threat for which mitigation, not prevention, is the only option. H5N1 seems less transmissible in warmer weather, but one hopes even climate sceptics won’t see that as a point in their favour.

Friday, February 02, 2007

Space wars

I have an editorial piece on news@nature on China’s recent missile destruction of a satellite. The commentary in the scientific press has had much to say about the possible hazards of the space debris this created, but less about the implications and significance of the act for space militarization. This is my take on that.

Published online: 24 January 2007; doi:10.1038/news070122-8

muse@nature.com: A dangerous game in space
Is China's satellite zapping simply old-fashioned sabre-rattling? Or is it a rational step to restrict the use of space weapons?

How do you reconcile China's shooting down of a satellite earlier this month with the subsequent insistence by its foreign ministry spokesman, Liu Jianchao, that China opposes military competition in space?

China has not yet explained its objectives. But the action makes perfect sense in the context of game theory, the conventional framework for analysing conflict and cooperation.

Put simply, if you want to spur nations to collaborate in curbing space militarization, good intentions are not enough. You need to show that you can get tough if the need arises.

A benign interpretation of China's action, then, is that it might accomplish what years of talking have not: force the United States to negotiate an international treaty on space weaponry. Does China have such a specific goal in mind? Or does it merely wish to leave its options open in dealing with rebellious Taiwan?

These are dangerous questions. But it is worth bearing in mind that the Chinese test is at least consistent with a completely rational approach to securing international enforcement of the peaceful use of space.

The classic scenario to explore cooperation between nations using game theory is the Prisoner's Dilemma. Here, two players are each given the choice of cooperating with each other or betraying the other person (defecting), with different rewards or penalties for each potential outcome. Mutual cooperation is more beneficial to both players than is mutual defection. But temptation gets in the way: the player who defects against a cooperator wins the biggest prize of all.

Although the rational strategy in a one-off bout of the Prisoner's Dilemma is to defect, it runs against self-interest in repeated rounds. Then, the most successful way to play is often a 'tit-for-tat' strategy, in which a player will initially cooperate, then respond in kind to the other player's previous choice.
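A tiny simulation (my sketch, not Axelrod's tournament code) shows both halves of that claim, using the standard payoff ordering in which the temptation to defect (5) beats mutual cooperation (3), which beats mutual defection (1), which beats being the exploited cooperator (0):

```python
# Minimal iterated Prisoner's Dilemma: tit-for-tat versus a defector.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's previous move.
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return 'D'

def play(strat_a, strat_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)   # each sees the other's past
        pay_a, pay_b = PAYOFF[(a, b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))     # (600, 600): sustained cooperation
print(play(tit_for_tat, always_defect))   # (199, 204): one betrayal, then stalemate
```

Against itself, tit-for-tat locks into mutual cooperation and collects the high steady payoff; against a relentless defector it loses only the first round and concedes nothing further.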

Robert Axelrod, the political scientist at the University of Michigan in Ann Arbor who pioneered the study of Prisoner's Dilemma strategies, points out that in the real world, players who follow the tit-for-tat strategy need to cultivate a reputation for toughness. Other players must know that provocation will be met with retaliation. In the case of China, the message could be that the militarization of space will not be prevented simply by condemning it, but rather by showing that you can and will play the game if necessary.

The real world is, of course, not a computer simulation, in which the agents are rational. Although game theory is studied in defence-policy circles, no one denies that it gives little more than a cartoon picture of international relations.

But in this case the model fits. China and Russia have been calling for years for a treaty to constrain space weapons. Not only have these calls been ignored by the United States, but last year the White House issued perhaps the most aggressive policy statement about space since the chilliest days of the Cold War. It stated baldly that the United States "will oppose the development of new legal regimes or other restrictions that seek to prohibit or limit US access to or use of space."

The document not only asserted the United States' right to pursue its "national interests" (including "foreign policy objectives") by preserving its "freedom of action" in space, but also threatened to deny adversaries the same freedom.

Is China an 'adversary'? Friendly overtures between NASA and the China National Space Administration might suggest otherwise, but NASA is not the Pentagon. The United States is not only still pursuing its national missile-defence programme but is also developing laser-based weapons that can knock out satellites from the ground or from aircraft. It is hardly surprising, then, that anyone who is serious about stopping such a relentless and defiant pursuit of space weaponry through international agreement will deploy the bullish lessons of game theory.

This is not to say that the Chinese test is defensible. It is understandable that its neighbours, such as Japan and Australia, should be dismayed by it, and that Taiwan should regard it as an act of aggression. And there is every chance that the United States will interpret it as the opening shot of an arms race rather than as a summons to the negotiating table.

China might think that keeping a strong hand relies on not making its intentions too explicit. All the same, there is a difference between developing space weapons at the same time as opposing the militarization of space, and developing weapons while refusing to ban them. Which would you prefer?