Wednesday, February 28, 2007

Roll on the robots

This is the pre-edited version of my Materials Witness column for the April issue of Nature Materials.

Spirit, the redoubtable Martian rover, has spent the past year driving on just five of its six wheels. In February the rover's handling team said it had perfected the art of manoeuvring with one wheel missing, but the malfunction raises the question of whether there are better ways for robots to get around. Walking robots are becoming more efficient thanks to a better understanding of the 'passive' dynamics of human locomotion; but a single tumble might put such a robot out of action permanently in remote or extraterrestrial environments.

So a recent survey of rolling robots by Rhodri Armour and Julian Vincent of the University of Bath (J. Bionic Eng. 3, 195-208; 2006) is timely. They point out that spherical robots have several advantages: they can never 'fall over'; their mechanics can be fully enclosed in a protective hard shell; and they can move in any direction and cope with collisions and with uneven or soft surfaces.

But how do you make a sphere roll from the inside? Several answers have been explored in designs for spherical robots. One developed at the Politecnico di Bari in Italy aims to use an ingenious internal driver, basically a sprung rod with wheels at each end. It’s a tricky design to master, and so far only a cylindrical prototype exists. Other designs include spheres with ‘cars’ inside (the treadwheel principle), pairs of hemispherical wheels, moving internal ballast masses – the Roball made at the Université de Sherbrooke in Québec, and the Rotundus of Uppsala University in Sweden – and gyroscopic rollers like Carnegie Mellon’s Gyrover.

But Armour and Vincent suggest that one of the best designs is that in which masses inside the sphere can be moved independently along radial arms to shift the centre of gravity in any direction. The Spherobot under development at Michigan State University and the August robot designed in Iran use this method, as does the wheel-shaped robot made at Ritsumeikan University in Kyoto, which is a deformable rubber hoop with 'smart' spokes that can crawl up a shallow incline and even jump into the air.

Although rolling robots clearly have a lot going for them, it might give us pause for thought that nature seems very rarely to employ rolling. There are a few organisms that make ‘intentional’ use of passive rolling, being able to adopt spherical shapes that are blown by the wind or carried along by gravity: tumbleweed is perhaps the most familiar example, but the Namib golden wheel spider cartwheels down sand dunes to escape wasps, and woodlice, when attacked, curl into balls and roll away. Active rollers are rarer still: Armour and Vincent can identify only the caterpillar of the Mother-of-Pearl moth and a species of shrimp, both of which perform somersaults.

Is this nature’s way of telling us that rolling has limited value for motion? That might be jumping to conclusions; after all, wheels are equally scarce in nature, but they serve engineering splendidly.

Tuesday, February 27, 2007

Science on Stage: two views

Carl Djerassi has struck back at Kirsten Shepherd-Barr’s rather stinging critique of his plays in a review of Kirsten’s book Science on Stage in Physics Today. I think his comments are a little unfair; Carl has his own agenda of using theatre to smuggle some science into culture, which is a defensible aim but doesn’t acknowledge that the first question must be: is this good theatre? Or as Kirsten asks, does it have ‘theatricality’? Here is my own take on her book, published in the July issue of Nature Physics last year.

Science on Stage: From Doctor Faustus to Copenhagen
Kirsten Shepherd-Barr
Princeton University Press, 2006
Cloth $29.95
ISBN 0-691-12150-8
264 pages

Over the past decade or so, science has been on stage as never before. Michael Frayn’s Copenhagen (1998), which dramatized the wartime meeting between Werner Heisenberg and Niels Bohr, is perhaps the most celebrated example; but Tom Stoppard had been exploring scientific themes for some time in Hapgood (1988) and Arcadia (1993), while Margaret Edson’s Wit (1998) and David Auburn’s Proof (2001) were both Pulitzer prize-winning Broadway hits, the latter now also a Hollywood movie. There are plenty of other examples.

While this ‘culturization’ of science has largely been welcomed by scientists – it certainly suggests that theatre has a more sophisticated relationship with science than that typified by the ‘mad scientist’ of cinematic tradition – there has been a curious lack of insightful discussion of the trend. Faced with ‘difficult’ scientific concepts, theatre critics tend to seek recourse in bland clichés about ‘mind-boggling ideas’. Scientists, meanwhile, all too often betray an artistic conservatism by revealing that their idea of theatre is an entertaining night out watching a bunch of actors behind a proscenium arch.

Thank goodness, then, for Kirsten Shepherd-Barr’s book. It represents the first sustained, serious attempt that I have seen to engage with the questions posed by science in theatre. In particular, while there has been plenty of vague talk about pedagogical opportunities, about Snow’s two cultures and about whether the ‘facts are right’, Shepherd-Barr explores what matters most about ‘science plays’: how they work (or not) as theatre.

Despite the book’s subtitle, it does not really try to offer a comprehensive historical account of science in theatre. All the same, one can hardly approach the topic without acknowledging several landmark plays of the past that have had a strong scientific content. It is arguably stretching the point to include Marlowe’s Dr Faustus (c.1594), despite its alchemical content, since this retelling of a popular folk legend is largely a morality tale which can be understood fully only in the context of its times. But while that is equally true of Ben Jonson’s The Alchemist (c.1610), both plays are important in terms of the archetypes they helped establish for the dramatic scientist: as arrogant Promethean man and as wily charlatan. There are echoes of both in the doctors of Ibsen’s plays, for example.

More significant for the modern trend is Bertolt Brecht’s Life of Galileo (1938/45), a far more nuanced look at the moral dilemmas that scientists face. Like Copenhagen, Galileo has drawn criticism from some scientists and science historians over the issue of historical accuracy. Some of these criticisms simply betray an infantile need to sustain Galileo as the heroic champion of rationalism in the face of church dogma. That is bad history too, but then, scientists are notorious (or should be) for their lack of real interest in history, as opposed to anecdote. Here Shepherd-Barr is admirably clear and patient, explaining that Copenhagen “takes history simply as material for creating theatre that does what art in general does: poses questions.”

Yet this is something scientists and historians seem to feel uncomfortable about. Writing about Copenhagen, historian Robert Marc Friedman has said “regardless of the playwright's intentions and even extreme care in creating his characters, audiences may leave the theatre with a wide range of impressions. In the case of the London production of Copenhagen on the evening that I attended, members of the audience with whom I spoke came away believing Bohr to be no better morally than Heisenberg; perhaps even less sympathetic. I am not sure, however, that this was the playwright's intention… I felt uncomfortable.” There is something chillingly Stalinist about this view of theatre and art. Should we also worry whether we have correctly divined the playwright’s “intentions” in Hamlet or King Lear?

Shepherd-Barr negotiates admirably around these lacunae between the worlds of science and art. Perhaps her key insight is that the most successful science plays are those that don’t just talk about their themes but embody them, as when the action of Arcadia reveals the thermodynamic unidirectionality of time. But most importantly, she reminds us that theatre is primarily not about words or ideas, but performance. That’s why theatre is so much stronger and more exciting a vehicle for dealing with scientific themes than film (which almost always does it miserably) or even literature. Good theatre, whatever its topic, doesn’t just engage but involves its audience: it is an experiment in which the presence of the observer is critical. Brecht pointed that out; but it is perhaps in theatre’s experimental forms, such as those pioneered by Jacques Lecoq and Peter Brook (who staged Oliver Sacks’s The Man Who Mistook His Wife for a Hat in 1991) and exemplified in John Barrow and Luca Ronconi’s Infinities and Theatre de Complicite’s Mnemonic, that we see how much richer it can be than the remote, ponderous literalness of film. What could be more scientific-spirited than this experimental approach? When science has given us such extraordinary new perspectives on the world, surely theatre should be able to do more than simply show us people talking about it.

Don’t censor the state climatologists

Aware that I will no doubt be dismissed as the yes-man of the ‘climate-change consensus’ for my critique of climate sceptics in Prospect (see below), I want to say that I am dismayed at the news that two US state climatologists are being given some heat for disagreeing with the idea that global warming is predominantly anthropogenic. First, state climatologists have many concerns, of which global climate change is just one (and a relatively minor one at that). More importantly, it is absurd to expect any scientist to adopt a position by fiat so that it aligns with state policy or any other political stance. The matter is quite simple: if the feeling is that a scientist’s position on an issue undermines their credentials as a scientist, they should not be given this kind of status in the first place.

If it is true that, as Mike Hopkins says in his Nature story (and Mike gets things right), “Oregon governor Ted Kulongoski said that he wants to strip Oregon's climatologist George Taylor of his title for not agreeing that global warming is predominantly caused by humans”, then Kulongoski is wrong. The only reason Taylor ought to be stripped of his title would be that he has been shown to be a demonstrably bad climatologist. The same goes for Pat Michaels in Virginia. As it happens, my impression of Michaels is that he is no longer able to be very objective on the issue of climate change – in other words, he doesn’t seem very trustworthy as a scientist on that score. But I’m prepared to believe that he says what he does in good faith, and of course he should be allowed to argue his case.

Trying to force these two men to fall in line with the state position will simply fan the conspiracy theorists’ flames (I’m awaiting Benny Peiser’s inevitable take on this). But even if these paranoid sceptics did not exist, the demands would be wrong.

The more voices, the better the result in Wiki world

Here's the pre-edited version of my latest article for news@nature…

The secret to the quality of Wikipedia entries is lots of edits by lots of people

Why is Wikipedia so good? While the debate about just how good it is has been heated, the free online encyclopaedia offers a better standard of information than we might have any right to expect from a resource that absolutely anyone can write and edit.

Three groups of researchers claim now to have untangled the process by which many Wikipedia entries achieve an impressive accuracy [1-3]. They say that the best articles are those that are highly edited by many different contributors.

Listening to lots of voices rather than a few doesn't always guarantee the success that Wikipedia enjoys – just think of all those rotten movies written by committee. Collaborative product design in commerce and industry also often generates indifferent results. So why does Wiki work where others have failed?

Wikipedia was created by Jimmy Wales in January 2001, since when it has grown exponentially both in terms of the number of users and the information content. In 2005, a study of its content by Nature [4] concluded that the entries were of a comparable standing to those generated by experts for the Encyclopaedia Britannica (a claim that the EB quickly challenged).

The idea behind Wikipedia is encapsulated in writer James Surowiecki's influential book The Wisdom of Crowds [5]: the aggregate knowledge of a wide enough group of people will always be superior to that of any single expert. In this sense, Wikipedia challenges the traditional notion that an elite of experts knows best. This democratic, open-access philosophy has been widely imitated, particularly in online resources.

At face value, it might seem obvious that the wider the community you consult, the better your information will be – that simply increases your chances of finding a real expert on Mozart or mud wrestling. But how do you know that the real experts will be motivated to contribute, and that their voices will not be drowned out or edited over by other less-informed ones?

The crucial question, say Dennis Wilkinson and Bernardo Huberman of Hewlett Packard's research laboratories in Palo Alto, California, is: how do the really good articles get to be that way? The idea behind Wikipedia is that entries are iterated to near-perfection by a succession of edits. But do edits by a (largely) unregulated crowd really make an entry better?

Right now there are around 6.4 million articles on Wikipedia, generated by over 250 million edits from 5.77 million contributors. Wilkinson and Huberman have studied the editing statistics, and say that they don't simply follow the statistical pattern expected from a random process in which each edit is made independently of the others [1].

Instead, there is an abnormally high number of very highly edited entries. The researchers say this is just what is expected if the number of new edits to an article is proportional to the number of previous edits. In other words, edits attract more edits. The disproportionately highly edited articles, the researchers say, are those that deal with very topical issues.
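This 'edits attract more edits' mechanism is a form of preferential attachment, and its effect of fattening the tail of the edit distribution is easy to reproduce in a toy simulation (my own illustrative sketch, not Wilkinson and Huberman's actual model):

```python
import random

def simulate_edits(n_articles, n_edits, preferential, seed=0):
    """Distribute n_edits among n_articles and return per-article counts.

    If preferential is True, each new edit lands on an article with
    probability proportional to (1 + its current edit count);
    otherwise every article is equally likely.
    """
    rng = random.Random(seed)
    counts = [0] * n_articles
    for _ in range(n_edits):
        if preferential:
            # Rich-get-richer rule: past edits attract new ones.
            i = rng.choices(range(n_articles),
                            weights=[1 + c for c in counts])[0]
        else:
            i = rng.randrange(n_articles)
        counts[i] += 1
    return counts

pref = simulate_edits(500, 10000, preferential=True)
unif = simulate_edits(500, 10000, preferential=False)
# Preferential attachment concentrates activity: the most-edited article
# ends up with far more edits than anything the uniform process produces.
print(max(pref), max(unif))
```

With the uniform rule every article hovers near the average of 20 edits; under the rich-get-richer rule a few articles run away with a disproportionate share, which is the heavy tail seen in the Wikipedia data.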

And does this increased attention make them better? Yes, it does. Although the quality of an entry is not easy to assess automatically, Wilkinson and Huberman assume that those articles selected as the 'best' by the Wikipedia user community are indeed in some sense superior. These, they say, are more highly edited, and by a greater number of users, than less visible entries.

Who is making these edits, though? Some have claimed that Wikipedia articles don't truly draw on the collective wisdom of its users, but are put together mostly by a small, select elite, including the system's administrators. Wales himself has admitted that he spends "a lot of time listening to four or five hundred" top users.

Aniket Kittur of the University of California at Los Angeles and coworkers have set out to discover who really does the editing [2]. They have looked at 4.7 million pages from the English-language Wikipedia, subjected to a total of about 58 million revisions, to see who was making the changes, and how.

The results were striking. In effect, the Wiki community has mutated since 2001 from an oligarchy to a democracy. The percentage of edits made by the Wikipedia 'elite' of administrators increased steadily up to 2004, when it reached around 50 per cent. But since then it has steadily declined, and is now just 10 per cent (and falling).

Even though the edits made by this elite are generally more substantial than those made by the 'masses', their overall influence has clearly waned. Wikipedia is now dominated by users who are much more numerous than the elite but individually less active. Kittur and colleagues compare this to the rise of a powerful bourgeoisie within an oligarchic society.

This diversification of contributors is beneficial, Ofer Arazy and coworkers at the University of Alberta in Canada have found [3]. They say that, of the 42 Wikipedia entries assessed in the 2005 Nature study, the number of errors decreased as the number of different editors increased.

The main lesson for tapping effectively into the 'wisdom of the crowd', then, is that the crowd should be diverse, representing many different views and interests. In fact, in 2004 Lu Hong and Scott Page of the University of Michigan showed that a problem-solving team selected at random from a diverse collection of individuals will usually perform better than a team made up of those who individually perform best – because the latter tend to be too similar, and so draw on too narrow a range of options [6]. For crowds, wisdom depends on variety.
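Hong and Page's result can be illustrated with a stripped-down version of their model (a sketch with invented parameters, not their exact setup): each 'agent' is an ordered set of step sizes it uses to hill-climb on a ring of random values, and a team relays the search, handing over whenever one member gets stuck.

```python
import random

def climb(landscape, agents, start):
    """Relay hill-climbing: agents repeatedly try their steps (with
    wraparound), moving whenever a step improves the value, until no
    member of the team can improve any further."""
    n, pos = len(landscape), start
    improved = True
    while improved:
        improved = False
        for agent in agents:
            for step in agent:
                cand = (pos + step) % n
                if landscape[cand] > landscape[pos]:
                    pos, improved = cand, True
    return landscape[pos]

def ability(landscape, agent):
    """Average value an agent reaches alone, over every starting point."""
    n = len(landscape)
    return sum(climb(landscape, [agent], s) for s in range(n)) / n

def team_score(landscape, team):
    """Average value a team reaches together, over every starting point."""
    n = len(landscape)
    return sum(climb(landscape, team, s) for s in range(n)) / n

rng = random.Random(42)
landscape = [rng.random() for _ in range(200)]                  # the problem
pool = [tuple(rng.sample(range(1, 13), 3)) for _ in range(40)]  # heuristics

ranked = sorted(pool, key=lambda a: ability(landscape, a), reverse=True)
best_team = ranked[:10]             # the ten best individual performers
random_team = rng.sample(pool, 10)  # ten agents drawn at random

# Hong and Page's finding: the randomly drawn (more diverse) team tends to
# match or beat the team of individually best performers, whose similar
# heuristics get stuck at the same local optima.
print(team_score(landscape, best_team), team_score(landscape, random_team))
```

Because the top individual performers tend to share step sizes, pooling them adds little; a random team covers more of the heuristic space, which is precisely the 'diversity trumps ability' effect.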

1. Wilkinson, D. M. & Huberman, B. A. preprint (2007).
2. Kittur, A. et al. preprint (2007).
3. Arazy, O. et al. Paper presented at 16th Workshop on Information Technologies and Systems, Milwaukee, 9-10 December 2006.
4. Giles, J. Nature 438, 900-901 (2005).
5. Surowiecki, J. The Wisdom of Crowds (Random House, 2004).
6. Hong, L. & Page, S. E. Proc. Natl Acad. Sci. USA 101, 16385-16389 (2004).

Friday, February 23, 2007

The secret of Islamic patterns
This is the pre-edited version of my latest piece for news@nature. The online version acquired some small errors that may or may not be put right. But what a great paper!

Muslim artists may have used a sophisticated tiling scheme to design their geometric decorations

The complex geometrical designs used for decoration by Islamic artists in the Middle Ages, as seen in buildings such as the Alhambra palace in southern Spain, were planned using a sophisticated tiling system that enabled them to make patterns not known in the West until 20 years ago, two physicists have claimed.

By studying many Islamic designs, Peter Lu of Harvard University in Cambridge, Massachusetts, and Paul Steinhardt of Princeton University in New Jersey have concluded that they were put together not using a compass and ruler, as previously assumed, but by tessellating a small number of different tiles with complex shapes.

The researchers think that this technique was developed around the start of the thirteenth century, and that by the fifteenth century it had become advanced enough to generate complex patterns now known as quasiperiodic. These were 'discovered' in the 1970s by the British mathematical physicist Roger Penrose, and were later found to account for puzzling materials called quasicrystals. Discovered in 1984 in metal alloys, quasicrystals initially foxed scientists because they seemed to break the geometric rules that govern regular (crystalline) packing of atoms.

The findings provide a further illustration of how advanced Islamic mathematics was in comparison with the medieval West. From around the eleventh century, much of the understanding of science and maths in the Christian West came from Islamic sources. Arabic and Persian scholars preserved the learning of the ancient Greeks, such as Aristotle, Ptolemy and Euclid, in translations and commentaries.

The Muslim writers also made original contributions to these fields. Western scholars learnt Arabic and travelled to the East to make Latin translations of the Islamic books. Among the mathematical innovations of the Islamic world were the use of algebra, algorithms (both of which are words derived from Arabic) and the use of numerals now known as 'Arabic' (although derived in turn from Indian notation).

The mathematical complexity of Islamic decoration has long been admired. The artists used such motifs because representational art was discouraged by the Koran. “The buildings decorated this way were among the most monumental structures in the society, combining both political and religious functions”, says Lu. “There was a great interest, then, in using these structures to broadcast the power and sophistication of the controlling elite, and therefore to make the ornament and decoration equally monumental.”

Lu and Steinhardt now propose that these designs were created in a previously unsuspected way. They say that the patterns known as girih, consisting of geometric polygon and star shapes interlaced with zigzagging lines, were produced from just a handful of tile shapes, ranging from pentagons and decagons (regular ten-sided polygons) to bow-ties, which can be pieced together in many different ways. The two physicists show how these tiles could themselves be drawn using geometric compass constructions that were known to medieval Islamic mathematicians.

Some scrolls written by Islamic artists to explain their design methods show tiles with these shapes explicitly, confirming that they were used as 'conceptual building blocks' in making the design. Lu says that they’ve found no evidence that the tiles were actually made as physical objects. “But we speculate they were”, he adds, “so as to be used as templates in laying out the actual tiling on the side of a building.”

Lu and Steinhardt say that designing this way was simpler and faster than starting with the zigzag lines themselves: packing them together in different regular arrays automatically generates the complex patterns. “Once you have the tiles, you can make complicated patterns, even quasicrystalline ones, by following a few simple rules”, says Lu.

The researchers have shown that many patterns on Islamic buildings can be built up from the girih tiles. The resulting patterns are usually periodic – they repeat again and again, and so can be perfectly superimposed on themselves when shifted by a particular distance – but this regularity can be hard to spot, compared, say, with that of a hexagonal honeycomb pattern.

The patterns also contain many shapes, such as polygons with 5, 10 and 12 sides, that cannot themselves be packed together periodically without leaving gaps. Because of this, scientists long believed that it was impossible for crystals to show five-, ten- and twelvefold symmetries, such that rotating them by a fifth, tenth or twelfth of a full circle would allow them to be superimposed on themselves.
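The geometric rule in question is the crystallographic restriction: any rotation that maps a two-dimensional periodic lattice onto itself has an integer matrix trace, so 2cos(2π/n) must be an integer, which permits only 1-, 2-, 3-, 4- and 6-fold rotations. A few lines of code (a sketch of this standard textbook argument, nothing from the Lu and Steinhardt paper) confirm which orders survive:

```python
import math

def lattice_compatible(n, tol=1e-9):
    """A rotation by 2*pi/n can map a 2D periodic lattice onto itself
    only if its matrix trace, 2*cos(2*pi/n), is an integer: this is
    the crystallographic restriction."""
    trace = 2 * math.cos(2 * math.pi / n)
    return abs(trace - round(trace)) < tol

allowed = [n for n in range(1, 13) if lattice_compatible(n)]
print(allowed)  # [1, 2, 3, 4, 6] -- five-, ten- and twelvefold are excluded
```

For n = 5 the trace is 2cos72° ≈ 0.618, the golden ratio minus one, which is not an integer; hence no periodic crystal can have fivefold symmetry, and the same goes for ten- and twelvefold.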

So when 'crystals' that appeared to have these symmetries were discovered in 1984, they seemed to violate the basic rules of geometry. But it became clear that these quasicrystals aren't perfectly periodic. In the same year, Steinhardt pointed out how patterns with the same geometric properties as quasicrystals could be constructed from the tiling scheme devised by Penrose.

Steinhardt and Lu say that, while there is no sign that the Islamic artists knew of the Penrose tiling, their girih tiling method provides an alternative way to make the same quasicrystalline patterns. The researchers say that a design on the Darb-i-Imam shrine in Isfahan, Iran, made in 1453, is virtually equivalent to a Penrose tiling. One of the mesmerizing features of this pattern is that, like a true quasicrystal, it looks regular but never repeats exactly.

“I’d conjecture that this was quite deliberate”, says Lu. “They wanted to extend the pattern without it repeating. While they were not likely aware of the mathematical properties and consequences of the construction rule they devised, they did end up with something that would lead to what we understand today to be a quasicrystal.”

Lu, P. J. & Steinhardt, P. J. Science 315, 1106 - 1110 (2007).

I have received some comments from Roger Penrose on this work, sadly too late for inclusion in the Nature piece but which provide some valuable perspective on the discovery. This is what he says:
"The patterns are fascinating, and very beautiful, and it is remarkable how much these ancient architects were able to anticipate concerning 5-fold quasi-symmetric organization. But, as Steinhardt (and, in effect, Lu) have confirmed directly with me, the Islamic patterns are not the same as my patterns (on several counts: different basic shapes, no matching rules, no evidence that they used anything like a "Penrose pattern" to guide them, the hierarchical structure indicated by their subdivision of large shapes into smaller ones is not strictly followed, and would not, in any case, enable the patterns to map precisely to a "Penrose tiling"). I do, however, regard this work of Steinhardt and Lu as a most intriguing and significant discovery, and one wonders what more the ancient Islamic designers may have known about such things. I should perhaps add that the great Astronomer Johannes Kepler, in his Harmonice Mundi (vol.2), published in 1619, had independently produced a regular pentagon tiling that is much closer to my own tilings than anything that I have seen so far in this admittedly wonderful Islamic work."

Peter Lu, incidentally, has indicated that he agrees with everything that Penrose says here. The relationship between the Darb-i-Imam pattern and a Penrose tiling is subtle - much more so, it seems, than media reports of this work have tended to imply.

Tuesday, February 13, 2007

When research goes PEAR-shaped

I’ve got a column up today about the closure of the lab at Princeton that was investigating paranormal phenomena. Inevitably these things have to be chopped and changed before they appear, but here’s the pre-edited version. I feel scientists have no need to get too heavy about this kind of thing – if nothing else, it could serve as an interesting discussion point for students learning about how science is, and should be, done. To judge from the descriptions I’ve read of the PEAR lab and its ethos, we could probably do with a bit more of that in the scientific community. But why, oh why, do these people feel the need to come up with a ‘theory’ that is just a tangle of words? It is, in the time-honoured phrase, not even wrong. Sometimes you can’t help feeling that quantum theory has a lot to answer for.

There should be room for a bit of fringe science – but it's liable to suck you in.

It can't do a great deal for your self-esteem when media interest in your research project seems to catch fire only in response to the project's demise. But Robert Jahn and Brenda Dunne of the Princeton Engineering Anomalies Research (PEAR) laboratory probably aren't too bothered by that. For the attention generated by the closure of the PEAR lab – or rather, by the suggestion in the New York Times that this removes a source of ongoing embarrassment to the university – can surely only enhance the profile of Jahn and Dunne's longer-term vision of exploring "consciousness-related anomalies".

What "anomalies", exactly? With meticulous care, Jahn and Dunne avoid describing the phenomena they've studied using the more familiar words: telekinesis and telepathy. They have been studying people's ability to control machines and to transmit images from remote locations using only the power of the human mind. According to your perspective, that choice of language is a way of either promoting the paranormal by stealth or avoiding knee-jerk criticism.

The affair has inevitably ignited debates about the limits of academic freedom and responsibility. The NY Times quotes physicist Robert Park, a noted debunker of pseudo-science, as saying "It’s been an embarrassment to science, and I think an embarrassment for Princeton", while physicist Will Happer at Princeton says "I don’t believe in anything [Jahn] is doing, but I support his right to do it."

The university itself is trying to keep out of the fray. While stressing that the work done at PEAR was, like most other research at the university, privately funded, Princeton spokeswoman Cass Cliatt says that the lab's closure "was not a university decision". She adds that "the work at the lab was always understood by the university to be a personal interest of Professor Jahn's." Jahn, now an emeritus professor, is a former dean of the engineering school and an expert on electric propulsion.

Jahn and Dunne, a developmental psychologist, confirm that the decision was theirs. "We have accomplished what we originally set out to do 28 years ago, namely to determine whether these effects are real and to identify their major correlates", they say. With Jahn about to retire, "it is time for the next generation of scholars to take over." They hope that their work will be continued through the International Consciousness Research Laboratories, a network established in 1996 and now boasting members from 20 countries.

Some will surely share Park's view that this sort of thing gives science a bad name. But they'd be wrong to let the matter rest there, because PEAR's research reveals some interesting things about the practice and sociology of science.

The PEAR project offers a glimpse of what scientists can expect if they decide to dabble in what is conventionally termed the paranormal. Reasonable scientists cannot rule out the possibility of telekinesis, telepathy and other such 'anomalies' of the mind, simply because there are still such huge gaps in our understanding of consciousness and the brain. But most will say, again reasonably enough, that because all previous attempts to study these putative phenomena have failed to establish anything like a consistent, reproducible and unequivocal body of data, the chances of doing any serious science on the subject are minimal. As John Webster said of witchcraft in the seventeenth century, "There is no greater folly than to be very inquisitive and laborious to find out the causes of such a phenomenon as never had any existence."

In short, they regard effects like these as examples of what the American chemist Irving Langmuir famously called pathological science. Experience teaches us that these things, from N-rays to cold fusion and homeopathy, are will-o'-the-wisps too elusive for fruitful research, and probably imaginary if not downright fraudulent.

At least, this is the standard positivist position. But perhaps a stronger reason why scientists usually steer clear of such things is that it would be professional suicide not to. In a paper called 'The PEAR Proposition' [1], published in the Journal of Scientific Exploration (a journal produced by the Society for Scientific Exploration, of which Jahn and Dunne are both officers), the PEAR duo describe the hostility they experienced at Princeton when the lab was set up. They found "covert ridicule,… grudging concession of academic freedom, and… uneasiness in public discussion of the subject." Most scientists find this sort of work not outrageous but simply embarrassing.

Predictably, Jahn and Dunne found it virtually impossible to publish their findings. Their papers, many of which reported the effects of subjects' mental and emotional states on a computerized random number generator, were returned with the comment that they treated an "inappropriate topic". One journal editor said that he would consider the text only when the authors were able to transmit it telepathically.

It is no wonder, then, that those from the academic community who swim in these murky waters are older and already established in their mainstream disciplines. The 'leaders emeritus' of the Society for Scientific Exploration are Peter Sturrock and Laurence Fredrick, emeritus professors at Stanford and Virginia respectively, both with secure reputations in space physics. Not only have such people earned themselves a bit of academic slack (as well as the ability to attract funding) but they cannot simply be cold-shouldered in the way that younger researchers would be. For the same reason, Nobel laureate physicist Brian Josephson has been permitted for years to pursue research on 'mind-matter unification' at Cambridge University amid what one senses to be a mixture of unease and resignation from his colleagues.

'The PEAR Proposition' contains many poignant notes. It shows how awkwardly the habits of academia sit with discussion of the everyday world of human interactions – an unavoidable issue in this line of work. The authors' talk of the "superficial jocularities" of their lab celebrations and the "spontaneous repartee therein" evokes a deeply uncool avuncularity, while Jahn and Dunne hardly do justice to their evidently relaxed working relationship by saying that it "constituted a virtual complementarity of strategic judgment that has triangulated our operational implementation in a particularly productive fashion." It's hard to doubt that the PEAR lab, with its artwork on the walls, its parties and its stuffed animals, was a lot more fun than most research labs. That the attempts to capture this atmosphere in the language of academese are so stilted says a lot about how routinely successful this language is in stripping the research literature of its humanity.

But in the end, this fascinating document undermines itself. When Jahn and Dunne talk about "the tendency of the desired effects to hide within the underlying random data-substructures", and the way their volunteers would often produce "better scores" in their first series of tests, they echo the way that other researchers of pathological science, such as cold fusion and the 'memory of water', betrayed their lack of objectivity with talk of "good runs" and "bad runs".

And perhaps that is the real worry in looking for marginal and unreliable phenomena. Jahn and Dunne are commendably honest about the "bemusing" and "capricious" nature of their measurements, but that only adds to the impression that they decided they were engaged in a battle of wits with nature, who did her darnedest to hide the truth of the matter.

It would be a poorer world that castigates and shuns any researcher who dabbles in unorthodox or even positively weird ideas. But the PEAR experience should be sobering reading for anyone thinking of doing so: it suggests that these things suck you in. You start off with random number generators and unimpeachable experimental technique, and before long you are talking about "an ongoing two-way exchange between a primordial Source and an organizing Consciousness." You have been warned.

1. Jahn, R. G. & Dunne, B. J. J. Sci. Explor. 19, 195-245 (2005).

Friday, February 09, 2007

Sceptical of the sceptics

Here’s the pre-edited version of my March Lab Report column for Prospect. In the course of writing it, I found it necessary to look at some of what has been written and said by the well-known climate-change sceptics, such as those named in the article. This has been interesting. No, let me rephrase that. By a monumental effort of will, I have suppressed the fury, frustration, stupefaction and despair that their comments are apt to induce, and found a precarious way to treat them as ‘interesting’. What I mean by that is that these remarks, coming from people who are undoubtedly smart, are so ill-informed, illogical, prejudiced and emotional that it makes little sense to approach them without trying to get some perspective on what the real issues are. The comments here by Melanie Phillips are a case in point – they are so dripping with furious contempt and scorn that there can be little doubt this touches on something rather personal to her. I suspect that in many of these cases, the issue is that warnings of climate change threaten to compromise a libertarian approach to life, because they imply that there are some freedoms we enjoy now that might have to be curtailed in the future. But I’m guessing, and frankly I don’t find it a very appealing prospect to try to analyse these people.

It would be a quixotic task to try to point out all the errors in the climate-sceptic rants – that would take too long, it would achieve little, and it would be rather boring. What is most striking, however, is that very often these errors are so elementary that they show that these people actually have no interest in trying to understand climate science, or science in general, but just want to find flaws and parade them. That is why the climate-sceptic position is rather repetitive, even obsessive: you just know that they are going to reel out the ‘hockey stick’ argument, even though, first, the criticism of Michael Mann’s work is still very contentious, and second, and most significantly, it is laughable nonsense to imply that the whole notion of global warming rests on Mann’s ‘hockey stick’. Indeed, the sceptics’ arguments always depend on the notion that we assess global warming, and the anthropogenic contribution to it, by looking at global mean surface temperatures. It must be well over ten years ago now that scientists were explaining that the tell-tale sign of human influence is to be found in the fingerprint of regional differences in the warming trend (and the fingerprint is indeed there).

All the same, I cannot resist pointing out just a few of the idiocies in some of the sceptics’ arguments. This from Phillips has nothing to do with climate change, but tells us at once that this is not someone with more than a cartoon knowledge of the history of science. In lambasting scientists who have found higher than expected methane emissions from plants, she says:
“No doubt Galileo had the same problem when all medieval parchments agreed that the sun went round the earth; or Christopher Columbus, when all navigational maps agreed that the earth was flat.”
Yes, and newspapers print this stuff.
“People say ‘the ice caps are melting’. Well, some are; but others are growing.”
Hmm… aside from the north and south polar ice caps, where are these ‘others’?
“People say ‘the seas are rising’. Well, some are, but others are falling; and where they are rising, the cause often lies in the movement of land rather than any effects of climate change.”
Plain wrong, as simple as that.
“The earth’s climate is influenced by a vastly complex series of factors which interact with each other in literally millions of ways. Computer models, which have created global warming theory, simply cannot deal with all these factors. If over-simplified material is fed into the computers, over-simplified conclusions come out at the other end.”
Melanie Phillips has decided that computer models do not do a good job of modelling the climate system? She is an expert on this? She discounts the endless model verification checks that climate modellers run? On what grounds? Will the Daily Mail let her print any statement she likes? (Apparently it had no qualms in permitting her to say that most of the Earth’s atmosphere is water vapour.)

Nigel Lawson is an interesting case, not least because he used to control the UK’s purse strings, and so you’d like to hope this is a man with a clear head for facts. But if his reasoning on the economy was like his reasoning on climate change, that’s a truly scary thought. Here we have a marshalling of the ‘facts’ that is so selective and so distorted that you wonder just what passes for normal debate in Westminster. Oh, and the occasional lie, such as that the Royal Society tried “to prevent the funding of climate scientists who do not share its alarmist view”. (They did nothing of the sort; Bob Ward of the RS asked ExxonMobil when it intended to honour its promise to stop funding lobby groups who promote disinformation about climate change. There was no suggestion of stopping any funds to scientists.) Lawson’s comment that “the new priests are scientists (well rewarded with research grants for their pains) rather than clerics of the established religions” is about as close as I’ve seen a sceptic come to aping the stock phrases of cranks everywhere, but is also revealing in its implication that Lawson seems to find the idea of experts who know more than him offensive – a common affliction of the privileged and well educated non-scientist.

Alright, enough. I’ll start despairing again if I’m not careful. Here’s the column.

The latest report by the Intergovernmental Panel on Climate Change has come as near to blaming global warming on human activities as any scientists are likely to, while adding that its extent and consequences may be worse than we thought. The IPCC has previously been so (properly) tentative that even climate-change sceptics will have a hard time casting them as scaremongers. So where does this leave the sceptics now?

Many politicians and scientists are hoping they will now shut up. But that’s to make the mistake of thinking this is an argument over scientific evidence.

Consider this, for instance: “As most of you have heard many times, the consensus of climate scientists believes in global warming. Historically, the claim of consensus has been the first refuge of scoundrels; it is a way to avoid debate by claiming that the matter is already settled. Whenever you hear the consensus of scientists agrees on something or other, reach for your wallet, because you're being had. Let's be clear: the work of science has nothing whatever to do with consensus. Consensus is the business of politics. In science, consensus is irrelevant.”

This is from Michael Crichton – for the author of Jurassic Park has been giving high-level speeches about the ‘myth’ of climate change, and has even been summoned as an ‘expert witness’ on the matter by the US Senate. We need only concede that the Earth orbits the Sun and that humans are a product of Darwinian evolution to see that Crichton is not so much indulging in sophistry as merely saying something stupid. But because he is a smart fellow, stupidity can't account for it.

That's really the issue in tackling climate-change sceptics. There is no mystery about the way that some critics of the IPCC’s conclusions are simply protecting vested interests – ExxonMobil’s funding of groups that peddle climate-change disinformation, or the US government's extensive interference in federally funded climate science needs no more complex explanation than that. But this isn't true 'scepticism' – it is merely denial motivated by self-interest.

The real sceptics – strange bedfellows such as David Bellamy, Nigel Lawson, Melanie Phillips, a handful of real scientists, and Crichton – are a different phenomenon. For them there is a personal agenda involved. It’s less obvious what that might be than in, say, the comparable case of the ‘sceptics’ who denied the link between HIV and AIDS in the early 1990s. But what is immediately evident to the trained ear is that the sceptics’ denials carry the classic hallmarks of the crank – a belief that one's own reasoning betters that of professionals (even though the errors are usually elementary), a victim mentality, an instant change of tack when convincingly refuted, and (always a giveaway) a historically naive invocation of Galileo’s persecution. Of course, some of them simply tell outright lies too.

Bjorn Lomborg is a slightly different matter, since his objections focus less on denying climate change and more on denying the need to do anything about it. Nonetheless, although the economic arguments are complex, Lomborg's rhetoric – for example, suggesting that because climate change is less pressing than, say, AIDS, we should ignore it – is simplistic to a degree that again does not equate with his evident intelligence.

Economics is indeed going to be the future battleground. Yes, the argument goes, so climate change is happening, but that doesn’t mean we have to do anything to prevent it. Far better to adapt to it. This line has been pushed by heavier hitters than Lomborg, such as the eminent economists William Nordhaus at Yale and Partha Dasgupta at Cambridge, who reject the economic analysis of Nicholas Stern. The argument has some force in purely economic terms – which is perhaps not the foremost consideration if you live in coastal Bangladesh or on the Marshall Islands – but it will take a lot of either faith or foolishness to let economics alone guide us into uncharted waters where we cannot rule out mass famine, decimation of biodiversity and unforeseen positive feedbacks that accelerate the warming. That’s not what economics is for.

Yet economists are right to say that we need informed rather than knee-jerk responses, and that these will surely involve compromises rather than dreaming of arresting the current trends. But by turning now to economics, the celebrity sceptics will only betray their agenda. It’s time to seek more reasoned voices of caution.


How long before we witness the rise of the bird flu sceptic? (Matthew Parris has already staked his claim.) They could be right in one sense – according to Albert Osterhaus, chairman of the European Scientific Working Group on Influenza (ESWI), “Isolated outbreaks of avian influenza in Europe are a problem in terms of economy, animal welfare and biodiversity, but the threat to public health will probably be manageable.” But they’ll almost certainly be wrong in another. The H5N1 virus is all too often portrayed as a bolt from the blue, like a bit of really rotten luck. In truth it’s illustrative of a fact of life in the viral world, where, to put it bluntly, shit happens. Last November, leading US virologists Robert Webster and Elena Govorkova stated baldly that “there is no question that there will be another influenza pandemic some day.” The ESWI agrees, and warns that Europe is ill prepared for it. Even if H5N1 doesn’t get us (by mutating into a form readily transmitted between humans), another virus will. Flu viruses are legion, and unavoidable. Here, at least, is one threat for which mitigation, not prevention, is the only option. H5N1 seems less transmissible in warmer weather, but one hopes even climate sceptics won’t see that as a point in their favour.

Friday, February 02, 2007

Space wars

I have an editorial piece on news@nature on China’s recent missile destruction of a satellite. The commentary in the scientific press has had much to say about the possible hazards of the space debris this created, but less about the implications and significance of the act for space militarization. This is my take on that.

Published online: 24 January 2007; doi:10.1038/news070122-8

A dangerous game in space
Is China's satellite zapping simply old-fashioned sabre-rattling? Or is it a rational step to restrict the use of space weapons?

How do you reconcile China's shooting down of a satellite earlier this month with the subsequent insistence by its foreign ministry spokesman, Liu Jianchao, that China opposes military competition in space?

China has not yet explained its objectives. But the action makes perfect sense in the context of game theory, the conventional framework for analysing conflict and cooperation.

Put simply, if you want to spur nations to collaborate in curbing space militarization, good intentions are not enough. You need to show that you can get tough if the need arises.

A benign interpretation of China's action, then, is that it might accomplish what years of talking have not: force the United States to negotiate an international treaty on space weaponry. Does China have such a specific goal in mind? Or does it merely wish to leave its options open in dealing with rebellious Taiwan?

These are dangerous questions. But it is worth bearing in mind that the Chinese test is at least consistent with a completely rational approach to securing international enforcement of the peaceful use of space.

The classic scenario to explore cooperation between nations using game theory is the Prisoner's Dilemma. Here, two players are each given the choice of cooperating with each other or betraying the other person (defecting), with different rewards or penalties for each potential outcome. Mutual cooperation is more beneficial to both players than is mutual defection. But temptation gets in the way: the player who defects against a cooperator wins the biggest prize of all.

Although the rational strategy in a one-off round of the Prisoner's Dilemma is to defect, defection runs against self-interest in repeated rounds. Then the most successful way to play is often a 'tit-for-tat' strategy, in which a player will initially cooperate, then respond in kind to the other player's previous choice.
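The iterated game is simple enough to sketch in a few lines of Python. This is only an illustration (the strategy and function names are mine, not from any particular study), using the standard payoff values from the game-theory literature: 3 points each for mutual cooperation, 1 each for mutual defection, and 5 versus 0 when a defector exploits a cooperator.

```python
# Iterated Prisoner's Dilemma with the standard payoff matrix.
# 'C' = cooperate, 'D' = defect.
PAYOFF = {
    ('C', 'C'): (3, 3),
    ('C', 'D'): (0, 5),
    ('D', 'C'): (5, 0),
    ('D', 'D'): (1, 1),
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return opponent_history[-1] if opponent_history else 'C'

def always_defect(opponent_history):
    return 'D'

def always_cooperate(opponent_history):
    return 'C'

def play(strategy_a, strategy_b, rounds=10):
    """Total scores for two strategies over repeated rounds."""
    seen_by_a, seen_by_b = [], []  # each player's record of the opponent's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(seen_by_a)
        move_b = strategy_b(seen_by_b)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

# Tit-for-tat is exploited only once by a pure defector, then retaliates;
# against a cooperator it sustains mutual cooperation throughout.
print(play(tit_for_tat, always_defect))     # (9, 14)
print(play(tit_for_tat, always_cooperate))  # (30, 30)
```

The point the sketch makes concrete is the one in the text: a tit-for-tat player loses a little to a defector in the first round, but the defector forfeits the far larger gains of sustained cooperation.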

Robert Axelrod, the political scientist at the University of Michigan in Ann Arbor who pioneered the study of Prisoner's Dilemma strategies, points out that in the real world, players who follow the tit-for-tat strategy need to cultivate a reputation for toughness. Other players must know that provocation will be met with retaliation. In the case of China, the message could be that the militarization of space will not be prevented simply by condemning it, but rather by showing that you can and will play the game if necessary.

The real world is, of course, not a computer simulation, in which the agents are rational. Although game theory is studied in defence-policy circles, no one denies that it gives little more than a cartoon picture of international relations.

But in this case the model fits. China and Russia have been calling for years for a treaty to constrain space weapons. Not only have these calls been ignored by the United States, but last year the White House issued perhaps the most aggressive policy statement about space since the chilliest days of the Cold War. It stated baldly that the United States "will oppose the development of new legal regimes or other restrictions that seek to prohibit or limit US access to or use of space."

The document not only asserted the United States' right to pursue its "national interests" (including "foreign policy objectives") by preserving its "freedom of action" in space, but also threatened to deny adversaries the same freedom.

Is China an 'adversary'? Friendly overtures between NASA and the China National Space Administration might suggest otherwise, but NASA is not the Pentagon. The United States is not only still pursuing its national missile-defence programme but is also developing laser-based weapons that can knock out satellites from the ground or aircraft. It is hardly surprising, then, that anyone who is serious about stopping such a relentless and defiant pursuit of space weaponry through international agreement will deploy the bullish lessons of game theory.

This is not to say that the Chinese test is defensible. It is understandable that its neighbours, such as Japan and Australia, should be dismayed by it, and that Taiwan should regard it as an act of aggression. And there is every chance that the United States will interpret it as the opening shot of an arms race rather than as a summons to the negotiating table.

China might think that keeping a strong hand relies on not making its intentions too explicit. All the same, there is a difference between developing space weapons at the same time as opposing the militarization of space, and developing weapons while refusing to ban them. Which would you prefer?