Monday, June 04, 2007


Tendentious tilings

[This is my Materials Witness column for the July issue of Nature Materials]

Quasicrystal enthusiasts may have been baffled by a rather cryptic spate of comments and clarifications following in the wake of a recent article claiming that medieval Islamic artists had the tools needed to construct quasicrystalline patterns. That suggestion was made by Peter Lu at Harvard University and Paul Steinhardt at Princeton (Science 315, 1106; 2007). [See my previous post on 23 February 2007] But in a news article in the same issue, staff writer John Bohannon explained that these claims had already caused controversy, being allegedly anticipated in the work of crystallographer Emil Makovicky at the University of Copenhagen (Science 315, 1066; 2007).

The central thesis of Lu and Steinhardt is that Islamic artists used a set of five tile shapes, which they call girih tiles, to construct their complex patterns. These tiles can be used to make patterns of interlocking pentagons and decagons with the ‘forbidden’ symmetries characteristic of quasicrystalline metal alloys, in which these apparent symmetries, evident in diffraction patterns, are permitted by a lack of true periodicity.

Although nearly all of the designs evident on Islamic buildings of this time are periodic, Lu and Steinhardt found that those on a fifteenth-century shrine in modern-day Iran can be mapped almost perfectly onto another tiling scheme, devised by mathematician Roger Penrose, which does generate truly quasiperiodic patterns.
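
A brief aside for readers who want to see how such a tiling arises in practice: the standard computational recipe for Penrose’s rhombus tiling is ‘deflation’, in which two kinds of triangle (each half of one of the two Penrose rhombi) are repeatedly subdivided according to fixed rules, generating an arbitrarily large aperiodic patch. The Python sketch below implements that textbook subdivision step; it illustrates Penrose’s scheme only, not Lu and Steinhardt’s girih construction.

    import cmath, math

    PHI = (1 + math.sqrt(5)) / 2  # the golden ratio

    def subdivide(triangles):
        """One deflation step for the Penrose rhombus (P3) tiling.
        Each triangle is one of the two Robinson triangles (halves of
        the two Penrose rhombi); vertices are complex numbers."""
        result = []
        for kind, A, B, C in triangles:
            if kind == 0:
                P = A + (B - A) / PHI
                result += [(0, C, P, B), (1, P, C, A)]
            else:
                Q = B + (A - B) / PHI
                R = B + (C - B) / PHI
                result += [(1, R, C, A), (1, Q, R, B), (0, R, Q, A)]
        return result

    # Start from a decagonal 'wheel' of ten type-0 triangles at the origin...
    triangles = []
    for i in range(10):
        B = cmath.rect(1, (2 * i - 1) * math.pi / 10)
        C = cmath.rect(1, (2 * i + 1) * math.pi / 10)
        if i % 2 == 0:
            B, C = C, B  # mirror every second triangle so edges match
        triangles.append((0, 0j, B, C))

    # ...and deflate: the patch grows without ever repeating periodically.
    for step in range(5):
        triangles = subdivide(triangles)
    print(len(triangles), "triangles after 5 subdivisions")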

But in 1992 Makovicky made a very similar claim for a different Islamic tomb dating from 1197. Some accused Lu and Steinhardt of citing Makovicky’s work in a way that did not make this clear. The authors, meanwhile, admitted that they were unconvinced by Makovicky’s analysis and didn’t want to get into an argument about it.

The dispute has ruffled feathers. Science subsequently published a ‘clarification’ that irons out barely perceptible wrinkles in Bohannon’s article, while Lu and Steinhardt attempted to calm the waters with a letter in which they ‘gladly acknowledge’ earlier work (Science 316, 982; 2007). It remains to be seen whether that will do the trick, for Makovicky wasn’t the only one upset by their paper. Design consultant Jay Bonner in Santa Fe has also made previous links between Islamic patterns and quasicrystals.

Most provocatively, Bonner discusses the late-fifteenth-century Topkapi architectural scroll that furnishes the key evidence for Lu and Steinhardt’s girih scheme. Bonner points out how this scroll reveals explicitly the ‘underlying polygonal sub-grid’ used to construct the pattern it depicts. He proposes that the artists commonly used such a polygonal matrix, composed of tile-like elements, and demonstrates how these can create aperiodic space-filling designs.

Bonner does not mention quasicrystals, and his use of terms such as self-similarity and even symmetry does not always fit easily with that of physicists and mathematicians. But there’s no doubting that his work deepens the ‘can of worms’ that Bohannon says Lu and Steinhardt have opened.

All this suggests that the satellite conference of the forthcoming European Crystallographic Meeting in Marrakech this August, entitled ‘The enchanting crystallography of Moroccan ornaments’, might be more stormy than enchanting – for it includes back-to-back talks by Makovicky and Bonner.

Friday, May 25, 2007

Does this mean war?
[This is my latest article for muse@nature.com]

Cyber-attacks in the Baltic raise difficult questions about the threat of state-sponsored information warfare.

Is Estonia at war? Even the country’s leaders don’t seem sure. Over the past several weeks the Baltic nation has suffered serious attacks, but no one has been killed and it isn’t even clear who the enemy is.

That’s because the attacks have taken place in cyberspace. The websites of the Estonian government and political parties, as well as its media and banks, have been paralysed by tampering. Access to the sites has now been blocked to users outside the country.

This is all part of a bigger picture in which Estonia and its neighbour Russia are locked in bitter dispute sparked by the Soviet legacy. But the situation could provoke a reappraisal of what cyber-warfare might mean for international relations.

In particular, could it ever constitute a genuine act of war? “Not a single Nato defence minister would define a cyber-attack as a clear military action at present,” says the Estonian defence minister Jaak Aaviksoo — but he seems to doubt whether things should remain that way, adding that “this matter needs to be resolved in the near future.”

The changing face of war


When the North Atlantic Treaty was drafted in 1949, cementing the military alliance of NATO, it seemed clear enough what constituted an act of war, and how to respond. “An armed attack against one or more [member states] shall be considered an attack against them all,” the treaty declared. It was hard at that time to imagine any kind of effective attack that did not involve armed force. Occupation of sovereign territory was one thing (as the Suez crisis soon showed), but no one was going to mobilize troops in response to, say, economic sanctions or verbal abuse.

Now, of course, ‘war’ is itself a debased and murky term. Nation states seem ready to declare war on anything: drugs, poverty, disease, terrorism. Co-opting military jargon for quotidian activities is an ancient habit, but by doing so with such zeal, state leaders have blurred the distinctions.

Cyber-war is, however, something else again. Terrorists had already recognized the value of striking at infrastructures rather than people, as was clear from the IRA bombings of London’s financial district in the early 1990s, before the global pervasion of cyberspace. But now that computer networks are such an integral part of most political and economic systems, the potential effects of ‘virtual attack’ are vastly greater.

And these would not necessarily be ‘victimless’ acts of aggression. Disabling health networks, communications or transport administration could easily have fatal consequences. It is not scaremongering to say that cyberwar could kill without a shot being fired. And the spirit, if not currently the letter, of the NATO treaty must surely compel it to protect against deaths caused by acts of aggression.

Access denied

The attacks on Estonian websites, triggered by the government’s decision to relocate a Soviet-era war memorial, consisted of massed, repeated requests for information that overwhelmed servers and caused sites to freeze — a technique known as distributed denial of service. Estonian officials claimed that many of the requests came from computers in Russia, some of them in governmental institutions.

Russia has denied any state involvement, and so far European Union and NATO officials, while denouncing the attacks as “unacceptable” and “very serious”, have not accused the Kremlin of orchestrating the campaign.

The attack is particularly serious for Estonia because of its intense reliance on computer networks for government and business. It boasts a ‘paperless government’ and even its elections are held electronically. Indeed, information technology is one of Estonia’s principal strengths – which is why it was able to batten down the hatches so quickly in response to the attack. In late 2006, Estonia even proposed to set up a cyber-defence centre for NATO.

There is nothing very new about cyber-warfare. In 2002 NATO recognized it as a potential threat, declaring an intention to “strengthen our capabilities to defend against cyber attacks”. In the United States, the CIA, the FBI, the Secret Service and the Air Force all have their own anti-cyber-terrorism squads.

But most of the considerable attention given to cyber-attack by military and defence experts has so far focused on the threat posed by individual aggressors, from bored teenage hackers to politically motivated terrorists. This raises challenges of how to make the web secure, but does not really pose new questions for international law.

The Estonia case may change that, even if (as it seems) there was no official Russian involvement. Military attacks often now focus on the use of armaments to disable communications infrastructure, and it is hard to see how cyber-attacks are any different. The United Nations Charter declares its intention to prevent ‘acts of aggression’, but doesn’t define what those are — an intentional decision so as not to leave loopholes for aggressors, which now looks all the more shrewd.

Irving Lachow, a specialist on information warfare at the National Defense University in Washington, DC, agrees that the issue is unclear at present. “One of the challenges here is figuring out how to classify a cyber-attack”, he says. “Is it a criminal act, a terrorist act, or an act of war? It is hard to make these determinations but important because different laws apply.” He says that the European Convention on Cyber Crime probably wouldn’t apply to a state-sponsored attack, and that while there are clear UN policies regarding ‘acts of war’, it’s not clear what kind of cyber-attack would qualify. “In my mind, the key issues here are intent and scope”, he says. “An act of war would try to achieve a political end through the direct use of force, via cyberspace in this case.”

And what would be the appropriate response to state-sanctioned cyber-attack? The use of military force seems excessive, and could in any case be futile. Some think that the battle will have to be joined online – but with no less a military approach than in the flesh-and-blood world. Computer security specialist Winn Schwartau has called for the creation of a ‘Fourth Force’, in addition to the army, navy, and air force, to handle cyberspace.

That would be to regard cyberspace as just another battleground. But perhaps instead this should be seen as further reason to abandon traditional notions about what warfare is, and to reconsider what, in the twenty-first century, it is now becoming.

Wednesday, May 16, 2007

There’s no such thing as a free fly
[This is the pre-edited version of my latest article for muse@nature.com]

Neuroscience can’t show us the source of free will, because it’s not a scientific concept.

Gluing a fly’s head to a wire and watching it trying to fly sounds more like the sort of experiment a naughty schoolboy would conduct than one that turns out to have philosophical and legal implications.

But that’s the way it is for the work reported this week by a team of neurobiologists in the online journal PLoS ONE [1]. They say their study of the ‘flight’ of a tethered fly reveals that the fly’s brain has the ability to be spontaneous – to make decisions that aren’t predictable responses to environmental stimuli.

The researchers think this might be what underpins the notorious cussedness of laboratory animals, wryly satirized in the so-called Harvard Law of Animal Behavior: “Under carefully controlled experimental circumstances, an animal will behave as it damned well pleases.”

But in humans, this apparently volitional behaviour seems all but indistinguishable from what we have traditionally called free will. In other words, the work seems to imply that even fruit-fly brains are hard-wired to display something we might as well denote free will.

The flies are tethered inside a blank white cylinder, devoid of all environmental clues about which direction to take. If the fly is nothing but an automaton impelled hither and thither by external inputs, then it would in this circumstance be expected to fly in purely random directions. Although the wire stops the fly from actually moving, its attempts to do so create a measurable tug on the wire that reveals its ‘intentions’.

Björn Brembs of the Free University of Berlin and his colleagues found that these efforts aren’t random. Instead, they reveal a pattern that, for an unhindered fly, would alternate localized buzzing around with occasional big hops.

This kind of behaviour has been seen in other animals (and in humans too), where it has been interpreted as a good foraging strategy: if a close search of one place doesn’t bring results, you’re better off moving far afield and starting afresh.
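
The pattern in question resembles what statisticians call a Lévy flight: many short steps punctuated by rare long hops. The toy Python sketch below (my own illustration with invented parameters, not the authors’ analysis) shows how a heavy-tailed distribution of step lengths carries a searcher much further afield than an ordinary random walk with the same headings.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10000
    angles = rng.uniform(0, 2 * np.pi, n)  # random headings, shared by both walks

    # Brownian-style search: step lengths cluster around a typical scale.
    brownian_steps = rng.exponential(1.0, n)

    # Levy-style search: mostly short steps, occasionally enormous hops
    # (heavy-tailed Pareto-distributed step lengths).
    levy_steps = rng.pareto(1.5, n) + 1.0

    def net_displacement(steps):
        # Straight-line distance from start to finish of the walk.
        x = (steps * np.cos(angles)).sum()
        y = (steps * np.sin(angles)).sum()
        return float(np.hypot(x, y))

    print("Brownian net displacement:", round(net_displacement(brownian_steps)))
    print("Levy-like net displacement:", round(net_displacement(levy_steps)))
    print("longest single Levy hop:", round(levy_steps.max()))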

But this was thought to rely on feedback from the environment, and not to be intrinsic to the animals’ brains. Brembs and colleagues say that in contrast there exists a ‘spontaneity generator’ in the flies’ brains which does not depend on external information in a determinate way.

Is that really ‘free will’, though? No one is suggesting that the flies are making conscious choices; the idea is simply that this neural ‘spontaneity circuit’ is useful in evolutionary terms, and so has become hard-wired into the brain.

But it could, the researchers say, be a kind of precursor to the mental wiring of humans that would enable us to evade the prompts of our own environmentally conditioned responses and ‘make up our own minds’ – to exercise what is commonly interpreted as free will. “If such circuits exist in flies, it would be unlikely that they do not exist in humans, as this would entail that humans were more robot-like than flies”, Brembs says.

These neural circuits mean that you can know everything about an organism’s genes and environment yet still be unable to anticipate its caprices. If that’s so – and the researchers now intend to search for the neural machinery involved – this adds a new twist to the current debate that neuroscience has provoked about human free will.

Some neuroscientists have argued that, as we become increasingly informed about the way our behaviour is conditioned by the physical and chemical makeup of our brains, the notion of legal responsibility will be eroded. Criminals will be able to argue their lack of culpability on the grounds that “my brain made me do it”.

While right-wing and libertarian groups fulminate at the idea that this will hinder the law’s ability to punish and will strip the backbone from the penal system, some neuroscientists feel that it will merely change its rationale, making it concerned less with retribution and more with utilitarian prevention and social welfare. According to psychologists Joshua Greene and Jonathan Cohen of Princeton University, “Neuroscience will challenge and ultimately reshape our intuitive sense(s) of justice” [2].

If neuroscience indeed threatens free will, some of the concerns of the traditionalists are understandable. It’s hard to see how notions of morality could survive a purely deterministic view of human nature, in which our actions are simply automatic responses to external stimuli and free will is an illusion spun from our ignorance about cause and effect. And it is a short step from such determinism to the pre-emptive totalitarianism depicted in the movie Minority Report, where people are arrested for crimes they have yet to commit.

But while this ‘hard’ mechanical determinism may have made sense to political philosophers of the Enlightenment – it was the basis of Thomas Hobbes’ theory of government, for example – it is merely silly today, and for a number of reasons.

First, it places its trust in a linear, Cartesian mechanics of cogs and levers that clearly has nothing to do with the way the brain works. If nothing else, the results of Brembs and colleagues show that even the fly’s brain is highly nonlinear, like the weather system, and not susceptible to precise prediction.

Second, this discussion of ‘free will’ repeats the old canard, apparently still dear to the hearts of many neuroscientists, evolutionary biologists and psychologists, that our behaviour is governed by the way our minds work in isolation. But as neuroscientists Michael Gazzaniga and Megan Steven have pointed out [3], we act in a social context. “Responsibility is a social construct and exists in the rules of society”, they say. “It does not exist in the neuronal structures of the brain”.

This should be trivially obvious, but is routinely overlooked. Other things being equal, violent crime is frequently greater where there is socioeconomic deprivation. This doesn’t make it a valid defence to say ‘society made me do it’, but it shows that the interactions between environment, neurology and behaviour are complex and ill-served by either neurological determinism or a libertarian insistence on untrammelled ‘free will’ as the basis of responsibility and penal law.

The fact is that ‘free will’ is (like life and love) one of those culturally useful notions that turn into shackles when we try to make them ‘scientific’. That’s why it is unhelpful to imply that the brains of flies or humans might contain a ‘free will’ module simply because they have a capacity to scramble the link between cause and effect. Free will is a concept for poets and novelists, and, if it keeps them happy, for philosophers and moralists. In science and politics, it deserves no place.

Reference
1. Maye, A. et al. PLoS ONE 2(5), e443 (2007).
2. Greene, J. & Cohen, J. Phil. Trans. R. Soc. Lond. B 359, 1775–1785 (2004).
3. Gazzaniga, M. S. & Steven, M. S. Sci. Am. MIND April 2005.


Philosophers, scientists and writers on free will

“The will cannot be called a free cause, but only necessary…. Things could have been produced by God in no other manner and in no other order than that in which they have been produced.”
Baruch Spinoza, Ethics

“Whatever concept one may hold, from a metaphysical point of view, concerning the freedom of the will, certainly its appearances, which are human actions, like every other natural event are determined by universal laws.”
Immanuel Kant, On History

“As a matter of fact, if ever there shall be discovered a formula which shall exactly express our wills and whims; if there ever shall be discovered a formula which shall make it absolutely clear what those wills depend upon, and what laws they are governed by, and what means of diffusion they possess, and what tendencies they follow under given circumstances; if ever there shall be discovered a formula which shall be mathematical in its precision, well, gentlemen, whenever such a formula shall be found, man will have ceased to have a will of his own—he will have ceased even to exist.”
Fyodor Dostoevsky, Notes from the Underground

“Free will is for history only an expression connoting what we do not know about the laws of human life.”
Leo Tolstoy, War and Peace

“There once was a man who said ‘Damn!’
It is borne in upon me I am
An engine that moves
In predestinate grooves
I’m not even a bus, I’m a tram.”
Maurice Evan Hare, 1905

“We cannot prove… that human behaviour… is fully determined, but the position becomes more plausible as facts accumulate.”
B. F. Skinner, About Behaviorism

“Free will, as we ordinarily understand it, is an illusion. However, it does not follow… that there is no legitimate place for responsibility.”
Joshua Greene & Jonathan Cohen, 2004

Monday, May 14, 2007

Should we get engaged?
[This is the pre-edited version of my Crucible column for the June issue of Chemistry World.]

In 2015 the BBC broadcast a documentary called ‘Whatever happened to nanotechnology?’ Remember the radical predictions being made in 2006, it asked, such as curing blindness? Well, things didn’t turn out to be so simple. On the other hand, nor have the forecasts of nano-doom come to pass. Instead, there’s simply been plenty of solid, incremental science that has laid the groundwork for a brighter technological future.

This scenario, imagined in a European Union working paper, “Strategy for Communication Outreach in Nanotechnology”, sounds a little unlikely, not least because television is increasingly less interested in stories with such anodyne conclusions. But this, the paper suggests, is the optimistic outcome: one where nanotech has not been derailed by inept regulation, industrial mishaps and public disenchantment.

The object of the exercise is to tell the European Commission how to promote “appropriate communication in nanotechnology.” The present working paper explains that “all citizens and stakeholders, in Europe and beyond, are welcome to express comments, opinions and suggestions by end June 2007”, which will inform a final publication. So there’s still time if you feel so inclined.

One of the striking things about this paper is that it implies one now has to work frightfully hard, using anything from theatre to food, to bridge the divide between science and the public – and all, it seems, so that the public doesn’t pull the plug through distrust. If that’s really so, science is in deep trouble. But it may be in the marketplace, not the research lab, that public perception really holds sway.

What, however, is “appropriate communication” of technology?

Previous EU documents have warned that nanotechnology is poorly understood and difficult to grasp, and that its benefits are tempered by risks that need to be openly stated and investigated. “Without a serious communication effort,” one report suggests, “nanotechnology innovations could face an unjust negative public reception. An effective two-way dialogue is indispensable, whereby the general public’s views are taken into account and may be seen to influence [policy] decisions”.

This is, of course, the current mantra of science communication: engagement, not education. The EU paper notes that today’s public is “more sceptical and less deferential”, and that therefore “instead of the one-way, top down process of seeking to increase people’s understanding of science, a two-way iterating dialogue must be addressed, where those seeking to communicate the wonders of their science also listen to the perceptions, concerns and expectations of society.”

And so audiences are no longer lectured by a professor but discuss the issues with panels that include representatives from Greenpeace. There’s much that is productive and progressive in that. But in his bracingly polemical book The March of Unreason (OUP, 2005), Lord Dick Taverne challenges its value and points out that ‘democracy’ is a misplaced ideal in science. “Why should science be singled out as needing more democratic control when other activities, which could be regarded as equally ‘elitist’ and dependent on special expertise, are left alone?” he asks. Why not ‘democratic art’?

Taverne’s critique is spot-on. There now seems to be no better sport than knocking ‘experts’ who occasionally get things wrong, eroding the sense that we should recognize expertise at all. This habitual scepticism isn’t always the result of poor education – or rather, it is often the result of an extremely expensive but narrow one. The deference of yore often led to professional arrogance; but today’s universal scepticism makes arrogance everyone’s prerogative.

Another danger with ‘engagement’ is that it tends to provide platforms for a narrow spectrum of voices, especially those with axes to grind. The debate over climate change has highlighted the problems of insisting on ‘balance’ at the expense of knowledge or honesty.

Nanotechnology, however, has been one area where ‘public engagement’ has often been handled rather well. A three-year UK project called Small Talk hosted effective public debates and discussions on nanotechnology while gathering valuable information about what people really knew and believed. Its conclusions were rather heartening. People’s attitudes to nanotechnology are not significantly different from their attitudes to any new technology, and are generally positive. People are less concerned about specific risks than about the regulatory structures that contain them. The public perception of risk, however, continues to be a pitfall: many now think that a ‘safe’ technology is one for which all risks have been identified and eliminated. But as Taverne points out, such a zero-risk society “would be a paradise only for lawyers.”

The EU’s project is timely, however, for the UK’s Council for Science and Technology, an independent advisory body to the government, has just pronounced in rather damning terms on the government’s efforts to ‘engage’ with the social and ethical aspects of nanotech. Their report looks at progress on this issue since the publication of a nanotech review in 2004 prepared for the government by the Royal Society and the Royal Academy of Engineering. “The report led to the UK being seen as a world leader in its engagement with nanotechnologies”, it says. “However, today the UK is losing that leading position.”

It attributes this mainly to a failure to institute a coherent approach to the study of nano-toxicology, the main immediate hazard highlighted by the 2004 review. “In the past five years, only £3m was spent on toxicology and the health and environmental impacts of nanomaterials”, it says, and “there is as yet little conclusive data concerning the long-term environmental fate and toxicity of nanomaterials.”

Mark Welland, one of the expert advisers on this report, confirms that view. “The 2004 recommendations have been picked up internationally”, he says, “but the UK government has done almost nothing towards toxicology.” Like others, he fears that inaction could open the doors to a backlash like that against genetically modified organisms or the MMR vaccine.

If that’s so, maybe we do need good ideas about how to communicate. But that’s only part of an equation that must also include responsible industrial practice, sound regulation, broad vision and, not least, good research.

Prospects for the LHC
[This is my pre-edited Lab Report column for the June issue of Prospect.]

Most scientific instruments are doors to the unknown; that’s been clear ever since Robert Hooke made exquisite drawings of what he saw through his microscope. They are invented not to answer specific questions – what does a flea look like up close? – but for open-ended study of a wide range of problems. This is as true of the mercury thermometer as it is of the Hubble Space Telescope.

But the Large Hadron Collider (LHC), under construction at the European centre for high-energy physics (CERN) in Geneva, is different. Particle physicists rightly argue that, because it will smash subatomic particles into one another with greater energy than ever before, it will open a window on a whole new swathe of reality. But the only use of the LHC that anyone ever hears or cares about is the search for the Higgs boson.

This is pretty much the last missing piece of the so-called Standard Model of fundamental physics: the suite of particles and their interactions that explains all known events in the subatomic world. The Higgs boson is the particle associated with the Higgs field, which pervades all space and, by imposing a ‘drag’ on other particles, gives them their mass. (In the Standard Model all the fields that create forces have associated particles: electromagnetic fields have photons, the strong nuclear force has gluons.)

To make a Higgs boson, you need to release more energy in a particle collision than has so far been possible with existing colliders. But the Tevatron accelerator at Fermilab near Chicago comes close, and could conceivably still glimpse the Higgs before it is shut down in 2009. No one wants to admit that this is a race, but it can hardly be doubted – and Fermilab would love to spot the Higgs first.

Which makes it all the more awkward that components supplied by Fermilab for the LHC have proven to be faulty – most recently, a huge magnet that shifted and ruptured a pipe. Fermilab admits to embarrassment at the ‘oversight’, but it has set the rumour mills grinding. For this and (primarily) other reasons, the LHC now seems unlikely to make its first test run at the end of this year. Among other things, it needs to be refrigerated to close to absolute zero, which can’t be done in a hurry.

Extravagant promises can only be sustained for so long without delivery, and so the delays could test public sympathy, which has so far been very indulgent of the LHC. As a multi-billion-dollar instrument that has only one really big question in sight, the supercollider is already in a tight spot: everyone thinks they know the answer already (the Higgs exists), and that may yet be confirmed before the LHC comes online. But this is a universal problem for high-energy physics today, where all the major remaining questions demand unearthly energies. There’s a chance that the LHC may turn up some surprises – evidence of extra dimensions, say, or of particles that lie outside the Standard Model. But the immense and expensive technical challenges involved in exploring every theoretical wrinkle mean that new ideas cannot be broached idly. And arguably science does not flourish where the agenda must be set by consensus and there is no room left for play.

*****

The idea that the UK has lost a ‘world lead’ in nanotechnology, suggested recently in the Financial Times, raised the question of when the UK ever had it. The headline was sparked by a report released in March by the Council for Science and Technology, a government advisory body. But Mark Welland, a nanotech specialist at Cambridge University and one of the report’s expert contributors, says that wires got crossed: the report’s criticisms were concerned primarily with the social, environmental and ethical aspects of nanotech. These were explored in depth in an earlier review of nanotechnology, the science of the ultrasmall, conducted by the Royal Society and the Royal Academy of Engineering and published in 2004.

That previous report highlighted the potential toxicity of nanoparticles – tiny grains of matter, which are already being used in consumer products – as one of the most pressing concerns, and recommended that the government establish and fund a coherent programme to study it. Welland says that some of those suggestions have been picked up internationally, but “nothing has happened here.” The 2004 report created an opportunity for the UK to lead the field in nano-toxicology, he says, and this is what has now been squandered.

What of the status of UK nanotech more generally? Welland agrees that it has never been impressive. “There’s no joined-up approach, and a lack of focus and cohesion between the research councils. Other European countries have much closer interaction between research and commercial exploitation. And the US and Japan have stuck their necks out a lot further. Here we have just a few pockets of stuff that’s really good.”

The same problems hamstrung the UK’s excellence in semiconductor technology in the 1970s. But there are glimmers of hope: Nokia has just set up its first nanotech research laboratory in Cambridge.

*****

As the zoo of extrasolar planets expands – well over 100 are now known – some oddballs are bound to appear. Few will be odder than HD 149026b, orbiting its star in the Hercules constellation 260 light years away. Its surface temperature of 2,050 °C is about as hot as a small star, while it is blacker than charcoal and may glow like a giant ember. Both quirks are unexplained. One possibility is that the pitch-black atmosphere absorbs every watt of starlight and then instantly re-emits it – strange, but feasible. At any rate, the picture of planetary diversity gleaned from our own solar system is starting to look distinctly parochial.

Wednesday, May 02, 2007

PS This is all wrong

So there you are: your paper is written, and you’ve got it accepted in the world’s leading physics journal, and it has something really interesting to say. You’ve done the calculations and they just don’t match the observations. What this implies is dramatic: we’re missing a crucial part of the puzzle, some new physics, namely a fifth fundamental force of nature. Wow. OK, so that’s a tentative conclusion, but it’s what the numbers suggest, and you’ve been suitably circumspect in reporting it, and the referees have given the go-ahead.

Then, with the page proofs in hand, you decide to just go back and check the observations, which need a bit of number-crunching before the quantitative result drops out. And you find that the people who reported this originally haven’t been careful enough, and their number was wrong. When you recalculate, the match with conventional theory is pretty good: there’s no need to invoke any new physics after all.

So what do you do?

I’d suggest that what you don’t do is what an author has just done: add a cryptic ‘note in proof’ and publish anyway. Cryptic in that what it doesn’t say is ‘ignore all that has gone before: my main result, as described in the abstract, is simply invalid’. Cryptic in that it refers to the revision of the observed value, but says this is in good agreement ‘with the predictions above’ – by which you mean, not the paper’s main conclusions, but the ‘predictions’ using standard theory that the paper claims are way off beam. Cryptic in that this (possibly dense) science writer had to read it several times before sensing something was badly wrong.

In fact, I’d contend that you should ideally withdraw the paper. Who gains from publishing a paper that, if reported accurately, ends with a PS admitting it is wrong?

True, this is all a little complex. For one thing, it could be a postgrad’s thesis work at stake. But no one gets denied a PhD because perfectly good theoretical work turns out to be invalidated by someone else’s previous mistake. And what does a postgrad really gain by publishing a paper making bold claims in a prominent journal that ends by admitting it is wrong?

True, the work isn’t useless – as the researcher concerned argued when I contacted him (having written the story and needing only to add some quotes), the discrepancy identified in the study is what prompted a re-analysis of the data that brought the previous error to light. But you have a preprint written that reports the new analysis; surely you can just add to that a comment alluding to this false trail and the impetus it provided. In fact, your current paper is itself already on the preprint server – you just need to cite that. The whole world no longer needs to know.

No, this is a rum affair. I’m not sure that the journal in question really knew what it was publishing – that the ‘note added in proof’ invalidated the key finding. If it did, I’m baffled by the decision. And while I’m miffed at my wasted time, the issue has much more to do with propriety. Null results are one thing, but this is just clutter. I realize it must be terribly galling to find that your prized paper has been rendered redundant on the eve of publication. But that’s science for you.

Friday, April 20, 2007

Physicists start saying farewell to reality
Quantum mechanics just got even stranger
[This is my pre-edited story for Nature News on a paper published this week, which even this reserved Englishman must acknowledge to be deeply cool.]

There’s only one way to describe the experiment performed by physicist Anton Zeilinger and his colleagues: it’s unreal, dude.

Measuring the quantum properties of pairs of light particles (photons) pumped out by a laser has convinced Zeilinger that “we have to give up the idea of realism to a far greater extent than most physicists believe today.”

By realism, what he means is the idea that objects have specific features and properties: that a ball is red, that a book contains the works of Shakespeare, that custard tastes of vanilla.

For everyday objects like these, realism isn’t a problem. But for objects governed by the laws of quantum mechanics, such as photons or subatomic particles, it may make no sense to think of them as having well defined characteristics. Instead, what we see may depend on how we look.

Realism in this sense has been under threat ever since the advent of quantum mechanics in the early twentieth century. This seemed to show that, in the quantum world, objects are defined only fuzzily, so that all we can do is to adduce the probabilities of their possessing particular characteristics.

Albert Einstein, one of the chief architects of quantum theory, could not believe that the world was really so indeterminate. He supposed that there was a deeper level of reality yet to be uncovered: so-called ‘hidden variables’ that specified any object’s properties precisely.

Allied to this assault on reality was the apparent prediction of what Einstein called ‘spooky action at a distance’: disturbing one particle could instantaneously determine the properties of another particle, no matter how far away it is. Such interdependent particles are said to be entangled, and this action at a distance would violate the principle of locality: the idea that only local events govern local behaviour.

In the 1960s the Irish physicist John Bell showed how to put locality and realism to the test. He deduced that, if both assumptions hold, certain combinations of experimentally measurable quantities for entangled quantum particles such as photons must stay within a fixed bound – a relationship now known as Bell’s inequality. The experiments were carried out in the ensuing two decades, and they showed that Bell’s inequality is violated.
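
For the numerically inclined, the conflict can be made concrete in a few lines. The sketch below (my illustration, using the standard CHSH form of Bell’s inequality and the textbook analyser angles) computes the quantum prediction for a maximally entangled photon pair and compares it with the bound that any local realistic theory must respect.

    import numpy as np

    # Quantum correlation between polarization measurements on a maximally
    # entangled photon pair, with analysers at angles a and b (the factor
    # of 2 reflects the 180-degree periodicity of polarization).
    def E(a, b):
        return -np.cos(2 * (a - b))

    # Textbook analyser settings that maximize the quantum CHSH value.
    a1, a2 = 0.0, np.pi / 4
    b1, b2 = np.pi / 8, 3 * np.pi / 8

    S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))
    print("quantum prediction: S =", round(S, 3))  # 2.828 = 2*sqrt(2)
    print("local realism requires: S <= 2")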

This means that either realism or locality, or both, fails to apply in the quantum world. But which of these cases is it? That’s what Zeilinger, based at the University of Vienna, and his colleagues have set out to test [1].

They have devised another inequality, comparable to Bell’s, that should be satisfied if quantum mechanics is non-local but ‘realistic’. “It’s known that you can save realism if you kick out locality”, Zeilinger says.

The experiment involves making pairs of entangled photons and measuring a quantum property of each of them called the polarization. But whereas the tests of Bell’s inequality measured the so-called ‘linear’ polarization – crudely, whether the photons’ electromagnetic fields oscillate in one direction or another – Zeilinger’s experiment looks at a different sort of polarization, called elliptical polarization, for one of the photons.

If the quantum world can be described by non-local realism, quantities derived from these polarization measurements should satisfy this new inequality. But Zeilinger and colleagues found that they don’t.

This doesn’t rule out all possible non-local realistic models, but it does exclude an important subset of them. Specifically, it shows that if you have a group of photons all with independent polarizations, then you can’t ascribe specific polarizations to each. It’s rather like saying that in a car park it is meaningless to imagine that particular cars are blue, white or silver.

If the quantum world is not realistic in this sense, then how does it behave? Zeilinger says that some of the alternative non-realist possibilities are truly weird. For example, it may make no sense to imagine ‘counterfactual determinism’: what would happen if we’d made a different measurement. “We do this all the time in daily life”, says Zeilinger – for example, imagining what would happen if we’d tried to cross the road when that truck was coming.

Or we might need to allow the possibility of present actions affecting the past, as though choosing to read a letter or not affects what it says.

Zeilinger hopes his work will stimulate others to test such possibilities. “I’m sure our paper is not the end of the road”, he says. “But we have a little more evidence that the world is really strange.”

Reference
1. Gröblacher, S. et al. Nature 446, 871–875 (2007).

Tuesday, April 17, 2007


Tales of the expected

[This is the pre-edited version of my latest Muse article for Nature online news.]

A recent claim of water on an extrasolar planet raises broader questions about how science news is reported.

“Scientists discover just what they expected” is not, for obvious reasons, a headline you see very often. But it could serve for probably a good half of the stories reported in the public media, and would certainly have been apt for the recent reports of water on a planet outside our solar system.

The story is this: astronomer Travis Barman of the Lowell Observatory in Flagstaff, Arizona, has claimed to find a fingerprint of water vapour in the light from a Sun-like star 150 light years away as it passes through the atmosphere of the star’s planet HD 209458b [T. Barman, Astrophys. J. in press (2007); see the paper here].

The claim is tentative and may be premature. But more to the point, at face value it confirms precisely what was expected for HD 209458b. Earlier observations of this Jupiter-sized planet had failed to see signs of water – but if it were truly absent, something would be seriously wrong with our understanding of planetary formation.

The potential interest of the story is that water is widely considered by planetary scientists to be the prerequisite for life. But if it’s necessary, it is almost certainly not sufficient. There is water on most of the other planets in our solar system, as well as on several of their moons and indeed in the atmosphere of the Sun itself. But as yet there is no sign of life on any of them.

The most significant rider is that to support life as we know it, water must be in the liquid state, not ice or vapour. That may be the case on Jupiter’s moons Europa and Callisto, as it surely once was (and may still be, sporadically) on Mars. But in fact we don’t even know for sure that water is a necessary condition for life: there is no reason to think, apart from our unique experience of terrestrial life, that other liquid solvents could not sustain living systems.

All of this makes Barman’s discovery – which he reported with such impeccable restraint that it could easily have gone unnoticed – intriguing, but very modestly so. Yet it has been presented as revelatory. “There may be water beyond our solar system after all”, exclaimed the New York Times. “First sign of water found on an alien world”, said New Scientist (nice to know that, in defiance of interplanetary xenophobia, Martians are no longer aliens).

As science writers are dismayingly prone to saying sniffily “oh, we knew that already”, I’m hesitant to make too much of this. It’s tricky to maintain a perspective on science stories without killing their excitement. But the plain fact is that there is water in the universe almost everywhere we look – certainly, it is a major component of the vast molecular clouds from which stars and planets condense.

And so it should be, given that its component atoms hydrogen and oxygen are respectively the most abundant and the third most common in the cosmos. Relatively speaking, ours is a ‘wet’ universe (though yes, liquid water is perhaps rather rare).

The truth is that scientists work awfully hard to verify what lazier types might be happy to take as proven. Few doubted that Arthur Eddington would see, in his observations of a solar eclipse in 1919, the bending of light predicted by Einstein’s theory of general relativity. But it would seem churlish in the extreme to begrudge the headlines that discovery generated.

Similarly, it would be unfair to suggest that we should greet the inevitable sighting of the Higgs boson (the so-called ‘God’ particle thought to give other particles their mass) with a shrug of the shoulders, once it turns up at the billion-dollar particle accelerator constructed at CERN in Geneva.

These painstaking experiments are conducted not so that their ‘success’ produces startling front-page news but because they test how well, or how poorly, we understand the universe. Both relativity and quantum mechanics emerged partly out of a failure to find the expected.

In the end, the interest of science news so often resides not in discovery but in context: not in what the experiment found, but in why we looked. Barman’s result, if true, tells us nothing we did not know before, except that we did not know it. Which is why it is still worth knowing.

Wednesday, April 04, 2007


Violin makers miss the best cuts
[This is the pre-edited version of my latest article for Nature’s online news. For more on the subject, I recommend Ulrike Wegst’s article “Wood for Sound” in the American Journal of Botany 93, 1439 (2006).]

Traditional techniques fail to select wood for its sound


Despite their reputation as master craftspeople, violin makers don’t choose the best materials. According to research by a team based in Austria, they tend to pick their wood more for its looks than for its acoustic qualities.

Christoph Buksnowitz of the University of Natural Resources and Applied Life Sciences in Vienna and his coworkers tested wood selected by renowned violin makers (luthiers) to see how beneficial it was to the violin’s sound. They found that the luthiers were generally unable to identify the woods that performed best in laboratory acoustic tests [C. Buksnowitz et al. J. Acoust. Soc. Am. 121, 2384–2395 (2007)].

That was admittedly a tall order, since the luthiers had to make their selections just by visual and tactile inspection, without measuring instruments. But this is normal practice in the trade: the instrument-makers tend to depend on rules of thumb and subjective impressions when deciding which pieces of wood to use. “Some violin makers develop their instruments in very high-tech ways, but most seem to go by design criteria optimized over centuries of trial and error”, says materials scientist Ulrike Wegst of the Max Planck Institute for Metals Research in Stuttgart, Germany.

Selecting wood for musical instruments has been made a fine art over the centuries. For a violin, different types of wood are traditionally employed for the different parts of the instrument: ebony and rosewood for the fingerboard, maple for the bridge, and spruce for the soundboard of the body. The latter amplifies the resonance of the strings, and accounts for much of an instrument’s tonal qualities.

Buksnowitz and colleagues selected 84 samples of instrument-quality Norway spruce, one of the favourite woods for violin soundboards. They presented these to 14 top Austrian violin makers in the form of boards measuring 40 by 15 cm. The luthiers were asked to grade the woods according to acoustics, appearance, and overall suitability for making violins.

While the luthiers had to rely on their senses and experience, using traditional techniques such as tapping the woods to assess their sound, the researchers then conducted detailed lab tests of the strength, hardness and acoustic properties.

Comparing the professional and scientific ratings, the researchers found that there was no relation between the gradings of the instrument-makers and the properties that would give the wood a good sound. Even testing the wood’s acoustics by knocking is a poor guide when the wood is still in the form of a plank.
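
The comparison is essentially a rank-correlation test. Here is a minimal Python sketch of how such a test works, with invented numbers standing in for the real data (the study’s null result corresponds to a coefficient near zero):

    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(42)

    # Hypothetical stand-ins: each of 84 boards gets a luthier's acoustic
    # grade (1 = poor to 5 = excellent) and a lab-measured acoustic figure
    # of merit (arbitrary units). Real values would come from the study.
    luthier_grades = rng.integers(1, 6, size=84)
    lab_acoustics = rng.normal(12.0, 1.0, size=84)

    rho, p = spearmanr(luthier_grades, lab_acoustics)
    print(f"Spearman rank correlation: {rho:.2f} (p = {p:.2f})")
    # Independent random data gives rho near 0 -- no relation between
    # the grading and the measured acoustic quality.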

The assessments, they concluded, were being made primarily on visual characteristics such as colour and grain. That’s not as superficial as it might seem; some important properties, such as density, do match with things that can be seen by eye. “Visual qualities can tell us a lot about the performance of a piece of wood”, says Buksnowitz.

He stresses that the inability of violin makers to identify the best wood shouldn’t be seen as a sign of incompetence. “I admire their handiwork and have an honest respect for their skills”, he says. “It is still the talent of the violin maker that creates a master’s violin.”

Indeed, it is a testament to these skills that a luthier can make a first-class instrument from less than perfect wood. They can shape and pare it to meet the customer’s needs, fitting the intrinsic properties of the wood to the taste of the musician. “There are instrument-makers who would say they can build a good instrument from any piece of wood”, Buksnowitz says. “The experienced maker can allow for imperfections in the material and compensate for them”, Wegst agrees.

But Buksnowitz points out that the most highly skilled makers, such as Amati and Stradivari, are not limited by their technique, and so their only hope of making even better instruments is to find better wood.

At the other end of the scale, when violins are mass-produced and little skill enters the process at all, then again the wood could be the determining factor in how good the instrument sounds.

Instrument-makers themselves recognize that there is no general consensus on what is meant by ‘quality’. They agree that they need a more objective way of assessing this, the researchers say. “We want to cooperate with craftsmen to identify the driving factors behind this vague term”, says Buksnowitz.

Wegst agrees that this would be valuable. “As in wine-making, a more systematic approach could make instrument-making more predictable”, she says.

Thursday, March 29, 2007

Prospect - a response

David Whitehouse, once a science reporter for the BBC, has responded to my denunciation of ‘climate sceptics’ in Prospect. Here are his comments – I don’t find them very compelling, but you can make up your own mind:

“Philip Ball veers into inconsistent personal opinion in the global warming debate. He says the latest IPCC report comes as close to blaming humans for global warming as scientists are likely to. True, its summary replaced “likely to be caused by humans” with “very likely”, but that is hardly a great stride towards certainty, especially when deeper in the report it says that it is only “likely” that current global temperatures are the highest they’ve been in the past 1,300 years.
As for “sceptics” saying false and silly things, Ball should look to the alarmist reports about global warming so common in the media. These “climate extremists” are obviously saying false, silly things, as even scientists who adhere to the consensus have begun to notice. And it’s data, not economics, that will be the future battleground. The current period of warming began in 1975, yet the very data the IPCC uses shows that since 2002 there has been no upward trend. If this trend does not re-establish itself with force, and soon, we will shortly be able to judge who has been silliest.”

The first point kind of defeats itself: by implying that the IPCC’s move towards a stronger statement is rather modest, Whitehouse illustrates my point, which is that the IPCC is (rightly) inherently conservative (see my last entry below) and so this is about as committed a position as we could expect to get. If they had jumped ahead of the science and claimed 100% certainty, you can guess who’d be the first to criticize them for it.

Then Whitehouse points out that climate extremists say silly and false things too. Indeed they do. The Royal Society, which Whitehouse has falsely accused of trying to suppress research that casts doubt on anthropogenic climate change, has spent a lot of time and energy criticizing groups who do that, such as Greenpeace. I condemn climate alarmism too. Yes, the Independent has been guilty of that – and is balanced out by the scepticism of the right-wing press, such as the Daily Telegraph. But Whitehouse’s point seems to be essentially that the sceptics’ false and silly statements are justified by those of their opponents. I suspect that philosophers have a name for this piece of sophistry. Personally, I would rather that everyone tried harder not to say false and silly things.

I don’t know whether Whitehouse’s next comment, about the ‘current warming’ beginning in 1975, is false and/or silly, or just misinformed. But if it’s the latter, that would be surprising for a science journalist. There was a warming trend throughout the 20th century, which was interrupted between 1940 and 1970. It has been well established that this interruption is reproduced in climate models that take account of the changes in atmospheric aerosol levels (caused by human activities): aerosols, which have a cooling influence, temporarily masked the warming. So the warming due to CO2 was continuous for at least a century, but was modified for part of that time by aerosols. The trend since 1975 was thus not the start of anything new. This is not obscure knowledge, and one can only wonder why sceptics continue to suppress it.

As for the comment that the warming has levelled off since 2002: well, the sceptics make a huge deal of how variable the climate system is when they want to imply that the current warming may be just a natural fluctuation, but clearly they like to cherry-pick their variations. They argue that the variability is too great to see a trend reliably over many decades, but now here’s Whitehouse arguing for a ‘trend’ over a few years. Just look at the graphs and tell me whether the period from 2002 to 2006 can possibly be attributed to variability or to a change in trend. Can you judge? As any climatologist will tell you, it is utterly meaningless to judge such things on the basis of a few years. Equally, we can’t attach too much significance, in terms of assessing trends, to the fact that the last Northern Hemisphere winter was the warmest since records began. (Did Whitehouse forget to mention that?) But that fact hardly suggests that we’re starting to see the end of global warming.
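
The statistical point is easy to demonstrate with made-up numbers. The Python sketch below generates a synthetic temperature record with a perfectly steady warming trend plus realistic year-to-year noise, then fits trends to five-year windows: the short-window ‘trends’ scatter wildly, and some even come out negative, although the underlying warming never pauses.

    import numpy as np

    rng = np.random.default_rng(7)

    # Synthetic record: a constant 0.02 degC/yr warming plus year-to-year
    # noise of roughly realistic size (0.1 degC standard deviation).
    years = np.arange(1975, 2007)
    temps = 0.02 * (years - years[0]) + rng.normal(0.0, 0.1, len(years))

    def trend(x, y):
        return np.polyfit(x, y, 1)[0]  # least-squares slope, degC/yr

    print("full-record trend: %+.3f degC/yr" % trend(years, temps))
    for start in range(0, len(years) - 4, 5):
        w = slice(start, start + 5)
        print("%d-%d: %+.3f degC/yr"
              % (years[w][0], years[w][-1], trend(years[w], temps[w])))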

“Who has been silliest” – OK, this is a rhetorical flourish, but writers should pick their rhetoric carefully. If the current consensus on a warming trend generated by human activity proves to be wrong, or counteracted by some unforeseen negative feedback, that will not make the scientists silly. It will mean simply that they formed the best judgement based on the data available. Yes, there are other possible explanations, but at this point none of them looks anywhere near as compelling, or even likely.

My real point is that it would be refreshing if, just once, a climate sceptic came up with an argument that gave me pause and forced me to go and look at the literature and see if it was right. But their arguments are always so easily refuted with information that I can take straight off the very narrow shelves of my knowledge about climate change. That’s the tiresome thing. I suppose this may sound immodest, but truly my intention is just the opposite: if I, as a jobbing science writer, can so readily see why these arguments are wrong or why they omit crucial factors – or at the very least, why the climate community would reject them – then why do these sceptics, all of them smart people, not see this too? I am trying hard to resist the suspicion of intellectual dishonesty; but how much resistance am I expected to sustain?

When it’s right to be reticent

[This is the pre-edited version of my latest article for muse@nature.com]

The caution of climate scientists is commendable even if caution is out of fashion.

Jim Hansen is no stranger to controversy. Ever since the 1980s he has been much more outspoken about the existence and perils of human-induced climate change than the majority of his scientific colleagues. A climate modeller at NASA’s Goddard Institute for Space Studies in New York, Hansen has flawless credentials to speak about climate change – and his readiness to do so has led to accusations of political interference and censorship (see here).

But his views haven’t only ruffled political feathers – they have dismayed other scientists too, who are uncomfortable with what they see as Hansen’s impatience with science’s inherent caution.

So in some ways, Hansen’s latest foray will surprise no one. In a preprint submitted for publication, he claims that “scientific reticence” is seriously underselling the potential danger that climate change poses – specifically, that it “is inhibiting communication of a threat of potentially large sea level rise.” Because disintegration of polar ice sheets is poorly understood, it is very difficult for scientists to make a reliable estimate of the likely future changes in sea level. As a result, Hansen charges, they have put figures on those aspects of sea-level rise they can estimate with some confidence, but have refrained from doing so for this key ingredient of the problem, giving the impression that the probable changes will be much smaller than those Hansen considers likely.

The responsibility for pronouncing on such issues falls primarily on the Intergovernmental Panel on Climate Change (IPCC), which Hansen regards as conservative. This, he admits, contributes to IPCC’s authority and is “probably a necessary characteristic, given that the IPCC document is produced as a consensus among most nations in the world and represents the views of thousands of scientists.” The most recent IPCC report has been characterised as the most strongly worded yet, but its conclusions apparently still required much negotiation and compromise.

And yet Hansen believes that “Given the reticence that IPCC necessarily exhibits, there need to be supplementary mechanisms” for communicating the latest scientific knowledge to the public and policy makers. He calls for a panel of leading scientists to “hear evidence and issue a prompt plain-written report” on the dangers – which clearly he envisages as a much more forceful statement about impending climate catastrophe and the need for immediate action to “get on a fundamentally different energy and greenhouse gas emissions path”.

This is a strange proposal, however. Basically, Hansen is calling on the scientific community to collect their scientific thoughts and then to speak out unscientifically – which is to say, without the caveats and caution that are the stock-in-trade of good science. Yet Hansen points out that in fact scientists do this all the time – when they are talking among themselves. He recalls how, challenged by a lawyer acting on behalf of US automobile manufacturers to name a single glaciologist who agreed with his view that ice-sheet break-up would cause sea-level rise of more than a metre by 2100, he could not do so. Even though he had heard plenty of such scientists express deep concerns to this effect in private exchanges, none had said anything definitive in public.

Why wouldn’t they do that, if it’s really what they thought? Hansen posits what he calls a “John Mercer effect”. In 1978 Mercer, a glaciologist at Ohio State University, suggested [1] that anthropogenic global warming could cause the West Antarctic ice sheet to disintegrate and sea level to surge by over 5 m within 50 years. Mercer’s paper was disputed by other scientists, who were generally portrayed as the sober and authoritative counterbalance to Mercer’s “alarmism”.

“It seemed to me”, says Hansen, “that the scientists preaching caution and downplaying the dangers of climate change fared better in receipt of research funding.” This reticence, he suggests, is encouraged and rewarded both professionally and financially.

Hansen says he experienced this himself in the early days of climate-change research. He was one of the first to point out, in a paper coauthored in 1981, that rising levels of atmospheric carbon dioxide could be linked to a warming trend throughout the twentieth century [2]. At that time the trend itself wasn’t so clear – the globe was only just emerging from a three-decade cooling spell, now known to be caused by atmospheric aerosol particles that temporarily outweighed the greenhouse-gas contributions.

But by 1989 Hansen was prepared to state with confidence that we could already see the effects of human-induced greenhouse warming in action. His colleagues felt this was jumping the gun – that it was still too early to rule out natural climate variability.

This history is instructive in the face of common claims from ‘climate sceptics’ that climate scientists play up the threat of global warming in order to secure funding. Anyone who witnessed (as I did) the slow and meticulous process that brought climate scientists from this position in the late 1980s to what is effectively a consensus today that human-induced climate change is almost certainly now evident will recognise the nonsense of the sceptics’ claim. The dogged reluctance to commit to that view in the late 1980s [3] looks rather remarkable now; but it was correct, and the community can regard its restraint with pride.

Yet it also means that Hansen was in a sense right back then. Such retrospective vindication, however, is not in itself justification. He could just as easily have been wrong. His views may have been based on sound intuition, but the science wasn’t yet there to support them.

All the same, Hansen is right to say that “scientific reticence” poses problems. He points out that, because the climate system is nonlinear (and in particular, because there are positive feedbacks to ice-sheet melting), excessive caution could end up sounding the alarm too late. Possibly it already has.
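
The shape of that worry can be put into a toy model. What follows is emphatically a caricature of my own devising, not an ice-sheet model: a melt fraction m is driven by a forcing f that keeps growing until we ‘act’, plus a self-reinforcing term that switches on above a threshold. Every number in it is arbitrary.

```python
def melt_trajectory(act_at, steps=300):
    """Toy positive-feedback system (illustrative only, not climate science).

    Forcing f grows until we 'act' at time act_at; melt m relaxes towards a
    forcing-set level, but above a threshold a positive feedback takes over.
    """
    m, f = 0.0, 0.0
    for t in range(steps):
        if t < act_at:
            f += 0.002                                   # forcing keeps rising
        feedback = 0.08 * m if m > 0.3 else -0.05 * m    # runaway past m = 0.3
        m = min(m + 0.1 * f + feedback, 1.0)             # cap at total loss
    return m

for act_at in (50, 100, 150):
    print(f"act at t={act_at}: final melt fraction {melt_trajectory(act_at):.2f}")
```

Acting at t=50 leaves the system settled at a modest level; waiting until t=100 lets the feedback carry it all the way to total loss, even though the forcing has long stopped rising. In a nonlinear system the penalty for lateness is not proportionate.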

The question is what to do about that. But the real issue here is not that scientists are “reticent” – it is that the public, politicians and leaders are not accustomed to reasoning and debating as scientists do. It is within the very grain of science – Popper’s legacy, of course – that it advances by self-doubt. The contemporary culture, on the other hand (and probably it has never been very different), favours dogmatic, absolute statements, unencumbered with caveats. If they prove to be wrong, no matter – another equally definitive statement will blot out memory of the last one. Thus you can say something such as HIV does not cause AIDS, or there is no such thing as society, and still be taken seriously years later as a commentator on current affairs.

The moment it abandons its caution and claims false certainty, science loses its credibility; indeed, it ceases to be true science. This is not to say that scientists should commit to nothing for fear of being proved wrong. Nor is it by any means a call for scientists to step back from making pronouncements that guide public policy – if anything, they should do more of that. But when they are talking about scientific issues, scientists cannot afford to abandon their (public) reticence. It is as individuals, not as community spokespeople, that they should feel free, as Hansen rightly does, to voice views, intuitions and beliefs that reach beyond the strict confines that science permits.

References
1. Mercer, J. Nature 271, 321-325 (1978).
2. Hansen, J. et al. Science 213, 957-966 (1981).
3. Kerr, R. Science 244, 1041-1043 (1989).

Friday, March 16, 2007

More noise from the markets

Those wacky economic analysts are at it again. Since I enjoy Paul Mason’s cheeky-chappie appearances as the business correspondent on BBC2’s Newsnight, and since I am told he is indeed a nice chap, I don’t wish to cast aspersions. But his article on the world economy in New Statesman last week (12 March, p.16) showed the kind of thing that passes for routine analysis in the everyday world of economics. “When the world’s most powerful people gathered amid the snows of Davos in late January, there was a tangible warm glow being given off by the economic cycle… Six weeks later, the financial markets are in turmoil and what was first shrugged off as a ‘correction’ is being seriously monitored as a potential crash.”

OK, so the forecasts were wrong again. Big news. And so the ‘cycle’ somehow stopped ‘cycling’ (or, as economists would say, the cycle changed earlier than expected, which their ‘cycles’, uniquely in science, are permitted to do). Big news again. But get this: the ‘explanation’ offered by the head of strategy at the consulting firm Accenture was that “People had undervalued risk, assuming that because the economy is benign there’s not going to be volatility.” I love it. These impressive words – “undervaluing risk”, overlooking “volatility” – translate to something simple: “people forgot that the economy fluctuates”. People thought that because things were good, they were going to stay good.

Now, the idea that market traders were unrealistically optimistic is not especially shaming for them. This is just Keynes’ old “animal spirits” at work, as ever they are. But what a weird situation it causes when analysts are called upon to explain the consequences. These savants, whose salaries would make your eyes water, sagely pronounce, “ah yes, well the market did something unexpected because traders guessed wrong. They imagined that the market was not going to fluctuate, though it always does.” Ah, thanks for clearing that one up.
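
In case the point about volatility seems abstract, here is a toy simulation of my own (nothing from Mason’s article or Accenture’s analysis): a market whose volatility merely flips between a calm and a turbulent regime already produces far more extreme moves than a trader assuming a single, stable level of risk would expect.

```python
import random
from statistics import stdev

random.seed(42)

# Toy market: daily volatility flips between a calm and a turbulent regime.
CALM, TURBULENT = 0.005, 0.03     # daily return standard deviations
SWITCH_P = 0.02                   # chance of a regime flip on any given day

sigma, returns = CALM, []
for _ in range(10000):
    if random.random() < SWITCH_P:
        sigma = TURBULENT if sigma == CALM else CALM
    returns.append(random.gauss(0.0, sigma))

# Count moves beyond three standard deviations of the whole sample;
# a single fixed-volatility Gaussian would yield only ~0.27% (about 27 here).
s = stdev(returns)
big = sum(1 for r in returns if abs(r) > 3 * s)
print(f"moves beyond 3 sigma: {big} (a fixed-volatility model expects ~27)")
```

Nothing exotic is needed: the mere fact that calm spells end is enough to make ‘undervaluing risk’ a predictable blunder.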

At root, this transmutation of the bleeding obvious into lucrative analysis stems yet again from the fact that market agents behave in a way that we all recognize as thoroughly human and natural, but which is not permitted in traditional economics. So to those who monitor and interpret the economy, it looks like wisdom of the highest order.

Tuesday, March 13, 2007


Can you tell true art from fake?

Well, find out. Mikhail Simkin at UCLA (whose work on 'false citations' in the scientific literature is highly revealing about the laxity that exists in checking sources) has put a test online in which you are invited to distinguish between some paintings by Modernist 'greats' such as Klee, Mondrian and Malevich, and "ridiculous fakes" that Simkin has mocked up. So far, over fifty thousand people have taken the test, and Simkin has now revealed the results. Surprise: on average, people identify about 8 out of 12 pictures correctly. In other words, they do better than random guessing, but not by much.
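
How much better than guessing is 8 out of 12? A quick back-of-the-envelope check, assuming each image is an independent real-or-fake call with even odds:

```python
from math import comb, sqrt

N, P = 12, 0.5                              # 12 images, coin-flip guessing
mean = N * P                                # expected score by luck: 6
sd = sqrt(N * P * (1 - P))                  # about 1.73

# Probability of getting 8 or more right by pure guessing
p_at_least_8 = sum(comb(N, k) for k in range(8, N + 1)) / 2**N
print(f"chance score: {mean:.0f} +/- {sd:.2f}")
print(f"P(>=8 correct by luck) = {p_at_least_8:.3f}")   # about 0.19
```

So a score of 8 sits barely more than one standard deviation above luck; roughly one in five pure guessers would do as well.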

What does that mean? The cynic would say that it shows that 'modern' art is mostly a matter of the Emperor's new clothes: detach the great names and we often can't tell if we're looking at genius or doodling. That, of course, is a very old story.

But it would also be a simplistic one. Actually, I was surprised by the choices Simkin made for the test. Several of the images are obviously computer-generated. And most if not all of the true 'great works' would be recognized by anyone with a reasonable knowledge of 20th century art. I got one wrong, suspecting a 'fake' to be 'real'. But this didn't mean I was particularly impressed by the fake. Nor am I all that impressed by some of the 'reals'.

And it seems Simkin has a curiously old-fashioned notion of 'modern art', appearing to equate it with Modernist painting that is mostly almost a century old. Why not try the same thing with, I don't know, Hirst or Ofili or Gary Hume (if you insist on making art = painting in the first place)? You might find the same results, but at least they'd feel a bit more relevant.

Besides, are you really judging a Malevich by looking at a small and rather low-quality image on a computer screen?

The key point, though, is that underlying Simkin's test seems to be the notion that 'real art' would be instantly identifiable because it would show great skill, which would somehow render it timeless and universal. I'm not going to rehearse the case against that reactionary position, except to say that the galleries are full of paintings from previous ages rendered with consummate skill that seem to us now to be dull, irrelevant, pointless and conservative (which isn't to say that they are – although they might be – but only that times have moved on). Besides, the quality of art isn't something that is decided by democratic vote. Sorry about that seemingly elitist notion, but it has to be true. If it wasn't, artists might as well give up and abandon the stage to people who paint pretty watercolours.

It is true that the pomposity of the art world needs pricking, and often. Contemporary art often now seems to be awarded greatness by media cravenness, self-promotion, and the vagaries of the Matthew principle (the rich get richer). There's a great deal of silliness about, mostly thanks to the sad infatuation with celebrity that Western culture is passing through (well, I'm an optimist). But replacing critical judgement with vox pop ballots seems likely to merely pander to that, not to challenge it.

All the same, Simkin's paper is great fun to read. I only hope it triggers discussion rather than sneering.

Friday, March 09, 2007

If addiction's the problem, prohibition's not the answer

[This is the pre-edited version of my latest muse article for Nature's online news.]

China's ban on new internet cafés raises questions about its online culture

The decision by China to freeze the opening of any new Internet cafés for a year from this July has inevitably been interpreted as a further attempt by the Chinese authorities to control and censor access to politically sensitive information.

China defends the ban on the grounds of protecting susceptible teenagers from becoming addicted to games, chatrooms and online porn. Yu Wen, a deputy to the National People's Congress, has been quoted as saying "It is common to see students from primary and middle schools lingering in internet bars overnight, puffing on cigarettes and engrossed in online games."

The restriction on internet cafés will certainly assist the Chinese government's programme of web censorship (although there are already more than 110,000 of these places in China). But to suggest that the move is merely a cynical attempt to dress up state interference as welfare would be to overlook another reason why it should be challenged.

It’s quite possible that the government is genuinely alarmed at the fact that, according to a recent report by the Chinese Academy of Sciences, teenagers in China are becoming addicted to the internet younger and in greater numbers than in other countries. The report claimed that 13 percent of users played or chatted online for more than 38 hours a week – longer than the typical working week of European adults.

Sure, you can try to address this situation (which is disturbing if the figures are right) by limiting users' access to their drug. But anyone involved in treating addictive behaviour knows that you'll solve little unless you get to the cause.

Why is the cyberworld so attractive to Chinese teenagers? It doesn't take much insight to see a link between repression in daily life and the liberation (partly but not entirely illusory) offered online.

Yet it would be simplistic to ascribe the desire to escape online to the political oppression that certainly exists in Chinese society. After all, there are more oppressive places in the world. Indeed, it is arguably the liberalization of Chinese society that adds to the factors contributing to its internet habit.

There is in fact a nexus of such factors that might be expected to prime young people in China for addiction to the net: among them, the increase in wealth and leisure and the emergence of a middle class, the replacement of a demonized West with a glamorized one (both are dangerous), the conservatism and expectations of a strongly filial tradition, the loneliness of a generation lacking siblings because of China's one-child policy, and the allure and status of new technology in a rapidly modernizing society.

Stephanie Wang, a specialist on Chinese internet regulation at the Berkman Center for Internet and Society at Harvard Law School, suggests that the problems of internet use by young people may also simply be more visible in China than in the West, where it tends to happen behind the closed doors of teenagers’ bedrooms rather than in public cybercafés. Wang adds that the online demographic in Asia is more biased towards young people, and probably more male-dominated.

The Chinese government hardly helps its cause by justifying internet control with puritanical rhetoric: talk of "information purifiers", "online poison" and the need for a "healthy online culture" all too readily suggests the prurient mixture of horror and fascination that characterizes the attitude of many repressive regimes to more liberal cultures. But let's not forget that much the same was once said in the West about the corrupting influence of rock'n'roll.

And anyway, surely youth has always needed an addiction. In a culture where alcohol abuse is rare, drug use carries terrifyingly draconian penalties, sexuality is repressed and pop culture is sanitized, getting your kicks online might seem your only option. As teenage vices go, it is pretty mild.

As with all new technologies, from television to cell phones, the antisocial behaviour they can elicit is all too easily blamed on the technology itself. That's far safer than examining the latent social traits that the technology has made apparent. In this regard, China is perhaps only reacting as other cultures have done previously.

So rather than adding more bricks to its Great Firewall, or fretting about youngsters chain-smoking their way through the mean streets of Grand Theft Auto, China might benefit from thinking about why it has the addiction-prone youth cyberculture that it claims to have.

Wednesday, February 28, 2007



Roll on the robots


[This is the pre-edited version of my Materials Witness column for the April issue of Nature Materials.]

Spirit, the redoubtable Martian rover, has spent the past year driving on just five of its six wheels. In February the rover’s handling team said it had perfected the art of manoeuvring with one wheel missing, but the malfunction raises the question of whether there are better ways for robots to get around. Walking robots are becoming more efficient thanks to a better understanding of the ‘passive’ mechanism of human locomotion; but a single tumble might put such a robot out of action permanently in remote or extraterrestrial environments.

So a recent survey of rolling robots provided by Rhodri Armour and Julian Vincent of the University of Bath (J. Bionic Eng. 3, 195-208; 2006) is timely. They point out that spherical robots have several advantages: for example, they’ll never ‘fall over’, the mechanics can all be enclosed in a protective hard shell, and the robot can move in any direction and cope with collisions and with uneven or soft surfaces.

But how do you make a sphere roll from the inside? Several answers have been explored in designs for spherical robots. One developed at the Politecnico di Bari in Italy aims to use an ingenious internal driver, basically a sprung rod with wheels at each end. It’s a tricky design to master, and so far only a cylindrical prototype exists. Other designs include spheres with ‘cars’ inside (the treadwheel principle), pairs of hemispherical wheels, moving internal ballast masses – the Roball made at the Université de Sherbrooke in Québec, and the Rotundus of Uppsala University in Sweden – and gyroscopic rollers like Carnegie Mellon’s Gyrover.

But Armour and Vincent suggest that one of the best designs is that in which masses inside the sphere can be moved independently along radial arms to shift the centre of gravity in any direction. The Spherobot under development at Michigan State University and the August robot designed in Iran use this method, as does the wheel-shaped robot made at Ritsumeikan University in Kyoto, which is a deformable rubber hoop with ‘smart’ spokes that can crawl up a shallow incline and even jump into the air.
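
The principle is simple enough to put into numbers. Here is a minimal sketch with made-up parameters (not those of the Spherobot or any of the robots mentioned): slide an internal mass out along an arm, the centre of mass shifts off the vertical through the contact point, and gravity supplies a rolling torque.

```python
from math import sin, radians

g = 9.81             # m/s^2
M_SHELL = 2.0        # kg, spherical shell (centre of mass at the centre)
m = 0.5              # kg, movable internal mass
r = 0.10             # m, distance of the mass along its radial arm
theta = radians(30)  # arm angle from the vertical, leaning towards travel

# Horizontal offset of the combined centre of mass from the sphere's centre
x_com = m * r * sin(theta) / (M_SHELL + m)

# Gravity acting at the offset centre of mass produces a torque about the
# ground contact point, which is what rolls the sphere forward
torque = (M_SHELL + m) * g * x_com      # equals m * g * r * sin(theta)
print(f"COM offset: {x_com * 1000:.0f} mm -> rolling torque: {torque:.2f} N m")
```

With these numbers a half-kilogram mass shifted 10 cm along a 30-degree arm buys about a quarter of a newton-metre of torque, which is the whole trick: no external moving parts, just internal bookkeeping of where the weight sits.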

Although rolling robots clearly have a lot going for them, it might give us pause for thought that nature seems very rarely to employ rolling. There are a few organisms that make ‘intentional’ use of passive rolling, being able to adopt spherical shapes that are blown by the wind or carried along by gravity: tumbleweed is perhaps the most familiar example, but the Namib golden wheel spider cartwheels down sand dunes to escape wasps, and woodlice, when attacked, curl into balls and roll away. Active rollers are rarer still: Armour and Vincent can identify only the caterpillar of the Mother-of-Pearl moth and a species of shrimp, both of which perform somersaults.

Is this nature’s way of telling us that rolling has limited value for motion? That might be jumping to conclusions; after all, wheels are equally scarce in nature, but they serve engineering splendidly.

Tuesday, February 27, 2007

Science on Stage: two views

Carl Djerassi has struck back at the rather stinging critique of his plays in Kirsten Shepherd-Barr’s book Science on Stage, in a review he has written for Physics Today. I think his comments are a little unfair; Carl has his own agenda of using theatre to smuggle some science into culture, which is a defensible aim but doesn’t acknowledge that the first question must be: is this good theatre? Or as Kirsten asks, does it have ‘theatricality’? Here is my own take on her book, published in the July issue of Nature Physics last year.

Science on Stage: From Doctor Faustus to Copenhagen
Kirsten Shepherd-Barr
Princeton University Press, 2006
Cloth $29.95
ISBN 0-691-12150-8
264 pages

Over the past decade or so, science has been on stage as never before. Michael Frayn’s Copenhagen (1998), which dramatized the wartime meeting between Werner Heisenberg and Niels Bohr, is perhaps the most celebrated example; but Tom Stoppard had been exploring scientific themes for some time in Hapgood (1988) and Arcadia (1993), while Margaret Edson’s Wit (1998) and David Auburn’s Proof (2001) were both Pulitzer prize-winning Broadway hits, the latter now also a Hollywood movie. There are plenty of other examples.

While this ‘culturization’ of science has largely been welcomed by scientists – it certainly suggests that theatre has a more sophisticated relationship with science than that typified by the ‘mad scientist’ of cinematic tradition – there has been a curious lack of insightful discussion of the trend. Faced with ‘difficult’ scientific concepts, theatre critics tend to seek recourse in bland clichés about ‘mind-boggling ideas’. Scientists, meanwhile, all too often betray an artistic conservatism by revealing that their idea of theatre is an entertaining night out watching a bunch of actors behind a proscenium arch.

Thank goodness, then, for Kirsten Shepherd-Barr’s book. It represents the first sustained, serious attempt that I have seen to engage with the questions posed by science in theatre. In particular, while there has been plenty of vague talk about pedagogical opportunities, about Snow’s two cultures and about whether the ‘facts are right’, Shepherd-Barr explores what matters most about ‘science plays’: how they work (or not) as theatre.

Despite the book’s subtitle, it does not really try to offer a comprehensive historical account of science in theatre. All the same, one can hardly approach the topic without acknowledging several landmark plays of the past that have had a strong scientific content. It is arguably stretching the point to include Marlowe’s Dr Faustus (c.1594), despite its alchemical content, since this retelling of a popular folk legend is largely a morality tale which can be understood fully only in the context of its times. But while that is equally true of Ben Jonson’s The Alchemist (c.1610), both plays are important in terms of the archetypes they helped establish for the dramatic scientist: as arrogant Promethean man and as wily charlatan. There are echoes of both in the doctors of Ibsen’s plays, for example.

More significant for the modern trend is Bertolt Brecht’s Life of Galileo (1938/45), a far more nuanced look at the moral dilemmas that scientists face. Like Copenhagen, Galileo has drawn criticism from some scientists and science historians over the issue of historical accuracy. Some of these criticisms simply betray an infantile need to sustain Galileo as the heroic champion of rationalism in the face of church dogma. That is bad history too, but then, scientists are notorious (or should be) for their lack of real interest in history, as opposed to anecdote. Here Shepherd-Barr is admirably clear and patient, explaining that Copenhagen “takes history simply as material for creating theatre that does what art in general does: poses questions.”

Yet this is something scientists and historians seem to feel uncomfortable about. Writing about Copenhagen, historian Robert Marc Friedman has said “regardless of the playwright's intentions and even extreme care in creating his characters, audiences may leave the theatre with a wide range of impressions. In the case of the London production of Copenhagen on the evening that I attended, members of the audience with whom I spoke came away believing Bohr to be no better morally than Heisenberg; perhaps even less sympathetic. I am not sure, however, that this was the playwright's intention… I felt uncomfortable.” There is something chillingly Stalinist about this view of theatre and art. Should we also worry whether we have correctly divined the playwright’s “intentions” in Hamlet or King Lear?

Shepherd-Barr negotiates admirably around these lacunae between the worlds of science and art. Perhaps her key insight is that the most successful science plays are those that don’t just talk about their themes but embody them, as when the action of Arcadia reveals the thermodynamic unidirectionality of time. But most importantly, she reminds us that theatre is primarily not about words or ideas, but performance. That’s why theatre is so much stronger and more exciting a vehicle for dealing with scientific themes than film (which almost always does it miserably) or even literature. Good theatre, whatever its topic, doesn’t just engage but involves its audience: it is an experiment in which the presence of the observer is critical. Brecht pointed that out; but it is perhaps in theatre’s experimental forms, such as those pioneered by Jacques Lecoq and Peter Brook (who staged Oliver Sacks’s The Man Who Mistook His Wife for a Hat in 1991) and exemplified in John Barrow and Luca Ronconi’s Infinities and Theatre de Complicite’s Mnemonic, that we see how much richer it can be than the remote, ponderous literalness of film. What could be more scientific-spirited than this experimental approach? When science has given us such extraordinary new perspectives on the world, surely theatre should be able to do more than simply show us people talking about it.

Don’t censor the state climatologists

Aware that I will no doubt be dismissed as the yes-man of the ‘climate-change consensus’ for my critique of climate sceptics in Prospect (see below), I want to say that I am dismayed at the news that two US state climatologists are being given some heat for disagreeing with the idea that global warming is predominantly anthropogenic. First, it seems that state climatologists have many concerns, of which global climate change is just one (and a relatively minor one at that). But more importantly, it is absurd to expect any scientist to determine their position by fiat so that it is aligned with state policy or any other political position. The matter is quite simple: if the feeling is that a scientist’s position on an issue undermines their credentials as a scientist, they should not be given this kind of status in the first place.

If it is true that, as Mike Hopkins says in his Nature story (and Mike gets things right), “Oregon governor Ted Kulongoski said that he wants to strip Oregon's climatologist George Taylor of his title for not agreeing that global warming is predominantly caused by humans”, then Kulongoski is wrong. The only reason Taylor ought to be stripped of his title is that he has been found to be a demonstrably bad climatologist. The same with Pat Michaels at Virginia.

As it happens, my impression of Michaels is that he is no longer able to be very objective on the issue of climate change – in other words, he doesn’t seem to be very trustworthy as a scientist on that score. But I’m prepared to believe that he says what he does in good faith, and of course should be allowed to argue his case. Trying to force these two guys to fall in line with the state position is simply going to fan the conspiracy theorists’ flames (I’m awaiting Benny Peiser’s inevitable take on this). But even if these paranoid sceptics did not exist, the demands would be wrong.

The more voices, the better the result in Wiki world

Here's the pre-edited version of my latest article for news@nature…

The secret to the quality of Wikipedia entries is lots of edits by lots of people

Why is Wikipedia so good? While the debate about just how good it is has been heated, the free online encyclopaedia offers a better standard of information than we might have any right to expect from a resource that absolutely anyone can write and edit.

Three groups of researchers now claim to have untangled the process by which many Wikipedia entries achieve impressive accuracy [1-3]. They say that the best articles are those that are highly edited by many different contributors.

Listening to lots of voices rather than a few doesn't always guarantee the success that Wikipedia enjoys – just think of all those rotten movies written by committee. Collaborative product design in commerce and industry also often generates indifferent results. So why does Wiki work where others have failed?

Wikipedia was created by Jimmy Wales in January 2001, since when it has grown exponentially, both in the number of users and in information content. In 2005, a study of its content by Nature [4] concluded that the entries were of a comparable standard to those generated by experts for the Encyclopaedia Britannica (a claim that the EB quickly challenged).

The idea behind Wikipedia is encapsulated in writer James Surowiecki's influential book The Wisdom of Crowds [5]: the aggregate knowledge of a wide enough group of people will always be superior to that of any single expert. In this sense, Wikipedia challenges the traditional notion that an elite of experts knows best. This democratic, open-access philosophy has been widely imitated, particularly in online resources.

At face value, it might seem obvious that the wider the community you consult, the better your information will be – that simply increases your chances of finding a real expert on Mozart or mud wrestling. But how do you know that the real experts will be motivated to contribute, and that their voices will not be drowned out or edited over by other less-informed ones?

The crucial question, say Dennis Wilkinson and Bernardo Huberman of Hewlett Packard's research laboratories in Palo Alto, California, is: how do the really good articles get to be that way? The idea behind Wikipedia is that entries are iterated to near-perfection by a succession of edits. But do edits by a (largely) unregulated crowd really make an entry better?

Right now there are around 6.4 million articles on Wikipedia, generated by over 250 million edits from 5.77 million contributors. Wilkinson and Huberman have studied the editing statistics, and say that they don't simply follow the statistical pattern expected from a random process in which each edit is made independently of the others [1].

Instead, there is an abnormally high number of very highly edited entries. The researchers say this is just what is expected if the number of new edits to an article is proportional to the number of previous edits. In other words, edits attract more edits. The disproportionately highly edited articles, the researchers say, are those that deal with very topical issues.
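
That ‘edits attract edits’ rule is easy to simulate. The sketch below (with arbitrary numbers, nothing calibrated to Wikipedia) compares edits assigned in proportion to an article’s existing edit count against edits assigned independently at random; only the first mechanism produces a fat tail of extravagantly edited articles of the kind Wilkinson and Huberman report.

```python
import random

random.seed(0)
N_ARTICLES, N_EDITS = 1000, 50000

# "Edits attract edits": each edit lands on an article with probability
# proportional to the number of edits it already has.
rich = [1] * N_ARTICLES
for _ in range(N_EDITS):
    i = random.choices(range(N_ARTICLES), weights=rich)[0]
    rich[i] += 1

# Control: each edit picks an article uniformly at random, independently.
flat = [1] * N_ARTICLES
for _ in range(N_EDITS):
    flat[random.randrange(N_ARTICLES)] += 1

print(f"most-edited article: {max(rich)} edits (proportional) "
      f"vs {max(flat)} (independent)")
```

Under independent editing the busiest article ends up only modestly above the average of about fifty edits; under proportional growth it races far ahead, which is the statistical signature the researchers say they see.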

And does this increased attention make them better? Yes, it does. Although the quality of an entry is not easy to assess automatically, Wilkinson and Huberman assume that those articles selected as the 'best' by the Wikipedia user community are indeed in some sense superior. These, they say, are more highly edited, and by a greater number of users, than less visible entries.

Who is making these edits, though? Some have claimed that Wikipedia articles don't truly draw on the collective wisdom of its users, but are put together mostly by a small, select elite, including the system's administrators. Wales himself has admitted that he spends "a lot of time listening to four or five hundred" top users.

Aniket Kittur of the University of California at Los Angeles and coworkers have set out to discover who really does the editing [2]. They have looked at 4.7 million pages from the English-language Wikipedia, subjected to a total of about 58 million revisions, to see who was making the changes, and how.

The results were striking. In effect, the Wiki community has mutated since 2001 from an oligarchy to a democracy. The percentage of edits made by the Wikipedia 'elite' of administrators increased steadily up to 2004, when it reached around 50 per cent. But since then it has steadily declined, and is now just 10 per cent (and falling).

Even though the edits made by this elite are generally more substantial than those made by the 'masses', their overall influence has clearly waned. Wikipedia is now dominated by users who are much more numerous than the elite but individually less active. Kittur and colleagues compare this to the rise of a powerful bourgeoisie within an oligarchic society.

This diversification of contributors is beneficial, Ofer Arazy and coworkers at the University of Alberta in Canada have found [3]. They say that, of the 42 Wikipedia entries assessed in the 2005 Nature study, the number of errors decreased as the number of different editors increased.

The main lesson for tapping effectively into the 'wisdom of the crowd', then, is that the crowd should be diverse: represented by many different views and interests. In fact, in 2004 Lu Hong and Scott Page of the University of Michigan showed that a problem-solving team selected at random from a diverse collection of individuals will usually perform better than a team made up of those who individually perform best – because the latter tend to be too similar, and so draw on too narrow a range of options [6]. For crowds, wisdom depends on variety.
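
For the curious, here is a compact reconstruction of the flavour of Hong and Page's argument; the parameters are my own and the outcome wobbles with the random seed, but on most runs the randomly drawn team matches or beats the team of individual stars. Agents hill-climb on a random circular landscape using personal repertoires of step sizes, and a team searches as a relay.

```python
import random
from itertools import permutations

random.seed(3)
N = 200                                          # points on a circular landscape
landscape = [random.random() for _ in range(N)]

# An agent is an ordered triple of step sizes it tries when hill-climbing
agents = list(permutations(range(1, 13), 3))     # 1320 distinct heuristics

def climb(start, team):
    """Relay search: agents keep taking turns until nobody can improve."""
    pos, improved = start, True
    while improved:
        improved = False
        for steps in team:
            for s in steps:
                nxt = (pos + s) % N
                if landscape[nxt] > landscape[pos]:
                    pos, improved = nxt, True
    return landscape[pos]

def score(team):
    """Average value reached over every possible starting point."""
    return sum(climb(p, team) for p in range(N)) / N

ranked = sorted(agents, key=lambda a: score([a]), reverse=True)
best_team = ranked[:10]                          # the ten best individuals
random_team = random.sample(agents, 10)          # ten agents drawn at random

print(f"team of best individuals: {score(best_team):.4f}")
print(f"randomly drawn team:      {score(random_team):.4f}")
```

The mechanism is visible in the code: the top individual climbers tend to carry similar step sets, so as a team they get stuck at the same local peaks, while a random team's mismatched heuristics let one member rescue another.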

References
1. Wilkinson, D. M. & Huberman, B. A. preprint http://xxx.arxiv.org/abs/cs.DL/0702140 (2007).
2. Kittur, A. et al. preprint (2007).
3. Arazy, O. et al. Paper presented at 16th Workshop on Information Technologies and Systems, Milwaukee, 9-10 December 2006.
4. Giles, J. Nature 438, 900-901 (2005).
5. Surowiecki, J. The Wisdom of Crowds (Random House, 2004).
6. Hong, L. & Page, S. E. Proc. Natl Acad. Sci. USA 101, 16385-16389 (2004).