Friday, May 25, 2007

Does this mean war?
[This is my latest article for muse@nature.com]

Cyber-attacks in the Baltic raise difficult questions about the threat of state-sponsored information warfare.

Is Estonia at war? Even the country’s leaders don’t seem sure. Over the past several weeks the Baltic nation has suffered serious attacks, but no one has been killed and it isn’t even clear who the enemy is.

That’s because the attacks have taken place in cyberspace. The websites of the Estonian government and political parties, as well as those of its media and banks, have been paralysed by floods of bogus traffic. Access to the sites has now been blocked to users outside the country.

This is all part of a bigger picture in which Estonia and its neighbour Russia are locked in a bitter dispute sparked by the Soviet legacy. But the situation could provoke a reappraisal of what cyber-warfare might mean for international relations.

In particular, could it ever constitute a genuine act of war? “Not a single NATO defence minister would define a cyber-attack as a clear military action at present,” says the Estonian defence minister Jaak Aaviksoo — but he seems to doubt whether things should remain that way, adding that “this matter needs to be resolved in the near future.”

The changing face of war


When the North Atlantic Treaty was drafted in 1949, cementing the military alliance of NATO, it seemed clear enough what constituted an act of war, and how to respond. “An armed attack against one or more [member states] shall be considered an attack against them all,” the treaty declared. It was hard at that time to imagine any kind of effective attack that did not involve armed force. Occupation of sovereign territory was one thing (as the Suez crisis soon showed), but no one was going to mobilize troops in response to, say, economic sanctions or verbal abuse.

Now, of course, ‘war’ is itself a debased and murky term. Nation states seem ready to declare war on anything: drugs, poverty, disease, terrorism. Co-opting military jargon for quotidian activities is an ancient habit, but by doing so with such zeal, state leaders have blurred the distinctions.

Cyber-war is, however, something else again. Terrorists recognized the value of striking at infrastructure rather than people well before cyberspace became globally pervasive, as the IRA bombings of London’s financial district in the early 1990s made clear. But now that computer networks are such an integral part of most political and economic systems, the potential effects of ‘virtual attack’ are vastly greater.

And these would not necessarily be ‘victimless’ acts of aggression. Disabling health networks, communications or transport administration could easily have fatal consequences. It is not scaremongering to say that cyberwar could kill without a shot being fired. And the spirit, if not currently the letter, of the NATO treaty must surely compel it to protect against deaths caused by acts of aggression.

Access denied

The attacks on Estonian websites, triggered by the government’s decision to relocate a Soviet-era war memorial, consisted of massed, repeated requests for information that overwhelmed servers and caused sites to freeze — a technique known as distributed denial of service (DDoS). Estonian officials claimed that many of the requests came from computers in Russia, some of them in governmental institutions.
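
To see why sheer volume is enough to paralyse a site, consider the arithmetic involved. The sketch below is my own illustration, with invented numbers rather than data from the Estonian attacks: a server that can clear a fixed number of requests per second, facing an arrival rate well above that capacity. The backlog grows without limit, and with it the wait facing any legitimate request.

```python
# Toy model of a denial-of-service overload (illustrative numbers only).
# A server clears CAPACITY requests per second; a flood adds ATTACK_RATE
# on top of NORMAL_RATE, and the excess piles up in an ever-growing queue.

CAPACITY = 1000       # requests the server can handle per second (assumed)
NORMAL_RATE = 800     # legitimate traffic, requests per second (assumed)
ATTACK_RATE = 5000    # flood traffic, requests per second (assumed)

backlog = 0
for second in range(1, 11):
    arrivals = NORMAL_RATE + ATTACK_RATE
    backlog = max(0, backlog + arrivals - CAPACITY)
    # A new request must wait behind the whole backlog, so its expected
    # wait is backlog / CAPACITY seconds -- and it keeps on climbing.
    print(f"t={second:2d}s  backlog={backlog:6d}  wait={backlog / CAPACITY:5.1f}s")
```

Blocking traffic from outside the country, as Estonia did, is simply a blunt way of forcing the arrival rate back below capacity.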

Russia has denied any state involvement, and so far European Union and NATO officials, while denouncing the attacks as “unacceptable” and “very serious”, have not accused the Kremlin of orchestrating the campaign.

The attack is particularly serious for Estonia because of its intense reliance on computer networks for government and business. It boasts a ‘paperless government’ and even its elections are held electronically. Indeed, information technology is one of Estonia’s principal strengths – which is why it was able to batten down the hatches so quickly in response to the attack. In late 2006, Estonia even proposed to set up a cyber-defence centre for NATO.

There is nothing very new about cyber-warfare. In 2002 NATO recognized it as a potential threat, declaring an intention to “strengthen our capabilities to defend against cyber attacks”. In the United States, the CIA, the FBI, the Secret Service and the Air Force all have their own anti-cyber-terrorism squads.

But most of the considerable attention given to cyber-attack by military and defence experts has so far focused on the threat posed by individual aggressors, from bored teenage hackers to politically motivated terrorists. This raises challenges of how to make the web secure, but does not really pose new questions for international law.

The Estonia case may change that, even if (as it seems) there was no official Russian involvement. Military attacks now often focus on using armaments to disable communications infrastructure, and it is hard to see how cyber-attacks are fundamentally different. The United Nations Charter declares its intention to prevent ‘acts of aggression’, but doesn’t define what those are — a deliberate omission, meant to deny aggressors loopholes, that now looks all the more shrewd.
Irving Lachow, a specialist on information warfare at the National Defense University in Washington, DC, agrees that the issue is unclear at present. “One of the challenges here is figuring out how to classify a cyber-attack”, he says. “Is it a criminal act, a terrorist act, or an act of war? It is hard to make these determinations but important because different laws apply.” He says that the European Convention on Cyber Crime probably wouldn’t apply to a state-sponsored attack, and that while there are clear UN policies regarding ‘acts of war’, it’s not clear what kind of cyber-attack would qualify. “In my mind, the key issues here are intent and scope”, he says. “An act of war would try to achieve a political end through the direct use of force, via cyberspace in this case.”

And what would be the appropriate response to state-sanctioned cyber-attack? The use of military force seems excessive, and could in any case be futile. Some think that the battle will have to be joined online – but with no less a military approach than in the flesh-and-blood world. Computer security specialist Winn Schwartau has called for the creation of a ‘Fourth Force’, in addition to the army, navy and air force, to handle cyberspace.

That would be to regard cyberspace as just another battleground. But perhaps instead this should be seen as further reason to abandon traditional notions about what warfare is, and to reconsider what, in the twenty-first century, it is now becoming.

Wednesday, May 16, 2007

There’s no such thing as a free fly
[This is the pre-edited version of my latest article for muse@nature.com]

Neuroscience can’t show us the source of free will, because it’s not a scientific concept.

Gluing a fly’s head to a wire and watching it trying to fly sounds more like the sort of experiment a naughty schoolboy would conduct than one that turns out to have philosophical and legal implications.

But that’s the way it is for the work reported this week by a team of neurobiologists in the online journal PLoS ONE [1]. They say their study of the ‘flight’ of a tethered fly reveals that the fly’s brain has the ability to be spontaneous – to make decisions that aren’t predictable responses to environmental stimuli.

The researchers think this might be what underpins the notorious cussedness of laboratory animals, wryly satirized in the so-called Harvard Law of Animal Behavior: “Under carefully controlled experimental circumstances, an animal will behave as it damned well pleases.”

But in humans, this apparently volitional behaviour seems all but indistinguishable from what we have traditionally called free will. In other words, the work seems to imply that even fruit-fly brains are hard-wired to display something we might as well call free will.

The flies are tethered inside a blank white cylinder, devoid of all environmental cues about which direction to take. If the fly were nothing but an automaton impelled hither and thither by external inputs, it would in these circumstances be expected to fly in purely random directions. Although the wire stops the fly from actually moving, its attempts to do so create a measurable tug on the wire that reveals its ‘intentions’.

Björn Brembs of the Free University of Berlin and his colleagues found that these efforts aren’t random. Instead, they reveal a pattern that, for an unhindered fly, would mean alternating localized buzzing with occasional big hops.

This kind of behaviour has been seen in other animals (and in humans too), where it has been interpreted as a good foraging strategy: if a close search of one place doesn’t bring results, you’re better off moving far afield and starting afresh.
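
For the statistically minded, this ‘mostly small steps, occasionally huge ones’ pattern resembles what modellers call a Lévy-like walk. The sketch below is my own illustration of the distinction, not the authors’ analysis: step lengths drawn from a normal distribution almost never stray far from the average, whereas steps drawn from a heavy-tailed Pareto distribution mix tight local searching with rare long excursions.

```python
import random

random.seed(1)  # reproducible illustration

def gaussian_steps(n):
    # Ordinary random walk: nearly all steps are about the same size.
    return [abs(random.gauss(0, 1)) for _ in range(n)]

def heavy_tailed_steps(n, alpha=1.5):
    # Pareto-distributed steps: mostly modest, occasionally enormous,
    # giving the 'local buzzing plus rare big hops' signature.
    return [random.paretovariate(alpha) for _ in range(n)]

for name, steps in [("gaussian", gaussian_steps(10_000)),
                    ("heavy-tailed", heavy_tailed_steps(10_000))]:
    steps.sort()
    print(f"{name:12s} median={steps[len(steps) // 2]:.2f}  max={steps[-1]:.1f}")
```

The two medians are comparable, but the largest heavy-tailed step dwarfs the largest Gaussian one; it is the rare giant step that makes the difference.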

But this was thought to rely on feedback from the environment, and not to be intrinsic to the animals’ brains. Brembs and colleagues say that, in contrast, there is a ‘spontaneity generator’ in the flies’ brains that does not depend on external information in any determinate way.

Is that really ‘free will’, though? No one is suggesting that the flies are making conscious choices; the idea is simply that this neural ‘spontaneity circuit’ is useful in evolutionary terms, and so has become hard-wired into the brain.

But it could, the researchers say, be a kind of precursor to the mental wiring of humans that would enable us to evade the prompts of our own environmentally conditioned responses and ‘make up our own minds’ – to exercise what is commonly interpreted as free will. “If such circuits exist in flies, it would be unlikely that they do not exist in humans, as this would entail that humans were more robot-like than flies”, Brembs says.

These neural circuits mean that you can know everything about an organism’s genes and environment yet still be unable to anticipate its caprices. If that’s so – and the researchers now intend to search for the neural machinery involved – this adds a new twist to the current debate that neuroscience has provoked about human free will.

Some neuroscientists have argued that, as we become increasingly informed about the way our behaviour is conditioned by the physical and chemical makeup of our brains, the notion of legal responsibility will be eroded. Criminals will be able to argue their lack of culpability on the grounds that “my brain made me do it”.

While right-wing and libertarian groups fulminate at the idea that this will hinder the law’s ability to punish and will strip the backbone from the penal system, some neuroscientists feel that it will merely change its rationale, making it concerned less with retribution and more with utilitarian prevention and social welfare. According to psychologists Joshua Greene and Jonathan Cohen of Princeton University, “Neuroscience will challenge and ultimately reshape our intuitive sense(s) of justice” [2].

If neuroscience indeed threatens free will, some of the concerns of the traditionalists are understandable. It’s hard to see how notions of morality could survive a purely deterministic view of human nature, in which our actions are simply automatic responses to external stimuli and free will is an illusion spun from our ignorance about cause and effect. And it is a short step from such determinism to the pre-emptive totalitarianism depicted in the movie Minority Report, where people are arrested for crimes they have yet to commit.

But while this ‘hard’ mechanical determinism may have made sense to political philosophers of the Enlightenment – it was the basis of Thomas Hobbes’ theory of government, for example – it is merely silly today, and for a number of reasons.

First, it places its trust in a linear, Cartesian mechanics of cogs and levers that clearly has nothing to do with the way the brain works. If nothing else, the results of Brembs and colleagues show that even the fly’s brain is highly nonlinear, like the weather system, and not susceptible to precise prediction.
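
The unpredictability that nonlinearity brings is easy to demonstrate. The sketch below uses the logistic map, a standard textbook emblem of chaos and not a model of any real neural circuit: two copies are started from states differing by one part in a billion, and within a few dozen iterations their trajectories bear no relation to each other, even though the governing rule is known exactly.

```python
# Sensitive dependence on initial conditions in the logistic map
# x -> r * x * (1 - x), with r in the chaotic regime.

r = 3.9                   # parameter value giving chaotic dynamics
x, y = 0.5, 0.5 + 1e-9    # two almost identical starting states

for step in range(60):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 9:
        # The gap grows roughly exponentially until it is as large
        # as the states themselves.
        print(f"step {step + 1:2d}: difference = {abs(x - y):.2e}")
```

Determinism, in other words, is no guarantee of predictability.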

Second, this discussion of ‘free will’ repeats the old canard, apparently still dear to the hearts of many neuroscientists, evolutionary biologists and psychologists, that our behaviour is governed by the way our minds work in isolation. But as neuroscientists Michael Gazzaniga and Megan Steven have pointed out [3], we act in a social context. “Responsibility is a social construct and exists in the rules of society”, they say. “It does not exist in the neuronal structures of the brain”.

This should be trivially obvious, but is routinely overlooked. Other things being equal, rates of violent crime are frequently higher where there is socioeconomic deprivation. This doesn’t make it a valid defence to say ‘society made me do it’, but it shows that the interactions between environment, neurology and behaviour are complex and ill-served either by neurological determinism or by a libertarian insistence on untrammelled ‘free will’ as the basis of responsibility and penal law.

The fact is that ‘free will’ is (like life and love) one of those culturally useful notions that turn into shackles when we try to make them ‘scientific’. That’s why it is unhelpful to imply that the brains of flies or humans might contain a ‘free will’ module simply because they have a capacity to scramble the link between cause and effect. Free will is a concept for poets and novelists, and, if it keeps them happy, for philosophers and moralists. In science and politics, it deserves no place.

References
1. Maye, A. et al. PLoS ONE 2(5), e443 (2007).
2. Greene, J. & Cohen, J. Phil. Trans. R. Soc. Lond. B 359, 1775–1785 (2004).
3. Gazzaniga, M. S. & Steven, M. S. Sci. Am. Mind, April 2005.


Philosophers, scientists and writers on free will

“The will cannot be called a free cause, but only necessary…. Things could have been produced by God in no other manner and in no other order than that in which they have been produced.”
Baruch Spinoza, Ethics

“Whatever concept one may hold, from a metaphysical point of view, concerning the freedom of the will, certainly its appearances, which are human actions, like every other natural event are determined by universal laws.”
Immanuel Kant, On History

“As a matter of fact, if ever there shall be discovered a formula which shall exactly express our wills and whims; if there ever shall be discovered a formula which shall make it absolutely clear what those wills depend upon, and what laws they are governed by, and what means of diffusion they possess, and what tendencies they follow under given circumstances; if ever there shall be discovered a formula which shall be mathematical in its precision, well, gentlemen, whenever such a formula shall be found, man will have ceased to have a will of his own—he will have ceased even to exist.”
Fyodor Dostoevsky, Notes from the Underground

“Free will is for history only an expression connoting what we do not know about the laws of human life.”
Leo Tolstoy, War and Peace

“There once was a man who said ‘Damn!’
It is borne in upon me I am
An engine that moves
In predestinate grooves
I’m not even a bus, I’m a tram.”
Maurice Evan Hare, 1905

“We cannot prove… that human behaviour… is fully determined, but the position becomes more plausible as facts accumulate.”
B. F. Skinner, About Behaviorism

“Free will, as we ordinarily understand it, is an illusion. However, it does not follow… that there is no legitimate place for responsibility.”
Joshua Greene & Jonathan Cohen, 2004

Monday, May 14, 2007

Should we get engaged?
[This is the pre-edited version of my Crucible column for the June issue of Chemistry World.]

In 2015 the BBC broadcast a documentary called ‘Whatever happened to nanotechnology?’ Remember the radical predictions being made in 2006, it asked, such as curing blindness? Well, things didn’t turn out to be so simple. On the other hand, nor have the forecasts of nano-doom come to pass. Instead, there’s simply been plenty of solid, incremental science that has laid the groundwork for a brighter technological future.

This scenario, imagined in a European Union working paper, “Strategy for Communication Outreach in Nanotechnology”, sounds a little unlikely, not least because television is increasingly less interested in stories with such anodyne conclusions. But this, the paper suggests, is the optimistic outcome: one where nanotech has not been derailed by inept regulation, industrial mishaps and public disenchantment.

The object of the exercise is to tell the European Commission how to promote “appropriate communication in nanotechnology.” The present working paper explains that “all citizens and stakeholders, in Europe and beyond, are welcome to express comments, opinions and suggestions by end June 2007”, which will inform a final publication. So there’s still time if you feel so inclined.

One of the striking things about this paper is that it implies one now has to work frightfully hard, using anything from theatre to food, to bridge the divide between science and the public – and all, it seems, so that the public doesn’t pull the plug through distrust. If that’s really so, science is in deep trouble. But it may be in the marketplace, not the research lab, that public perception really holds sway.

What, however, is “appropriate communication” of technology?

Previous EU documents have warned that nanotechnology is poorly understood and difficult to grasp, and that its benefits are tempered by risks that need to be openly stated and investigated. “Without a serious communication effort,” one report suggests, “nanotechnology innovations could face an unjust negative public reception. An effective two-way dialogue is indispensable, whereby the general public’s views are taken into account and may be seen to influence [policy] decisions”.

This is, of course, the current mantra of science communication: engagement, not education. The EU paper notes that today’s public is “more sceptical and less deferential”, and that therefore “instead of the one-way, top down process of seeking to increase people’s understanding of science, a two-way iterating dialogue must be addressed, where those seeking to communicate the wonders of their science also listen to the perceptions, concerns and expectations of society.”

And so audiences are no longer lectured by a professor but discuss the issues with panels that include representatives from Greenpeace. There’s much that is productive and progressive in that. But in his bracingly polemical book The March of Unreason (OUP, 2005), Lord Dick Taverne challenges its value and points out that ‘democracy’ is a misplaced ideal in science. “Why should science be singled out as needing more democratic control when other activities, which could be regarded as equally ‘elitist’ and dependent on special expertise, are left alone?” he asks. Why not ‘democratic art’?

Taverne’s critique is spot-on. There now seems to be no better sport than knocking ‘experts’ who occasionally get things wrong, eroding the sense that we should recognize expertise at all. This habitual scepticism isn’t always the result of poor education – or rather, it is often the result of an extremely expensive but narrow one. The deference of yore often led to professional arrogance; but today’s universal scepticism makes arrogance everyone’s prerogative.

Another danger with ‘engagement’ is that it tends to provide platforms for a narrow spectrum of voices, especially those with axes to grind. The debate over climate change has highlighted the problems of insisting on ‘balance’ at the expense of knowledge or honesty.

Nanotechnology, however, has been one area where ‘public engagement’ has often been handled rather well. A three-year UK project called Small Talk hosted effective public debates and discussions on nanotechnology while gathering valuable information about what people really knew and believed. Its conclusions were rather heartening. People’s attitudes to nanotechnology are not significantly different from their attitudes to any new technology, and are generally positive. People are less concerned about specific risks than about the regulatory structures that contain it. The public perception of risk, however, continues to be a pitfall: many now think that a ‘safe’ technology is one for which all risks have been identified and eliminated. But as Taverne points out, such a zero-risk society “would be a paradise only for lawyers.”

The EU’s project is timely, however, for the UK’s Council for Science and Technology, an independent advisory body to the government, has just pronounced in rather damning terms on the government’s efforts to ‘engage’ with the social and ethical aspects of nanotech. Its report looks at progress on this issue since the publication of a nanotech review prepared for the government in 2004 by the Royal Society and the Royal Academy of Engineering. “The report led to the UK being seen as a world leader in its engagement with nanotechnologies”, it says. “However, today the UK is losing that leading position.”

It attributes this mainly to a failure to institute a coherent approach to the study of nano-toxicology, the main immediate hazard highlighted by the 2004 review. “In the past five years, only £3m was spent on toxicology and the health and environmental impacts of nanomaterials”, it says, and “there is as yet little conclusive data concerning the long-term environmental fate and toxicity of nanomaterials.”

Mark Welland, one of the expert advisers on this report, confirms that view. “The 2004 recommendations have been picked up internationally”, he says, “but the UK government has done almost nothing towards toxicology.” Like others, he fears that inaction could open the doors to a backlash like that against genetically modified organisms or the MMR vaccine.

If that’s so, maybe we do need good ideas about how to communicate. But that’s only part of an equation that must also include responsible industrial practice, sound regulation, broad vision and, not least, good research.

Prospects for the LHC
[This is my pre-edited Lab Report column for the June issue of Prospect.]

Most scientific instruments are doors to the unknown; that’s been clear ever since Robert Hooke made exquisite drawings of what he saw through his microscope. They are invented not to answer specific questions – what does a flea look like up close? – but for open-ended study of a wide range of problems. This is as true of the mercury thermometer as it is of the Hubble Space Telescope.

But the Large Hadron Collider (LHC), under construction at the European centre for high-energy physics (CERN) in Geneva, is different. Particle physicists rightly argue that, because it will smash subatomic particles into one another with greater energy than ever before, it will open a window on a whole new swathe of reality. But the only use of the LHC that anyone ever hears or cares about is the search for the Higgs boson.

This is pretty much the last missing piece of the so-called Standard Model of fundamental physics: the suite of particles and their interactions that explains all known events in the subatomic world. The Higgs boson is the particle associated with the Higgs field, which pervades all space and, by imposing a ‘drag’ on other particles, gives them their mass. (In the Standard Model all the fields that create forces have associated particles: electromagnetic fields have photons, the strong nuclear force has gluons.)
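
For readers who like the metaphor in symbols: in the Standard Model the Higgs field takes a nonzero average value throughout space, usually written v ≈ 246 GeV, and each fermion’s mass is set by how strongly it couples to that field (a standard textbook relation, not something specific to the LHC programme):

$$ m_f = \frac{y_f \, v}{\sqrt{2}}, \qquad m_W = \frac{g \, v}{2} $$

Here y_f is the fermion’s Yukawa coupling and g the weak gauge coupling; the ‘drag’ in the popular picture is simply the size of y_f.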

To make a Higgs boson, you need to release more energy in a particle collision than has so far been possible with existing colliders. But the Tevatron accelerator at Fermilab near Chicago comes close, and could conceivably still glimpse the Higgs before it is shut down in 2009. No one wants to admit that this is a race, but a race it undoubtedly is – and Fermilab would love to spot the Higgs first.

Which makes it all the more awkward that components supplied by Fermilab for the LHC have proven to be faulty – most recently, a huge magnet that shifted and ruptured a pipe. Fermilab admits to embarrassment at the ‘oversight’, but it has set the rumour mills grinding. For this and (primarily) other reasons, the LHC now seems unlikely to make its first test run at the end of this year. Among other things, it needs to be refrigerated to close to absolute zero, which can’t be done in a hurry.

Extravagant promises can only be sustained for so long without delivery, and so the delays could test public sympathy, which has so far been very indulgent of the LHC. As an instrument costing billions that has only one really big question in sight, the supercollider is already in a tight spot: everyone thinks they know the answer already (the Higgs exists), and that may yet be confirmed before the LHC comes online. But this is a universal problem for high-energy physics today, where all the major remaining questions demand unearthly energies. There’s a chance that the LHC may turn up some surprises – evidence of extra dimensions, say, or of particles that lie outside the Standard Model. But the immense and expensive technical challenges involved in exploring every theoretical wrinkle mean that new ideas cannot be broached idly. And arguably science does not flourish where the agenda must be set by consensus and there is no room left for play.

*****

The idea that the UK has lost a ‘world lead’ in nanotechnology, suggested recently in the Financial Times, raised the question of when the UK ever had it. The headline was sparked by a report released in March by the Council for Science and Technology, a government advisory body. But Mark Welland, a nanotech specialist at Cambridge University and one of the report’s expert contributors, says that wires got crossed: the report’s criticisms were concerned primarily with the social, environmental and ethical aspects of nanotech. These were explored in depth in an earlier review of nanotechnology, the science of the ultrasmall, conducted by the Royal Society and the Royal Academy of Engineering and published in 2004.

That previous report highlighted the potential toxicity of nanoparticles – tiny grains of matter, which are already being used in consumer products – as one of the most pressing concerns, and recommended that the government establish and fund a coherent programme to study it. Welland says that some of those suggestions have been picked up internationally, but “nothing has happened here.” The 2004 report created an opportunity for the UK to lead the field in nano-toxicology, he says, and this is what has now been squandered.

What of the status of UK nanotech more generally? Welland agrees that it has never been impressive. “There’s no joined-up approach, and a lack of focus and cohesion between the research councils. Other European countries have much closer interaction between research and commercial exploitation. And the US and Japan have stuck their necks out a lot further. Here we have just a few pockets of stuff that’s really good.”

The same problems hamstrung the UK’s excellence in semiconductor technology in the 1970s. But there are glimmers of hope: Nokia has just set up its first nanotech research laboratory in Cambridge.

*****

As the zoo of extrasolar planets expands – well over 100 are now known – some oddballs are bound to appear. Few will be odder than HD 149026b, orbiting its star in the Hercules constellation 260 light years away. Its surface, at about 2,050 °C, is about as hot as a small star, yet the planet is blacker than charcoal and may glow like a giant ember. Both quirks are unexplained. One possibility is that the pitch-black atmosphere absorbs every watt of starlight and then instantly re-emits it – strange, but feasible. At any rate, the picture of planetary diversity gleaned from our own solar system is starting to look distinctly parochial.
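
A rough back-of-envelope check suggests the numbers at least hang together. In the sketch below (my own estimate; the stellar parameters are approximate assumptions, not quoted measurements), a planet with zero albedo that re-radiates only from its day side gets into the neighbourhood of the reported temperature, whereas one that spreads the heat around the whole globe falls well short.

```python
from math import sqrt

# Equilibrium temperature of HD 149026b, back-of-envelope style.
# All stellar and orbital inputs are approximate assumptions.
T_star = 6150.0            # stellar effective temperature, K (assumed)
R_star = 1.45 * 6.96e8     # stellar radius in metres (assumed 1.45 solar radii)
a = 0.042 * 1.496e11       # orbital distance in metres (assumed 0.042 AU)

# Zero albedo, heat redistributed over the whole planet:
T_uniform = T_star * sqrt(R_star / (2 * a))
# Zero albedo, prompt re-emission from the day side only (2**0.25 hotter):
T_dayside = T_uniform * 2 ** 0.25

print(f"uniform redistribution: {T_uniform - 273:.0f} degrees C")
print(f"day-side re-emission:   {T_dayside - 273:.0f} degrees C")
# Roughly 1500 and 1800 degrees C: only the black, day-side-emitting case
# approaches the reported ~2050 degrees C.
```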

Wednesday, May 02, 2007

PS This is all wrong

So there you are: your paper is written, and you’ve got it accepted in the world’s leading physics journal, and it has something really interesting to say. You’ve done the calculations and they just don’t match the observations. What this implies is dramatic: we’re missing a crucial part of the puzzle, some new physics, namely a fifth fundamental force of nature. Wow. OK, so that’s a tentative conclusion, but it’s what the numbers suggest, and you’ve been suitably circumspect in reporting it, and the referees have given the go-ahead.

Then, with the page proofs in hand, you decide to just go back and check the observations, which need a bit of number-crunching before the quantitative result drops out. And you find that the people who reported this originally haven’t been careful enough, and their number was wrong. When you recalculate, the match with conventional theory is pretty good: there’s no need to invoke any new physics after all.

So what do you do?

I’d suggest that what you don’t do is what an author has just done: add a cryptic ‘note in proof’ and publish anyway. Cryptic in that what it doesn’t say is ‘ignore all that has gone before: my main result, as described in the abstract, is simply invalid’. Cryptic in that it refers to the revision of the observed value, but says this is in good agreement ‘with the predictions above’ – by which you mean, not the paper’s main conclusions, but the ‘predictions’ using standard theory that the paper claims are way off beam. Cryptic in that this (possibly dense) science writer had to read it several times before sensing something was badly wrong.

In fact, I’d contend that you should ideally withdraw the paper. Who gains from publishing a paper that, if reported accurately, ends with a PS admitting it is wrong?

True, this is all a little complex. For one thing, it could be a postgrad’s thesis work at stake. But no one gets denied a PhD because perfectly good theoretical work turns out to be invalidated by someone else’s previous mistake. And what does a postgrad really gain by publishing a paper making bold claims in a prominent journal that ends by admitting it is wrong?

True, the work isn’t useless – as the researcher concerned argued when I contacted him (having already written the story, needing only to add some quotes), the discrepancy identified in the study is what prompted the re-analysis of the data that brought the previous error to light. But you have a preprint written that reports the new analysis; surely you can just add to that a comment alluding to this false trail and the impetus it provided. In fact, your current paper is itself already on the preprint server – you just need to cite that. The whole world no longer needs to know.

No, this is a rum affair. I’m not sure that the journal in question really knew what it was publishing – that the ‘note added in proof’ invalidated the key finding. If it did, I’m baffled by the decision. And while I’m miffed at my wasted time, the issue has much more to do with propriety. Null results are one thing, but this is just clutter. I realize it must be terribly galling to find that your prized paper has been rendered redundant on the eve of publication. But that’s science for you.