Friday, June 29, 2007


Designs for life


[More matters arising from the Greenland conference: in this case, a paper that John Glass of the Venter Institute discussed, and which is now published in Science. It has had a lot of press, and rightly so. Here is the article I have written for Nature's News & Views section, which will appear in next week's issue.]

The genome of one bacterium has been successfully replaced with that of a different bacterium, transforming one species into another. This development is a harbinger of whole-genome engineering for practical ends.

If your computer doesn’t do the things you want, give it a new operating system. As they describe in Science [1], Carole Lartigue and colleagues at the J. Craig Venter Institute in Rockville, Maryland, have now demonstrated that the same idea will work for living cells. In an innovation that presages the dawn of organisms redesigned from scratch, the authors report the transplantation of an entire genome between species. They have moved the genome from one bacterium, Mycoplasma mycoides, to another, Mycoplasma capricolum, and have shown that the recipient cells can be ‘booted up’ with the new genome — in effect, a transplant that converts one species into another.

This is likely to be a curtain-raiser for the replacement of an organism’s genome with a wholly synthetic one, made by DNA-synthesis technology. The team at the Venter Institute hopes to identify the ‘minimal’ Mycoplasma genome: the smallest subset of genes that will sustain a viable organism [2]. The group currently has a patent application for a minimal bacterial genome of 381 genes identified in Mycoplasma genitalium, the remainder of the organism’s 485 protein-coding genes having been culled as non-essential.

This stripped-down genome would provide a ‘chassis’ on which organisms with new functions might be designed by combining it with genes from other organisms — for example, those encoding cellulase and hydrogenase enzymes, for making cells that respectively break down plant matter and generate hydrogen. Mycoplasma genitalium is a candidate platform for this kind of designer-genome synthetic biology because of its exceptionally small genome [2]. But it has drawbacks, particularly a relatively slow growth rate and a requirement for complex growth media: it is a parasite of the primate genital tract, and is not naturally ‘competent’ (that is, able to take up foreign DNA) on its own. Moreover, its genetic proof-reading mechanisms are sloppy, giving it a rapid rate of mutation and evolution. The goat pathogens M. mycoides and M. capricolum are somewhat faster-growing, dividing in less than two hours.

Incorporation of foreign DNA into cells happens naturally, for example when viruses transfer DNA between bacteria. And in biotechnology, artificial plasmids (circular strands of DNA) a few kilobases in size are routinely transferred into microorganisms using techniques such as electroporation to get them across cell walls. In these cases, the plasmids and host-cell chromosomes coexist and replicate independently. It has remained unclear to what extent transfected DNA can cause a genuine phenotypic change in the host cells — that is, a full transformation in a species’ characteristics. Two years ago, Itaya et al. [3] transferred almost an entire genome of the photosynthetic bacterium Synechocystis PCC6803 into the bacterium Bacillus subtilis. But most of the added genes were silent and the cells remained phenotypically unaltered.

Genome transplantation in Mycoplasma is relatively easy because these organisms lack a bacterial cell wall, having only a lipid bilayer membrane. Lartigue et al. extracted the genome of M. mycoides by suspending the bacterial cells in agarose gel before breaking them open, then digesting the proteinaceous material with proteinase enzymes. This process leaves circular chromosomes, virtually devoid of protein and protected from shear stress by the agarose encasement. This genetic material was transferred to M. capricolum cells in the presence of polyethylene glycol, a compound known to cause fusion of eukaryotic cells (those with genomes contained in a separate organelle, the nucleus). Lartigue et al. speculate that some M. capricolum cells may have fused around the naked M. mycoides genomes.

The researchers did not need to remove the recipient’s DNA before adding that of the donor; instead, they added an antibiotic-resistance gene to the M. mycoides donor genome. With two genomes already present, no replication was needed before the recipient cells could divide: one daughter cell had the DNA of M. capricolum, the other that of M. mycoides. But in the presence of the antibiotic, only the latter survived. Some M. capricolum colonies did develop in the transplanted cells after about ten days, perhaps because their genomes recombined with the antibiotic-resistant M. mycoides. But most of the cells, and all of those that formed in the first few days, seemed to be both genotypically and phenotypically M. mycoides, as assessed by means of specific antibodies and proteomic analysis.

The main question raised by this achievement is how much difference a transplant will tolerate. That is, how much reprogramming is possible? The DNA sequences of M. mycoides and M. capricolum are only about 76% the same, and so it was by no means obvious that the molecular machinery of one would be able to operate on the genome of the other. Yet synthetic biology seems likely to make possible many new cell functions, not by whole-genome transplants but by fusing existing genomes. When John I. Glass, a member of the Venter Institute’s team, presented the transplant results at a recent symposium on the merging of synthetic biology and nanotechnology [4], he also described the institute’s work on genome fusion (further comments on matters arising from the symposium appeared in last week’s issue of Nature [5]).

One target is to develop an anaerobic Clostridium species that will digest plant cellulose into ethanol, thus generating a fuel from biomass. Cellulose is difficult to break down — which is why trees remain standing for so long — but it can be done by Clostridium cellulolyticum. However, the product is glucose, not a fuel. Clostridium acetobutylicum, meanwhile, makes butanol and other alcohols, but not from cellulose. So a combination of genes from both organisms might do the trick. For such applications, it remains to be seen whether custom-built vehicles or hybrids will win the race.

1. Lartigue, C. et al. Science Express doi:10.1126/science.1144622 (2007).
2. Fraser, C. M. et al. Science 270, 397–403 (1995).
3. Itaya, M. et al. Proc. Natl Acad. Sci. USA 102, 15971–15976 (2005).
4. Kavli Futures Symposium ‘The Merging of Bio and Nano: Towards Cyborg Cells’, 11–15 June 2007, Ilulissat, Greenland.
5. Editorial Nature 447, 1031–1032 (2007).

Tuesday, June 26, 2007

What is life? A silly question

[This will appear as a leader in next week's Nature, but not before having gone through an editorial grinder...]

While there is probably no technology that has not at some time been deemed an affront to God, none invites the accusation to the same degree as synthetic biology. Only a deity predisposed to cut-and-paste would suffer any serious challenge from genetic engineering as it has been practised in the past. But the efforts to redesign living organisms from scratch – either with a wholly artificial genome made by DNA synthesis technology or, more ambitiously, by using non-natural, bespoke molecular machinery – really might seem to justify the suggestion, made recently by the ETC Group, an environmental pressure group based in Ottawa, that “for the first time, God has competition.”

That accusation was levelled at scientists from the J. Craig Venter Institute in Rockville, Maryland, based on the suspicion that they had synthesized an organism with an artificial genome in the laboratory. The suspicion was unfounded – but this feat will surely be achieved in the next few years, judging from the advances reported at a recent meeting in Greenland on the convergence of synthetic biology and nanotechnology and the progress towards artificial cells.*

But one of the views commonly held by participants was that to regard such efforts as ‘creating life’ is more or less meaningless. This trope has such deep cultural roots, travelling via the medieval homunculus and the golem of Jewish legend to the modern Faustian myth written by Mary Shelley, that it will surely be hard to dislodge. Scientific attempts to draw up criteria for what constitutes ‘life’ only bolster the popular notion that it is something that appears when a threshold is crossed – a reminder that vitalism did not die alongside spontaneous generation.

It would be a service to more than synthetic biology if we might now be permitted to dismiss the idea that life is a precise scientific concept. One of the broader cultural benefits of attempts to make artificial cells is that they force us to confront the contextual contingency of the word. The trigger for the ETC Group’s protest was a patent filed by the Venter Institute last October on a ‘minimal bacterial genome’: a subset of genes, identified in Mycoplasma genitalium, required for the organism to be viable ‘in a rich bacterial culture medium’. That last sounds like a detail, but is in fact essential. The minimal requirements depend on the environment – on what the organism does and doesn’t have to synthesize, for example, and what stresses it experiences. And participants at the Greenland meeting added the reminder that cells do not live alone, but in colonies and, in general, in ecosystems. Life is not a solitary pursuit.

Talk of ‘playing God’ will mostly be indulged either as a lazy journalistic cliché or as an alarmist slogan. But synthetic biology’s gradualist and relative view of what life means should perhaps be invoked to challenge equally lazy perspectives on life that are sometimes used to defend religious dogma. If, for example, this view undermines the notion that a ‘spark of humanity’ abruptly animates a fertilized egg – if the formation of a new being is recognized more clearly to be gradual, contingent and precarious – then the role of the term ‘life’ in that debate might acquire the ambiguity it has always warranted.

*Kavli Futures Symposium, 11-15 June, Ilulissat, Greenland.

Monday, June 25, 2007


The Ilulissat Statement

[This is a statement drafted by the participants of the conference in Greenland that I attended two weeks ago. Its release today coincides with the start of the third conference on synthetic biology in Zürich.]

Synthesizing the Future

A vision for the convergence of synthetic biology and nanotechnology

This document expresses the views that emerged from the Kavli Futures Symposium ‘The merging of bio and nano: towards cyborg cells’, 11-15 June 2007, Ilulissat, Greenland.

Approximately fifty years ago, two revolutions began. The invention of the transistor and the integrated circuit paved the way for the modern information society. At the same time, Watson and Crick unlocked the structure of the double helix of DNA, exposing the language of life with stunning clarity. The electronics revolution has changed the way we live and work, while the genetic revolution has transformed the way we think about life and medical science.

But a third innovation contemporaneous with these was the discovery by Miller and Urey that amino acids may be synthesized in conditions thought to exist on the early Earth. This gave us tantalizing hints that we could create life from scratch. That prospect on the one hand, and the ability to manipulate genetic information using the tools of biotechnology on the other, are now combined in the emerging discipline of synthetic biology. How we shape and implement this revolution will have profound effects for humanity in the next fifty years.

It was also almost fifty years ago that the proposal was made by Feynman of engineering matter at the atomic scale – the first intimation of the now burgeoning field of nanotechnology. Since the nanoscale is also the natural scale on which living cells organize matter, we are now seeing a convergence in which molecular biology offers inspiration and components to nanotechnology, while nanotechnology has provided new tools and techniques for probing the fundamental processes of cell biology. Synthetic biology looks sure to profit from this trend.

It is useful to divide synthetic biology, like computer technology, into two parts: hardware and software. The hardware – the molecular machinery of synthetic biology – is rapidly progressing. The ability to sequence and manufacture DNA is growing exponentially, with costs dropping by a factor of two every two years. The construction of arbitrary genetic sequences comparable to the genome size of simple organisms is now possible. Turning these artificial genomes into functioning single-cell factories is probably only a matter of time. On the hardware side of synthetic biology, the train is leaving the station. All we need to do is stoke the engine (by supporting foundational research in synthetic biology technology) and tell the train where to go.

Less clear are the design rules for this remarkable new technology—the software. We have decoded the letters in which life’s instructions are written, and we now understand many of the words – the genes. But we have come to realize that the language is highly complex and context-dependent: meaning comes not from linear strings of words but from networks of interconnections, with its own entwined grammar. For this reason, the ability to write new stories is currently beyond our ability – although we are starting to master simple couplets. Understanding the relative merits of rational design and evolutionary trial-and-error in this endeavor is a major challenge that will take years if not decades. This task will have fundamental significance, helping us to better understand the web of life as expressed in both the genetic code and the complex ecology of living organisms. It will also have practical significance, allowing us to construct synthetic cells that achieve their applied goals (see below) while creating as few problems as possible for the world around them.

These are not merely academic issues. The early twenty-first century is a time of tremendous promise and tremendous peril. We face daunting problems of climate change, energy, health, and water resources. Synthetic biology offers solutions to these issues: microorganisms that convert plant matter to fuels or that synthesize new drugs or target and destroy rogue cells in the body. As with any powerful technology, the promise comes with risk. We need to develop protective measures against accidents and abuses of synthetic biology. A system of best practices must be established to foster positive uses of the technology and suppress negative ones. The risks are real, but the potential benefits are truly extraordinary.

Because of the pressing needs and the unique opportunity that now exists from technology convergence, we strongly encourage research on two broad fronts:

Foundational Research
1. Support the development of hardware platforms for synthetic biology.
2. Support fundamental research exploring the software of life, including its interaction with the environment.
3. Support nanotechnology research to assist in the manufacture of synthetic life and its interfacing with the external world.

Societal Impacts and Applications
4. Support programs directed to address the most pressing applications, including energy and health care.
5. Support the establishment of a professional organization that will engage with the broader society to maximize the benefits, minimize the risks, and oversee the ethics of synthetic life.
6. Develop a flexible and sensible approach to ownership, sharing of knowledge, and regulation, that takes into account the needs of all stakeholders.

Fifty years from now, synthetic biology will be as pervasive and transformative as is electronics today. And as with that technology, the applications and impacts are impossible to predict in the field’s nascent stages. Nevertheless, the decisions we make now will have enormous impact on the shape of this future.

The people listed below, participants at the Kavli Futures Symposium ‘The merging of bio and nano: towards cyborg cells’, 11-15 June 2007, Ilulissat, Greenland, agree with the above statement.

Robert Austin
Princeton University, Princeton, USA

Philip Ball
Nature, London, United Kingdom

Angela Belcher
Massachusetts Institute of Technology, Cambridge, USA

David Bensimon
Ecole Normale Superieure, Paris, France

Steven Chu
Lawrence Berkeley National Laboratory, Berkeley, USA

Cees Dekker
Delft University of Technology, Delft, The Netherlands

Freeman Dyson
Institute for Advanced Study, Princeton, USA

Drew Endy
Massachusetts Institute of Technology, Cambridge, USA

Scott Fraser
California Institute of Technology, Pasadena, USA

John Glass
J. Craig Venter Institute, Rockville, USA

Robert Hazen
Carnegie Institution of Washington, Washington, USA

Joe Howard
Max Planck Institute of Molecular Cell Biology and Genetics, Dresden, Germany

Jay Keasling
University of California at Berkeley, Berkeley, USA

Hiroaki Kitano
The Systems Biology Institute, and Sony Computer Science Laboratories, Japan

Paul McEuen
Cornell University, Ithaca, USA

Petra Schwille
TU Dresden, Dresden, Germany

Ehud Shapiro
Weizmann Institute of Science, Rehovot, Israel

Julie Theriot
Stanford University, Stanford, USA

Thursday, June 21, 2007


Should synthetic biologists patent their components?
[This, a piece for Nature’s online muse column, is the first fruit of the wonderful workshop I attended last week in Greenland on the convergence of synthetic biology and nanotechnology. That explains the apparent non-sequitur of a picture – taken at midnight in Ilulissat, way above the Arctic Circle. Much more on this to follow…(I may run out of pictures). Anyway, this piece surely won’t survive at this length on the Nature web pages, so here it is in full.]

Behind scary talk about artificial organisms lie real questions about the ownership of biological ‘parts’.

“For the first time, God has competition”, claimed environmental pressure organization the ETC Group two weeks ago. With this catchy headline, they aimed to raise the alarm about a patent on “the world’s first-ever human-made species”, a bacterium allegedly created “with synthetic DNA” in the laboratories of the Venter Institute in Rockville, Maryland.

ETC had discovered a US patent application (20070122826) filed last October by the Venter Institute scientists. The institute was established by genomics pioneer Craig Venter, and one of its goals is to make a microorganism stripped of all non-essential genes (a ‘minimal cell’) as a platform for designing living cells from the bottom up.

ETC’s complaint was a little confused, because it could not figure out whether the synthetic bacterium, which the group dubbed ‘Synthia’, has actually been made. On close reading of the patent, however, it becomes pretty apparent that it has not.

Indeed, there was no indication that the state of the art had advanced this far. But it is a lot closer than you might imagine, as I discovered last week at a meeting in Greenland supported by the Kavli Foundation, entitled “The merging of bio and nano – towards cyborg cells.” Sadly, the details are currently under embargo – so watch this space.

However, the ETC Group was also exercised about the fact that Venter Institute scientists had applied for a patent on what it claimed were the set of essential genes needed to make a minimal organism – or as the application puts it, “for replication of a free-living organism in a rich bacterial culture medium.” These are a subset of the genes possessed by the microbe Mycoplasma genitalium, which has a total of just 485 genes that encode proteins.

If the patent were granted, anyone wanting to design an organism from the minimal subset of 381 genes identified by Venter’s team would need to apply for a license. “These monopoly claims signal the start of a high-stakes commercial race to synthesize and privatize synthetic life forms”, claimed ETC’s Jim Thomas. “Will Venter’s company become the ‘Microbesoft’ of synthetic biology?”

Now, that’s a better question (if rather hyperbolically posed). I’m told that this patent application has little chance of success, but it does raise an important issue. Patenting of genes has of course been a controversial matter for many years, but the advent of synthetic biology – of which a major strand involves redesigning living organisms by reconfiguring their genetic wiring – takes the debate to a new level.

“Synthetic biology presents a particularly revealing example of a difficulty that the law has frequently faced over the last 30 years – the assimilation of a new technology into the conceptual limits posed by existing intellectual property rights”, say Arti Rai and James Boyle, professors of law at Duke University in North Carolina, in a recent article in the journal PLoS Biology [1]. “There is reason to fear that tendencies in the way that US law has handled software on the one hand and biotechnology on the other could come together in a ‘perfect storm’ that would impede the potential of the technology.”

What is new here is that genes are used in a genuine ‘invention’ mode, to make devices. Researchers have organized ‘cassettes’ of natural genes into modules that can be inserted into microbial genomes, giving the organisms new types of behaviour. One such module acted as an oscillator, prompting regular bursts of synthesis of a fluorescent protein. When added to the bacterium E. coli, it made the cells flash on and off with light [2].
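
[A technical aside for the curious: here is a toy Python sketch of the kind of genetic oscillator reported in reference [2]. It is a simplified rendering in the spirit of that paper’s three-gene repressor ring, with illustrative parameter values of my own, not the authors’ model or code. Each gene’s protein switches off the next gene in the ring, the loop never settles down, and a fluorescent reporter driven by one of the genes would therefore flash periodically.]

# Toy three-gene repressor ring, integrated with a simple Euler scheme.
alpha, alpha0, beta, n = 200.0, 0.2, 5.0, 2.0   # illustrative values, not those of ref. [2]
dt, steps = 0.01, 30000

m = [1.0, 0.0, 0.0]   # mRNA levels of the three genes
p = [0.0, 0.0, 0.0]   # levels of the three repressor proteins
trace = []

for step in range(steps):
    new_m, new_p = m[:], p[:]
    for i in range(3):
        repressor = p[(i - 1) % 3]   # each gene is switched off by the previous gene's protein
        new_m[i] = m[i] + dt * (alpha / (1.0 + repressor ** n) + alpha0 - m[i])
        new_p[i] = p[i] + dt * beta * (m[i] - p[i])
    m, p = new_m, new_p
    if step % 300 == 0:
        trace.append(round(p[0], 1))   # protein 0 stands in for the fluorescent reporter

print(trace[len(trace) // 2:])   # the sampled level rises and falls over and over: the cell 'flashes'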

It is arguably a distortion of the notion of ‘invention’ to patent a gene that exists in nature. But if you can start to make new ‘devices’ by arranging these genes in new ways, doesn’t that qualify? And if so, how small and rudimentary a ‘part’ becomes patentable?

At the Greenland conference, Drew Endy of the Massachusetts Institute of Technology admitted that the framework for ownership and sharing of developments in synthetic biology remains wholly unresolved. He and his MIT colleagues are creating a Registry of Standard Biological Parts to be used as the elements of genetic circuitry, just like the transistors, capacitors and so forth in electronics catalogues. This registry places the parts in the public domain, which can provide some protection against attempts to patent them.

Endy helps to organize an annual competition among university students for the design of engineered organisms with new functions. One of the recent entries, from students at the Universities of Texas at Austin and California at San Francisco, was a light-sensitive version of E. coli that could grow into a photographic film [3]. Endy says that these efforts would be impossibly expensive and slow if the intellectual property rights on all the components first had to be cleared.

He compares it to a situation where, in order to write a piece of computer code, you have to apply for licensing on each command, and perhaps on certain combinations of them too.

And in synthetic biology that sort of patenting seems disturbingly easy right now. “You can take any device from the Texas Instruments TTL catalogue, put ‘genetically coded’ in front of it without reducing it to practice, and you have a good chance of getting a patent”, says Endy.

“Evidence from virtually every important industry of the twentieth century suggests that broad patents on foundational research can slow growth”, say Rai and Boyle. Bioengineer Jay Keasling of the University of California at Berkeley, who was also at the Greenland meeting, agrees that patenting has been a brake on the useful applications of biotechnology. He has been working for several years to engineer microbes to synthesize a compound called artemisinin, which is currently one of the best drugs available for fighting malaria [4]. Artemisinin is produced in tiny amounts by an Asian shrub. Extraction from this source is prohibitively expensive, making it impossible to use artemisinin to combat malaria in developing countries, where it kills 1-3 million people each year.

Keasling’s genetically engineered artemisinin could potentially be made at a fraction of the cost. But its synthesis involves orchestrating the activity of over 40 genetic components, which is a greater challenge than any faced previously in genetic engineering. He hopes that an industrial process might be up and running by 2009.

Scientists at the Venter Institute hope to make organisms that can provide cheap fuels from biomass sources, such as bacteria that digest plant matter and turn it into ethanol. When the ETC Group dismisses these efforts to use synthetic biology for addressing global problems as mere marketing strategies, they are grossly misjudging the researchers and their motives.

But might patenting pose more of a threat than twitchy pressure groups? “If you want to have a community sharing useful and good parts, 20 years of patent protection is obviously not helpful”, says Sven Panke of the ETH in Zürich, Switzerland, one of the organizers of the third Synthetic Biology conference being held there next week. “It would be very helpful if we could find a good way to reward but not impede.”

Endy points out that patenting is by no means the only way to protect intellectual property – although it is certainly one of the most costly, and so suits lawyers nicely. Copyright is another way to do it – although even that might now be too binding (thanks to the precedents set by Disney on Mickey Mouse), and it’s not obvious how it might work for synthetic biology anyway.

Tailormade contracts are another option, but Endy says they tend to be ‘leaky’. It may be that some form of novel, bespoke legal framework would work best, but that could be expensive too.

Intellectual property is prominently on the agenda at next week’s Zürich conference. But Panke says “we are going to take a look at the issue, but we will not solve it. In Europe we are just starting to appreciate the problem.”

References
1. Rai, A. & Boyle, J. PLoS Biology 5(3), e58 (2007).
2. Elowitz, M. B. & Leibler, S. Nature 403, 335–338 (2000).
3. Levskaya, A. et al. Nature 438, 441 (2005).
4. Ro, D.-K. et al. Nature 440, 940–943 (2006).

Wednesday, June 20, 2007

NATO ponders cyberwarfare
[If I were good at kidding myself, I could imagine that NATO officials read my previous muse@nature.com article on the recent cyberattacks on Estonia. In any event, they seem now to be taking seriously the question of how to view such threats within the context of acts of war. Here’s my latest piece for Nature Online News.]

Attacks on Estonian computer networks have prompted high-level discussion.

Recent attacks on the electronic information networks of Estonia have forced NATO to consider the question of whether this form of cyberattack could ever be construed as an act of war.

The attacks on Estonia happened in April in response to the Baltic country’s decision to move a Soviet-era war memorial from the centre of its capital city Tallinn. This was interpreted by some as a snub to the country’s Soviet past and to its ethnic Russian population. The Estonian government claimed that many of the cyberattacks on government and corporate web sites, which were forced to shut down after being swamped by traffic, could be traced to Russian computers.

The Russian government denied any involvement. But NATO spokesperson James Appathurai says of the attacks that “they were coordinated; they were focused, [and] they had clear national security and economic implications for Estonia.”

Estonia is one of the most ‘wired’ countries in Europe, and renowned for its expertise in information technology. Earlier this year it conducted its national elections electronically.

Last week, NATO officials met at the alliance headquarters in Brussels to discuss how such cyberattacks should be dealt with. All 26 of the alliance members agreed that cyberdefence needs to be a top priority. “Urgent work is needed to enhance the ability to protect information systems of critical importance to the Alliance against cyberattacks”, said Appathurai.

The Estonian experience seems to have sounded a wake-up call: previous NATO statements on cyberdefence have amounted to little more than identifying the potential risk. But the officials may now have to wrestle with the problem of how such an attack, if state-sponsored, should be viewed within the framework of international law on warfare.

Irving Lachow, a specialist on information warfare at the National Defense University in Washington, DC, says that, in the current situation, it is unclear whether cyberattack could be interpreted as an act of war.

“My intuition tells me that cyberwarfare is fundamentally different in nature”, he says. “Traditional warfare is predicated on the physical destruction of objects, including human beings. Cyberwarfare is based on the manipulation of digital objects. Of course, cyberattacks can cause physical harm to people, but they must do so through secondary effects.”

But he adds that “things get trickier when one looks at strategic effects. It is quite possible for cyber attacks to impact military operations in a way that is comparable to physical attacks. If you want to shut down an air defense site, it may not matter whether you bomb it or hack its systems as long as you achieve the same result. Thus it is quite conceivable that a cyberattack could be interpreted as an act of war – it depends on the particulars.”

Clearly, then, NATO will have plenty to talk about. “I think it’s great that NATO is focused on this issue”, says Lachow. “Going through the policy development process will be a useful exercise. Hopefully it will produce guidelines that will help with future incidents.”

Thursday, June 07, 2007


Is this Chaucer’s astrolabe?

[This is the pre-edited version of my latest article for Nature’s online news. The original paper is packed with interesting stuff, and makes me want to know a lot more about Chaucer.]

Several astronomical instruments have been misattributed to the medieval English writer

Want to see the astrolabe used for astronomical calculations by Geoffrey Chaucer himself? You’ll be lucky, says Catherine Eagleton, a curator at the British Museum in London.

In a paper soon to be published in the journal Studies in History and Philosophy of Science [doi:10.1016/j.shpsa.2007.03.006], she suggests that the several astrolabes that have been alleged as ‘Chaucer’s own’ are probably not that at all.

It’s more likely, she says, that these are instruments made after Chaucer’s death according to the design the English scholar set out in his Treatise on the Astrolabe. Such was Chaucer’s reputation in the centuries after his death that instrument-makers may have taken the drawings in his book as a blueprint.

Eagleton thinks that the claims are therefore back to front: the instruments alleged to be Chaucer’s own were not the ones he used for his drawings, but rather, the drawings supplied the design for the instruments.

Born around 1343, Chaucer is famous now for his literary work known as The Canterbury Tales. But he was a man of many interests, including the sciences of his age: the Tales demonstrate a deep knowledge of astronomy, astrology and alchemy, for example.

He was also a courtier and possibly acted as a spy for England’s King Richard II. “He was an intellectual omnivore”, says Eagleton. “There’s no record that he had a formal university education, but he clearly knows the texts that academics were reading.”

There are several ‘Chaucerian astrolabes’ that have characteristic features depicted in Chaucer’s treatise. Sometimes this alone has been used to date the instruments to the fourteenth century and to link them more or less strongly to Chaucer himself.

An astrolabe is an instrument shaped rather like a pocket watch, with movable dials and pointers that enable the user to calculate the positions of the stars and the Sun at a particular place and time: a kind of early astronomical computer. It is simultaneously a device for timekeeping, determining latitude, and making astrological forecasts.
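
[For the curious, here is a rough Python sketch of one calculation an astrolabe mechanizes: the altitude of the Sun for a given latitude, date and hour, from the standard spherical-astronomy relation. The crude formula used for the solar declination is my own shortcut for illustration, and none of this comes from Eagleton’s paper.]

import math

def sun_altitude(latitude_deg, day_of_year, hour_local):
    # Rough approximation of the solar declination through the year.
    decl = 23.44 * math.sin(math.radians(360.0 * (day_of_year - 81) / 365.0))
    hour_angle = 15.0 * (hour_local - 12.0)   # degrees away from local solar noon
    lat, dec, ha = (math.radians(x) for x in (latitude_deg, decl, hour_angle))
    # Standard spherical-astronomy relation that the instrument embodies mechanically:
    sin_alt = math.sin(lat) * math.sin(dec) + math.cos(lat) * math.cos(dec) * math.cos(ha)
    return math.degrees(math.asin(sin_alt))

# e.g. London (latitude 51.5 N) at 3 pm solar time around midsummer's day
print(round(sun_altitude(51.5, 172, 15.0), 1))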

The astrolabe may have been invented by the ancient Greeks or Indians, but Islamic scientists made its construction a fine art. The designs shown in Chaucer’s books have some distinctive features – in particular, capital letters marking the 24 hours of the day, and a symbol of a dog to represent the Dog Star.

Eagleton says that these correspondences have led some collectors and dealers to claim that a particular instrument could be the very one Chaucer held in his hand. “There are probably four or five of these around”, she says, “but no one needs five astrolabes.”

“There’s a real tendency to link any fourteenth-century instrument to him”, she says, adding that museum curators are usually more careful. The British Museum doesn’t make such strong claims for its own ‘Chaucerian astrolabes’, for example. (The one shown above is called the ‘Chaucer astrolabe’, and is unusual in being inscribed with a pre-Chaucerian date of 1326 – but the museum is suitably cautious about what it infers from that.)

But Eagleton says that an instrument held at Merton College in Oxford University is generally just called ‘Chaucer’s astrolabe’, with the implication that it was his.

She says that none of the ‘Chaucerian astrolabes’ can in fact be definitively dated to the fourteenth century, and that all four of those she studied closely, including another Chaucerian model at the British Museum and one at the Museum of the History of Science in Oxford called the Painswick astrolabe, have features that suggest they were made after Chaucer’s treatise was written. For example, the brackets holding the rings of these two astrolabes have unusual designs that could be attempts to copy the awkward drawing of a bracket in Chaucer’s text, which merges views from different angles. The treatise, in other words, came before the instruments.

“It is extremely unlikely that any of the surviving instruments were Chaucer’s own astrolabe”, she concludes.

So why have others thought they were? “There is this weird celebrity angle, where people get a bit carried away”, she says. “It is always tempting to attach an object to a famous name – it’s a very human tendency, which lets us tell stories about them. But it winds me up when it’s done on the basis of virtually no evidence.”

This isn’t a new trend, however. Chaucer was already a celebrity by the sixteenth century, so that a whole slew of texts and objects became attributed to him. This was common for anyone who became renowned for their scholarship in the Middle Ages.

Of course, whether or not an astrolabe was ‘Chaucer’s own’ would be likely to affect the price it might fetch. “This association with Chaucer probably boosts the value”, says Eagleton. “I might be making myself unpopular with dealers and collectors.”

Monday, June 04, 2007


Tendentious tilings

[This is my Materials Witness column for the July issue of Nature Materials]

Quasicrystal enthusiasts may have been baffled by a rather cryptic spate of comments and clarifications following in the wake of a recent article claiming that medieval Islamic artists had the tools needed to construct quasicrystalline patterns. That suggestion was made by Peter Lu at Harvard University and Paul Steinhardt at Princeton (Science 315, 1106; 2007). [See my previous post on 23 February 2007] But in a news article in the same issue, staff writer John Bohannon explained that these claims had already caused controversy, being allegedly anticipated in the work of crystallographer Emil Makovicky at the University of Copenhagen (Science 315, 1066; 2007).

The central thesis of Lu and Steinhardt is that Islamic artists used a series of tile shapes, which they call girih tiles, to construct their complex patterns. They can be used to make patterns of interlocking pentagons and decagons with the ‘forbidden’ symmetries characteristic of quasicrystalline metal alloys, in which these apparent symmetries, evident in diffraction patterns, are permitted by a lack of true periodicity.

Although nearly all of the designs evident on Islamic buildings of this time are periodic, Lu and Steinhardt found that those on a fifteenth-century shrine in modern-day Iran can be mapped almost perfectly onto another tiling scheme, devised by mathematician Roger Penrose, which does generate true quasicrystals.
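
[An aside for readers who want to see how order without periodicity can still produce sharp diffraction peaks, which is what defines a quasicrystal: the Python sketch below builds a one-dimensional Fibonacci chain, a far simpler cousin of the Penrose tiling, and computes its diffraction intensity. It is purely illustrative and has nothing to do with the girih analysis itself.]

import numpy as np

# Build a Fibonacci chain by substitution (L -> LS, S -> L): an aperiodic
# sequence of long and short intervals, a one-dimensional cousin of the Penrose tiling.
seq = "L"
for _ in range(14):
    seq = "".join("LS" if c == "L" else "L" for c in seq)

phi = (1 + 5 ** 0.5) / 2
positions = np.cumsum([phi if c == "L" else 1.0 for c in seq])   # positions of point 'scatterers'

# Diffraction intensity |sum_j exp(i q x_j)|^2 over a grid of wavevectors q.
q = np.linspace(0.1, 20.0, 4000)
intensity = np.abs(np.exp(1j * np.outer(q, positions)).sum(axis=1)) ** 2

# Despite there being no repeating unit, the spectrum is dominated by sharp, intense peaks.
print(round(float(q[np.argmax(intensity)]), 3),
      round(float(intensity.max() / intensity.mean()), 1))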

But in 1992 Makovicky made a very similar claim for a different Islamic tomb dating from 1197. Some accused Lu and Steinhardt of citing Makovicky’s work in a way that did not make this clear. The authors, meanwhile, admitted that they were unconvinced by Makovicky’s analysis and didn’t want to get into an argument about it.

The dispute has ruffled feathers. Science subsequently published a ‘clarification’ that irons out barely perceptible wrinkles in Bohannon’s article, while Lu and Steinhardt attempted to calm the waters with a letter in which they ‘gladly acknowledge’ earlier work (Science 316, 982; 2007). It remains to be seen whether that will do the trick, for Makovicky wasn’t the only one upset by their paper. Design consultant Jay Bonner in Santa Fe has also made previous links between Islamic patterns and quasicrystals.

Most provocatively, Bonner discusses the late-fifteenth-century Topkapi architectural scroll that furnishes the key evidence for Lu and Steinhardt’s girih scheme. Bonner points out how this scroll reveals explicitly the ‘underlying polygonal sub-grid’ used to construct the pattern it depicts. He proposes that the artists commonly used such a polygonal matrix, composed of tile-like elements, and demonstrates how these can create aperiodic space-filling designs.

Bonner does not mention quasicrystals, and his use of terms such as self-similarity and even symmetry does not always fit easily with that of physicists and mathematicians. But there’s no doubting that his work deepens the ‘can of worms’ that Bohannon says Lu and Steinhardt have opened.

All this suggests that the satellite conference of the forthcoming European Crystallographic Meeting in Marrakech this August, entitled ‘The enchanting crystallography of Moroccan ornaments’, might be more stormy than enchanting – for it includes back-to-back talks by Makovicky and Bonner.

Friday, May 25, 2007

Does this mean war?
[This is my latest article for muse@nature.com]

Cyber-attacks in the Baltic raise difficult questions about the threat of state-sponsored information warfare.

Is Estonia at war? Even the country’s leaders don’t seem sure. Over the past several weeks the Baltic nation has suffered serious attacks, but no one has been killed and it isn’t even clear who the enemy is.

That’s because the attacks have taken place in cyberspace. The websites of the Estonian government and political parties, as well as its media and banks, have been paralysed by tampering. Access to the sites has now been blocked to users outside the country.

This is all part of a bigger picture in which Estonia and its neighbour Russia are locked in bitter dispute sparked by the Soviet legacy. But the situation could provoke a reappraisal of what cyber-warfare might mean for international relations.

In particular, could it ever constitute a genuine act of war? “Not a single Nato defence minister would define a cyber-attack as a clear military action at present,” says the Estonian defence minister Jaak Aaviksoo — but he seems to doubt whether things should remain that way, adding that “this matter needs to be resolved in the near future.”

The changing face of war


When the North Atlantic Treaty was drafted in 1949, cementing the military alliance of NATO, it seemed clear enough what constituted an act of war, and how to respond. “An armed attack against one or more [member states] shall be considered an attack against them all,” the treaty declared. It was hard at that time to imagine any kind of effective attack that did not involve armed force. Occupation of sovereign territory was one thing (as the Suez crisis soon showed), but no one was going to mobilize troops in response to, say, economic sanctions or verbal abuse.

Now, of course, ‘war’ is itself a debased and murky term. Nation states seem ready to declare war on anything: drugs, poverty, disease, terrorism. Co-opting military jargon for quotidian activities is an ancient habit, but by doing so with such zeal, state leaders have blurred the distinctions.

Cyber-war is, however, something else again. Terrorists had already recognized the value of striking at infrastructures rather than people, as was clear from the IRA bombings of London’s financial district in the early 1990s, before the global pervasion of cyberspace. But now that computer networks are such an integral part of most political and economic systems, the potential effects of ‘virtual attack’ are vastly greater.

And these would not necessarily be ‘victimless’ acts of aggression. Disabling health networks, communications or transport administration could easily have fatal consequences. It is not scaremongering to say that cyberwar could kill without a shot being fired. And the spirit, if not currently the letter, of the NATO treaty must surely compel it to protect against deaths caused by acts of aggression.

Access denied

The attacks on Estonian websites, triggered by the government’s decision to relocate a Soviet-era war memorial, consisted of massed, repeated requests for information that overwhelmed servers and caused sites to freeze — an effect called distributed denial of service. Estonian officials claimed that many of the requests came from computers in Russia, some of them in governmental institutions.

Russia has denied any state involvement, and so far European Union and NATO officials, while denouncing the attacks as “unacceptable” and “very serious”, have not accused the Kremlin of orchestrating the campaign.

The attack is particularly serious for Estonia because of its intense reliance on computer networks for government and business. It boasts a ‘paperless government’ and even its elections are held electronically. Indeed, information technology is one of Estonia’s principal strengths – which is why it was able to batten down the hatches so quickly in response to the attack. In late 2006, Estonia even proposed to set up a cyber-defence centre for NATO.

There is nothing very new about cyber-warfare. In 2002 NATO recognized it as a potential threat, declaring an intention to “strengthen our capabilities to defend against cyber attacks”. In the United States, the CIA, the FBI, the Secret Service and the Air Force all have their own anti-cyber-terrorism squads.

But most of the considerable attention given to cyber-attack by military and defence experts has so far focused on the threat posed by individual aggressors, from bored teenage hackers to politically motivated terrorists. This raises challenges of how to make the web secure, but does not really pose new questions for international law.

The Estonia case may change that, even if (as it seems) there was no official Russian involvement. Military attacks often now focus on the use of armaments to disable communications infrastructure, and it is hard to see how cyber-attacks are any different. The United Nations Charter declares its intention to prevent ‘acts of aggression’, but doesn’t define what those are — an intentional decision so as not to leave loopholes for aggressors, which now looks all the more shrewd.

Irving Lachow, a specialist on information warfare at the National Defense University in Washington, DC, agrees that the issue is unclear at present. “One of the challenges here is figuring out how to classify a cyber-attack”, he says. “Is it a criminal act, a terrorist act, or an act of war? It is hard to make these determinations but important because different laws apply.” He says that the European Convention on Cyber Crime probably wouldn’t apply to a state-sponsored attack, and that while there are clear UN policies regarding ‘acts of war’, it’s not clear what kind of cyberattack would qualify. “In my mind, the key issues here are intent and scope”, he says. “An act of war would try to achieve a political end through the direct use of force, via cyberspace in this case.”

And what would be the appropriate response to state-sanctioned cyber-attack? The use of military force seems excessive, and could in any case be futile. Some think that the battle will have to be joined online – but with no less a military approach than in the flesh-and-blood world. Computer security specialist Winn Schwartau has called for the creation of a ‘Fourth Force’, in addition to the army, navy, and air force, to handle cyberspace.

That would be to regard cyberspace as just another battleground. But perhaps instead this should be seen as further reason to abandon traditional notions about what warfare is, and to reconsider what, in the twenty-first century, it is now becoming.

Wednesday, May 16, 2007

There’s no such thing as a free fly
[This is the pre-edited version of my latest article for muse@nature.com]

Neuroscience can’t show us the source of free will, because it’s not a scientific concept.

Gluing a fly’s head to a wire and watching it trying to fly sounds more like the sort of experiment a naughty schoolboy would conduct than one that turns out to have philosophical and legal implications.

But that’s the way it is for the work reported this week by a team of neurobiologists in the online journal PLoS One [1]. They say their study of the ‘flight’ of a tethered fly reveals that the fly’s brain has the ability to be spontaneous – to make decisions that aren’t predictable responses to environmental stimuli.

The researchers think this might be what underpins the notorious cussedness of laboratory animals, wryly satirized in the so-called Harvard Law of Animal Behavior: “Under carefully controlled experimental circumstances, an animal will behave as it damned well pleases.”

But in humans, this apparently volitional behaviour seems all but indistinguishable from what we have traditionally called free will. In other words, the work seems to imply that even fruit-fly brains are hard-wired to display something we might as well denote free will.

The flies are tethered inside a blank white cylinder, devoid of all environmental clues about which direction to take. If the fly is nothing but an automaton impelled hither and thither by external inputs, then it would in this circumstance be expected to fly in purely random directions. Although the wire stops the fly from actually moving, its attempts to do so create a measurable tug on the wire that reveals its ‘intentions’.

Björn Brembs of the Free University of Berlin and his colleagues found that these efforts aren’t random. Instead, they reveal a pattern that, for an unhindered fly, would alternate localized buzzing around with occasional big hops.

This kind of behaviour has been seen in other animals (and in humans too), where it has been interpreted as a good foraging strategy: if a close search of one place doesn’t bring results, you’re better off moving far afield and starting afresh.
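
[A toy illustration of the kind of movement pattern being described, and emphatically not the researchers’ own analysis: the Python sketch below compares a random walker whose step lengths follow a heavy-tailed distribution with one taking normally distributed steps. The heavy tail produces just the pattern described above, mostly local buzzing about punctuated by occasional long hops that carry the walker much further afield.]

import math, random

random.seed(1)

def heavy_tailed_step(mu=2.0, x_min=1.0):
    # Power-law step length: mostly short steps, occasionally a very long one.
    u = 1.0 - random.random()          # uniform in (0, 1]
    return x_min * u ** (-1.0 / (mu - 1.0))

def walk(step_length, n=10000, hop=10.0):
    x = y = 0.0
    big_hops = 0
    for _ in range(n):
        r = step_length()
        theta = random.uniform(0.0, 2.0 * math.pi)
        x += r * math.cos(theta)
        y += r * math.sin(theta)
        if r > hop:
            big_hops += 1              # count the rare long relocations
    return round(math.hypot(x, y), 1), big_hops

print("heavy-tailed:", walk(heavy_tailed_step))                     # local search plus long hops
print("gaussian:    ", walk(lambda: abs(random.gauss(0.0, 1.5))))   # no long hops at all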

But this was thought to rely on feedback from the environment, and not to be intrinsic to the animals’ brains. Brembs and colleagues say that in contrast there exists a ‘spontaneity generator’ in the flies’ brains which does not depend on external information in a determinate way.

Is that really ‘free will’, though? No one is suggesting that the flies are making conscious choices; the idea is simply that this neural ‘spontaneity circuit’ is useful in evolutionary terms, and so has become hard-wired into the brain.

But it could, the researchers say, be a kind of precursor to the mental wiring of humans that would enable us to evade the prompts of our own environmentally conditioned responses and ‘make up our own minds’ – to exercise what is commonly interpreted as free will. “If such circuits exist in flies, it would be unlikely that they do not exist in humans, as this would entail that humans were more robot-like than flies”, Brembs says.

These neural circuits mean that you can know everything about an organism’s genes and environment yet still be unable to anticipate its caprices. If that’s so – and the researchers now intend to search for the neural machinery involved – this adds a new twist to the current debate that neuroscience has provoked about human free will.

Some neuroscientists have argued that, as we become increasingly informed about the way our behaviour is conditioned by the physical and chemical makeup of our brains, the notion of legal responsibility will be eroded. Criminals will be able to argue their lack of culpability on the grounds that “my brain made me do it”.

While right-wing and libertarian groups fulminate at the idea that this will hinder the law’s ability to punish and will strip the backbone from the penal system, some neuroscientists feel that it will merely change its rationale, making it concerned less with retribution and more with utilitarian prevention and social welfare. According to psychologists Joshua Greene and Jonathan Cohen of Princeton University, “Neuroscience will challenge and ultimately reshape our intuitive sense(s) of justice” [2].

If neuroscience indeed threatens free will, some of the concerns of the traditionalists are understandable. It’s hard to see how notions of morality could survive a purely deterministic view of human nature, in which our actions are simply automatic responses to external stimuli and free will is an illusion spun from our ignorance about cause and effect. And it is a short step from such determinism to the pre-emptive totalitarianism depicted in the movie Minority Report, where people are arrested for crimes they have yet to commit.

But while this ‘hard’ mechanical determinism may have made sense to political philosophers of the Enlightenment – it was the basis of Thomas Hobbes’ theory of government, for example – it is merely silly today, and for a number of reasons.

First, it places its trust in a linear, Cartesian mechanics of cogs and levers that clearly has nothing to do with the way the brain works. If nothing else, the results of Brembs and colleagues show that even the fly’s brain is highly nonlinear, like the weather system, and not susceptible to precise prediction.
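
[A toy example of what that means, with no pretence of biological realism: in the logistic map, a textbook nonlinear system, two trajectories started a millionth apart soon bear no resemblance to one another, so precise prediction fails even though the rule is fully deterministic.]

# Two runs of the deterministic logistic map x -> r*x*(1 - x), started
# a millionth apart, soon disagree completely (r = 3.9 is in the chaotic regime).
r = 3.9
a, b = 0.400000, 0.400001
for step in range(1, 61):
    a, b = r * a * (1.0 - a), r * b * (1.0 - b)
    if step % 15 == 0:
        print(step, round(a, 4), round(b, 4), round(abs(a - b), 4))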

Second, this discussion of ‘free will’ repeats the old canard, apparently still dear to the hearts of many neuroscientists, evolutionary biologists and psychologists, that our behaviour is governed by the way our minds work in isolation. But as neuroscientists Michael Gazzaniga and Megan Steven have pointed out [3], we act in a social context. “Responsibility is a social construct and exists in the rules of society”, they say. “It does not exist in the neuronal structures of the brain”.

This should be trivially obvious, but is routinely overlooked. Other things being equal, violent crime is frequently greater where there is socioeconomic deprivation. This doesn’t make it a valid defence to say ‘society made me do it’, but it shows that the interactions between environment, neurology and behaviour are complex and ill-served by either neurological determinism or a libertarian insistence on untrammelled ‘free will’ as the basis of responsibility and penal law.

The fact is that ‘free will’ is (like life and love) one of those culturally useful notions that turn into shackles when we try to make them ‘scientific’. That’s why it is unhelpful to imply that the brains of flies or humans might contain a ‘free will’ module simply because they have a capacity to scramble the link between cause and effect. Free will is a concept for poets and novelists, and, if it keeps them happy, for philosophers and moralists. In science and politics, it deserves no place.

References
1. Maye, A. et al. PLoS ONE 2(5), e443 (2007).
2. Greene, J. & Cohen, J. Phil. Trans. R. Soc. Lond. B 359, 1775–1785 (2004).
3. Gazzaniga, M. S. & Steven, M. S. Sci. Am. MIND April 2005.


Philosophers, scientists and writers on free will

“The will cannot be called a free cause, but only necessary…. Things could have been produced by God in no other manner and in no other order than that in which they have been produced.”
Baruch Spinoza, Ethics

“Whatever concept one may hold, from a metaphysical point of view, concerning the freedom of the will, certainly its appearances, which are human actions, like every other natural event are determined by universal laws.”
Immanuel Kant, On History

“As a matter of fact, if ever there shall be discovered a formula which shall exactly express our wills and whims; if there ever shall be discovered a formula which shall make it absolutely clear what those wills depend upon, and what laws they are governed by, and what means of diffusion they possess, and what tendencies they follow under given circumstances; if ever there shall be discovered a formula which shall be mathematical in its precision, well, gentlemen, whenever such a formula shall be found, man will have ceased to have a will of his own—he will have ceased even to exist.”
Fyodor Dostoevsky, Notes from the Underground

“Free will is for history only an expression connoting what we do not know about the laws of human life.”
Leo Tolstoy, War and Peace

“There once was a man who said ‘Damn!’
It is borne in upon me I am
An engine that moves
In predestinate grooves
I’m not even a bus, I’m a tram.”
Maurice Evan Hare, 1905

“We cannot prove… that human behaviour… is fully determined, but the position becomes more plausible as facts accumulate.”
B. F. Skinner, About Behaviorism

“Free will, as we ordinarily understand it, is an illusion. However, it does not follow… that there is no legitimate place for responsibility.”
Joshua Greene & Jonathan Cohen, 2004

Monday, May 14, 2007

Should we get engaged?
[This is the pre-edited version of my Crucible column for the June issue of Chemistry World.]

In 2015 the BBC broadcast a documentary called ‘Whatever happened to nanotechnology?’ Remember the radical predictions being made in 2006, it asked, such as curing blindness? Well, things didn’t turn out to be so simple. On the other hand, nor have the forecasts of nano-doom come to pass. Instead, there’s simply been plenty of solid, incremental science that has laid the groundwork for a brighter technological future.

This scenario, imagined in a European Union working paper, “Strategy for Communication Outreach in Nanotechnology”, sounds a little unlikely, not least because television is increasingly less interested in stories with such anodyne conclusions. But this, the paper suggests, is the optimistic outcome: one where nanotech has not been derailed by inept regulation, industrial mishaps and public disenchantment.

The object of the exercise is to tell the European Commission how to promote “appropriate communication in nanotechnology.” The present working paper explains that “all citizens and stakeholders, in Europe and beyond, are welcome to express comments, opinions and suggestions by end June 2007”, which will inform a final publication. So there’s still time if you feel so inclined.

One of the striking things about this paper is that it implies one now has to work frightfully hard, using anything from theatre to food, to bridge the divide between science and the public – and all, it seems, so that the public doesn’t pull the plug through distrust. If that’s really so, science is in deep trouble. But it may be in the marketplace, not the research lab, that public perception really holds sway.

What, however, is “appropriate communication” of technology?

Previous EU documents have warned that nanotechnology is poorly understood and difficult to grasp, and that its benefits are tempered by risks that need to be openly stated and investigated. “Without a serious communication effort,” one report suggests, “nanotechnology innovations could face an unjust negative public reception. An effective two-way dialogue is indispensable, whereby the general public’s views are taken into account and may be seen to influence [policy] decisions”.

This is, of course, the current mantra of science communication: engagement, not education. The EU paper notes that today’s public is “more sceptical and less deferential”, and that therefore “instead of the one-way, top down process of seeking to increase people’s understanding of science, a two-way iterating dialogue must be addressed, where those seeking to communicate the wonders of their science also listen to the perceptions, concerns and expectations of society.”

And so audiences are no longer lectured by a professor but discuss the issues with panels that include representatives from Greenpeace. There’s much that is productive and progressive in that. But in his bracingly polemical book The March of Unreason (OUP, 2005), Lord Dick Taverne challenges its value and points out that ‘democracy’ is a misplaced ideal in science. “Why should science be singled out as needing more democratic control when other activities, which could be regarded as equally ‘elitist’ and dependent on special expertise, are left alone?” he asks. Why not ‘democratic art’?

Taverne’s critique is spot-on. There now seems to be no better sport than knocking ‘experts’ who occasionally get things wrong, eroding the sense that we should recognize expertise at all. This habitual skepticism isn’t always the result of poor education – or rather, it is often the result of an extremely expensive but narrow one. The deference of yore often led to professional arrogance; but today’s universal skepticism makes arrogance everyone’s prerogative.

Another danger with ‘engagement’ is that it tends to provide platforms for a narrow spectrum of voices, especially those with axes to grind. The debate over climate change has highlighted the problems of insisting on ‘balance’ at the expense of knowledge or honesty. Nanotechnology, however, has been one area where ‘public engagement’ has often been handled rather well. A three-year UK project called Small Talk hosted effective public debates and discussions on nanotechnology while gathering valuable information about what people really knew and believed. Its conclusions were rather heartening. People’s attitudes to nanotechnology are not significantly different from their attitudes to any new technology, and are generally positive. People are less concerned about specific risks than about the regulatory structures that govern the technology. The public perception of risk, however, continues to be a pitfall: many now think that a ‘safe’ technology is one for which all risks have been identified and eliminated. But as Taverne points out, such a zero-risk society “would be a paradise only for lawyers.”

The EU’s project is timely, however, for the UK’s Council for Science and Technology, an independent advisory body to the government, has just pronounced in rather damning terms on the government’s efforts to ‘engage’ with the social and ethical aspects of nanotech. Their report looks at progress on this issue since the publication of a nanotech review prepared for the government in 2004 by the Royal Society and the Royal Academy of Engineering. “The report led to the UK being seen as a world leader in its engagement with nanotechnologies”, it says. “However, today the UK is losing that leading position.”

It attributes this mainly to a failure to institute a coherent approach to the study of nano-toxicology, the main immediate hazard highlighted by the 2004 review. “In the past five years, only £3m was spent on toxicology and the health and environmental impacts of nanomaterials”, it says, and “there is as yet little conclusive data concerning the long-term environmental fate and toxicity of nanomaterials.”

Mark Welland, one of the expert advisers on this report, confirms that view. “The 2004 recommendations have been picked up internationally”, he says, “but the UK government has done almost nothing towards toxicology.” Like others, he fears that inaction could open the doors to a backlash like that against genetically modified organisms or the MMR vaccine.

If that’s so, maybe we do need good ideas about how to communicate. But that’s only part of an equation that must also include responsible industrial practice, sound regulation, broad vision and, not least, good research.

Prospects for the LHC
[This is my pre-edited Lab Report column for the June issue of Prospect.]

Most scientific instruments are doors to the unknown; that’s been clear ever since Robert Hooke made exquisite drawings of what he saw through his microscope. They are invented not to answer specific questions – what does a flea look like up close? – but for open-ended study of a wide range of problems. This is as true of the mercury thermometer as it is of the Hubble Space Telescope.

But the Large Hadron Collider (LHC), under construction at the European centre for high-energy physics (CERN) in Geneva, is different. Particle physicists rightly argue that, because it will smash subatomic particles into one another with greater energy than ever before, it will open a window on a whole new swathe of reality. But the only use of the LHC that anyone ever hears or cares about is the search for the Higgs boson.

This is pretty much the last missing piece of the so-called Standard Model of fundamental physics: the suite of particles and their interactions that explains all known events in the subatomic world. The Higgs boson is the particle associated with the Higgs field, which pervades all space and, by imposing a ‘drag’ on other particles, gives them their mass. (In the Standard Model all the fields that create forces have associated particles: electromagnetic fields have photons, the strong nuclear force has gluons.)

To make a Higgs boson, you need to release more energy in a particle collision than has so far been possible with existing colliders. But the Tevatron accelerator at Fermilab, near Chicago, comes close, and could conceivably still glimpse the Higgs before it is shut down in 2009. No one wants to admit that this is a race, but it plainly is – and Fermilab would love to spot the Higgs first.

Which makes it all the more awkward that components supplied by Fermilab for the LHC have proven to be faulty – most recently, a huge magnet that shifted and ruptured a pipe. Fermilab admits to embarrassment at the ‘oversight’, but it has set the rumour mills grinding. For this and (primarily) other reasons, the LHC now seems unlikely to make its first test run at the end of this year. Among other things, it needs to be refrigerated to close to absolute zero, which can’t be done in a hurry.

Extravagant promises can only be sustained for so long without delivery, and so the delays could test public sympathy, which has so far been very indulgent of the LHC. As a multi-billion-dollar instrument that has only one really big question in sight, the supercollider is already in a tight spot: everyone thinks they know the answer already (the Higgs exists), and that may yet be confirmed before the LHC comes online. But this is a universal problem for high-energy physics today, where all the major remaining questions demand unearthly energies. There’s a chance that the LHC may turn up some surprises – evidence of extra dimensions, say, or of particles that lie outside the Standard Model. But the immense and expensive technical challenges involved in exploring every theoretical wrinkle mean that new ideas cannot be broached idly. And arguably science does not flourish where the agenda must be set by consensus and there is no room left for play.

*****

The idea that the UK has lost a ‘world lead’ in nanotechnology, suggested recently in the Financial Times, raised the question of when the UK ever had it. The headline was sparked by a report released in March by the Council for Science and Technology, a government advisory body. But Mark Welland, a nanotech specialist at Cambridge University and one of the report’s expert contributors, says that wires got crossed: the report’s criticisms were concerned primarily with the social, environmental and ethical aspects of nanotech. These were explored in depth in an earlier review of nanotechnology, the science of the ultrasmall, conducted by the Royal Society and the Royal Academy of Engineering and published in 2004.

That previous report highlighted the potential toxicity of nanoparticles – tiny grains of matter, which are already being used in consumer products – as one of the most pressing concerns, and recommended that the government establish and fund a coherent programme to study it. Welland says that some of those suggestions have been picked up internationally, but “nothing has happened here.” The 2004 report created an opportunity for the UK to lead the field in nano-toxicology, he says, and this is what has now been squandered.

What of the status of UK nanotech more generally? Welland agrees that it has never been impressive. “There’s no joined-up approach, and a lack of focus and cohesion between the research councils. Other European countries have much closer interaction between research and commercial exploitation. And the US and Japan have stuck their necks out a lot further. Here we have just a few pockets of stuff that’s really good.”

The same problems hamstrung the UK’s excellence in semiconductor technology in the 1970s. But there are glimmers of hope: Nokia has just set up its first nanotech research laboratory in Cambridge.

*****

As the zoo of extrasolar planets expands – well over 100 are now known – some oddballs are bound to appear. Few will be odder than HD 149026b, orbiting its star in the Hercules constellation 260 light years away. Its surface temperature of 2,050 °C is about as hot as a small star, while it is blacker than charcoal and may glow like a giant ember. Both quirks are unexplained. One possibility is that the pitch-black atmosphere absorbs every watt of starlight and then instantly re-emits it – strange, but feasible. At any rate, the picture of planetary diversity gleaned from our own solar system is starting to look distinctly parochial.

Wednesday, May 02, 2007

PS This is all wrong

So there you are: your paper is written, and you’ve got it accepted in the world’s leading physics journal, and it has something really interesting to say. You’ve done the calculations and they just don’t match the observations. What this implies is dramatic: we’re missing a crucial part of the puzzle, some new physics, namely a fifth fundamental force of nature. Wow. OK, so that’s a tentative conclusion, but it’s what the numbers suggest, and you’ve been suitably circumspect in reporting it, and the referees have given the go-ahead.

Then, with the page proofs in hand, you decide to just go back and check the observations, which need a bit of number-crunching before the quantitative result drops out. And you find that the people who reported this originally haven’t been careful enough, and their number was wrong. When you recalculate, the match with conventional theory is pretty good: there’s no need to invoke any new physics after all.

So what do you do?

I’d suggest that what you don’t do is what an author has just done: add a cryptic ‘note in proof’ and publish anyway. Cryptic in that what it doesn’t say is ‘ignore all that had gone before: my main result, as described in the abstract, is simply invalid’. Cryptic in that it refers to the revision of the observed value, but says this is in good agreement ‘with the predictions above’ – by which you mean, not the paper’s main conclusions, but the ‘predictions’ using standard theory that the paper claims are way off beam. Cryptic in that this (possibly dense) science writer had to read it several times before sensing something was badly wrong.

In fact, I’d contend that you should ideally withdraw the paper. Who gains from publishing a paper that, if reported accurately, ends with a PS admitting it is wrong?

True, this is all a little complex. For one thing, it could be a postgrad’s thesis work at stake. But no one gets denied a PhD because perfectly good theoretical work turns out to be invalidated by someone else’s previous mistake. And what does a postgrad really gain by publishing a paper making bold claims in a prominent journal that ends by admitting it is wrong?

True, the work isn’t useless – as the researcher concerned argued when (having written the story and needing only to add some quotes) I contacted him, the discrepancy identified in the study is what prompted a re-analysis of the data that brought the previous error to light. But you have a preprint written that reports the new analysis; surely you can just add to that a comment alluding to this false trail and the impetus it provided. In fact, your current paper is itself already on the preprint server – you just need to cite that. The whole world no longer needs to know.

No, this is a rum affair. I’m not sure that the journal in question really knew what it was publishing – that the ‘note added in proof’ invalidated the key finding. If it did, I’m baffled by the decision. And while I’m miffed at my wasted time, the issue has much more to do with propriety. Null results are one thing, but this is just clutter. I realize it must be terribly galling to find that your prized paper has been rendered redundant on the eve of publication. But that’s science for you.

Friday, April 20, 2007

Physicists start saying farewell to reality
Quantum mechanics just got even stranger
[This is my pre-edited story for Nature News on a paper published this week, which even this reserved Englishman must acknowledge to be deeply cool.]

There’s only one way to describe the experiment performed by physicist Anton Zeilinger and his colleagues: it’s unreal, dude.

Measuring the quantum properties of pairs of light particles (photons) pumped out by a laser has convinced Zeilinger that “we have to give up the idea of realism to a far greater extent than most physicists believe today.”

By realism, what he means is the idea that objects have specific features and properties: that a ball is red, that a book contains the works of Shakespeare, that custard tastes of vanilla.

For everyday objects like these, realism isn’t a problem. But for objects governed by the laws of quantum mechanics, such as photons or subatomic particles, it may make no sense to think of them as having well defined characteristics. Instead, what we see may depend on how we look.

Realism in this sense has been under threat ever since the advent of quantum mechanics in the early twentieth century. This seemed to show that, in the quantum world, objects are defined only fuzzily, so that all we can do is assign probabilities to their possessing particular characteristics.

Albert Einstein, one of the chief architects of quantum theory, could not believe that the world was really so indeterminate. He supposed that there was a deeper level of reality yet to be uncovered: so-called ‘hidden variables’ that specified any object’s properties precisely.

Allied to this assault on reality was the apparent prediction of what Einstein called ‘spooky action at a distance’: disturbing one particle could instantaneously determine the properties of another particle, no matter how far away it is. Such interdependent particles are said to be entangled, and this action at a distance would violate the principle of locality: the idea that only local events govern local behaviour.

In the 1960s the Northern Irish physicist John Bell showed how to put locality and realism to the test. He deduced that, if both hold, certain experimentally measurable correlations between entangled quantum particles such as photons must stay within a strict bound – the celebrated Bell inequality. The experiments were carried out in the ensuing two decades, and they showed that Bell’s inequality is violated.
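
To make concrete what those experimentally measurable correlations look like, here is a minimal sketch of the CHSH form of Bell’s inequality. It is a textbook illustration rather than the quantity measured in the Vienna experiment (which tests a Leggett-type inequality instead), and the correlation function and measurement angles below are standard examples, not anything taken from the paper.

```python
import numpy as np

# CHSH form of Bell's inequality: for any local realistic model,
# |S| = |E(a,b) - E(a,b') + E(a',b) + E(a',b')| <= 2.
# Quantum mechanics predicts E(a,b) = -cos(a - b) for two spin-1/2
# particles in the singlet state, which pushes |S| up to 2*sqrt(2).

def E(a, b):
    """Quantum-mechanical correlation for the singlet state (angles in radians)."""
    return -np.cos(a - b)

a, a_prime = 0.0, np.pi / 2            # measurement settings on one particle
b, b_prime = np.pi / 4, 3 * np.pi / 4  # settings on the other

S = E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime)
print(f"|S| = {abs(S):.3f}  (local realism requires |S| <= 2)")
```

Run as written, this prints |S| = 2.828: the quantum prediction overshoots the local-realist bound of 2, which is what the experiments duly found.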

This means that either realism or locality, or both, fails to apply in the quantum world. But which of these cases is it? That’s what Zeilinger, based at the University of Vienna, and his colleagues have set out to test [1].

They have devised another inequality, comparable to Bell’s, that should hold if quantum mechanics is non-local but ‘realistic’. “It’s known that you can save realism if you kick out locality”, Zeilinger says.

The experiment involves making pairs of entangled photons and measuring a quantum property of each of them called the polarization. But whereas the tests of Bell’s inequality measured the so-called ‘linear’ polarization – crudely, whether the photons’ electromagnetic fields oscillate in, say, the vertical or the horizontal direction – Zeilinger’s experiment looks at a different sort of polarization, called elliptical polarization, for one of the photons.

If the quantum world can be described by non-local realism, quantities derived from these polarization measurements should obey the new inequality. But Zeilinger and colleagues found that they do not.

This doesn’t rule out all possible non-local realistic models, but it does exclude an important subset of them. Specifically, it shows that if you have a group of photons all with independent polarizations, then you can’t ascribe specific polarizations to each. It’s rather like saying that in a car park it is meaningless to imagine that particular cars are blue, white or silver.

If the quantum world is not realistic in this sense, then how does it behave? Zeilinger says that some of the alternative non-realist possibilities are truly weird. For example, it may make no sense to imagine ‘counterfactual determinism’: what would happen if we’d made a different measurement. “We do this all the time in daily life”, says Zeilinger – for example, imagining what would happen if we’d tried to cross the road when that truck was coming.

Or we might need to allow the possibility of present actions affecting the past, as though choosing to read a letter or not affects what it says.

Zeilinger hopes his work will stimulate others to test such possibilities. “I’m sure our paper is not the end of the road”, he says. “But we have a little more evidence that the world is really strange.”

Reference
1. Gröblacher, S. et al. Nature 446, 871–875 (2007).

Tuesday, April 17, 2007


Tales of the expected

[This is the pre-edited version of my latest Muse article for Nature online news.]

A recent claim of water on an extrasolar planet raises broader questions about how science news is reported.

“Scientists discover just what they expected” is not, for obvious reasons, a headline you see very often. But it could serve for probably a good half of the stories reported in the public media, and would certainly have been apt for the recent reports of water on a planet outside our solar system.

The story is this: astronomer Travis Barman of the Lowell Observatory in Flagstaff, Arizona, has claimed to find a fingerprint of water vapour in the light from a Sun-like star 150 light years away as it passes through the atmosphere of the star’s planet HD 209458b [T. Barman, Astrophys. J. in press (2007); see the paper here].

The claim is tentative and may be premature. But more to the point, at face value it confirms precisely what was expected for HD 209458b. Earlier observations of this Jupiter-sized planet had failed to see signs of water – but if it were truly absent, something would be seriously wrong with our understanding of planetary formation.

The potential interest of the story is that water is widely considered by planetary scientists to be the prerequisite for life. But if it’s necessary, it is almost certainly not sufficient. There is water on most of the other planets in our solar system, as well as on several of their moons and indeed in the atmosphere of the Sun itself. But as yet there is no sign of life on any of them.

The most significant rider is that to support life as we know it, water must be in the liquid state, not ice or vapour. That may be the case on Jupiter’s moons Europa and Callisto, as it surely once was (and may still be, sporadically) on Mars. But in fact we don’t even know for sure that water is a necessary condition for life: there is no reason to think, apart from our unique experience of terrestrial life, that other liquid solvents could not sustain living systems.

All of this makes Barman’s discovery – which he reported with such impeccable restraint that it could easily have gone unnoticed – intriguing, but very modestly so. Yet it has been presented as revelatory. “There may be water beyond our solar system after all”, exclaimed the New York Times. “First sign of water found on an alien world”, said New Scientist (nice to know that, in defiance of interplanetary xenophobia, Martians are no longer aliens).

As science writers are dismayingly prone to saying sniffily “oh, we knew that already”, I’m hesitant to make too much of this. It’s tricky to maintain a perspective on science stories without killing their excitement. But the plain fact is that there is water in the universe almost everywhere we look – certainly, it is a major component of the vast molecular clouds from which stars and planets condense.

And so it should be, given that its component atoms hydrogen and oxygen are respectively the most abundant and the third most common in the cosmos. Relatively speaking, ours is a ‘wet’ universe (though yes, liquid water is perhaps rather rare).

The truth is that scientists work awfully hard to verify what lazier types might be happy to take as proven. Few doubted that Arthur Eddington would see, in his observations of a solar eclipse in 1919, the bending of light predicted by Einstein’s theory of general relativity. But it would seem churlish in the extreme to begrudge the headlines that discovery generated.

Similarly, it would be unfair to suggest that we should greet the inevitable sighting of the Higgs boson (the so-called ‘God’ particle thought to give other particles their mass) with a shrug of the shoulders, once it turns up at the billion-dollar particle accelerator constructed at CERN in Geneva.

These painstaking experiments are conducted not so that their ‘success’ produces startling front-page news but because they test how well, or how poorly, we understand the universe. Both relativity and quantum mechanics emerged partly out of a failure to find the expected.

In the end, the interest of science news so often resides not in discovery but in context: not in what the experiment found, but in why we looked. Barman’s result, if true, tells us nothing we did not know before, except that we did not know it. Which is why it is still worth knowing.

Wednesday, April 04, 2007


Violin makers miss the best cuts
[This is the pre-edited version of my latest article for Nature’s online news. For more on the subject, I recommend Ulrike Wegst’s article “Wood for Sound” in the American Journal of Botany 93, 1439 (2006).]

Traditional techniques fail to select wood for its sound


Despite their reputation as master craftspeople, violin makers don’t choose the best materials. According to research by a team based in Austria, they tend to pick their wood more for its looks than for its acoustic qualities.

Christoph Buksnowitz of the University of Natural Resources and Applied Life Sciences in Vienna and his coworkers tested wood selected by renowned violin makers (luthiers) to see how beneficial it was to the violin’s sound. They found that the luthiers were generally unable to identify the woods that performed best in laboratory acoustic tests [C. Buksnowitz et al. J. Acoust. Soc. Am. 121, 2384 - 2395 (2007)].

That was admittedly a tall order, since the luthiers had to make their selections just by visual and tactile inspection, without measuring instruments. But this is normal practice in the trade: the instrument-makers tend to depend on rules of thumb and subjective impressions when deciding which pieces of wood to use. “Some violin makers develop their instruments in very high-tech ways, but most seem to go by design criteria optimized over centuries of trial and error”, says materials scientist Ulrike Wegst of the Max Planck Institute for Metals Research in Stuttgart, Germany.

Selecting wood for musical instruments has been made a fine art over the centuries. For a violin, different types of wood are traditionally employed for the different parts of the instrument: ebony and rosewood for the fingerboard, maple for the bridge, and spruce for the soundboard of the body. The latter amplifies the resonance of the strings, and accounts for much of an instrument’s tonal qualities.

Buksnowitz and colleagues selected 84 samples of instrument-quality Norway spruce, one of the favourite woods for violin soundboards. They presented these to 14 top Austrian violin makers in the form of boards measuring 40 by 15 cm. The luthiers were asked to grade the woods according to acoustics, appearance, and overall suitability for making violins.

While the luthiers had to rely on their senses and experience, using traditional techniques such as tapping the woods to assess their sound, the researchers then conducted detailed lab tests of the strength, hardness and acoustic properties.
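
As a rough indication of the kind of acoustic figures of merit that such lab measurements allow (an illustrative sketch, not the actual test battery used in the study), a sample’s stiffness and density already yield two quantities that matter for a soundboard: the speed of sound in the wood, and the so-called radiation coefficient, which gauges how readily a board turns vibration into sound. The numbers below are typical textbook values for Norway spruce, not data from Buksnowitz’s samples.

```python
import math

# Standard acoustic figures of merit for a tonewood sample:
#   speed of sound along the grain  c = sqrt(E / rho)
#   sound radiation coefficient     R = c / rho = sqrt(E / rho**3)
# where E is the Young's modulus and rho the density.

def acoustic_figures(E_pa, rho_kg_m3):
    """Return (speed of sound in m/s, radiation coefficient in m^4 kg^-1 s^-1)."""
    c = math.sqrt(E_pa / rho_kg_m3)
    R = c / rho_kg_m3
    return c, R

# Illustrative values for a spruce soundboard blank (assumed, not measured):
E = 12e9      # Young's modulus along the grain, ~12 GPa
rho = 420.0   # density, kg per cubic metre

c, R = acoustic_figures(E, rho)
print(f"speed of sound ~ {c:.0f} m/s, radiation coefficient ~ {R:.1f}")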

Comparing the professional and scientific ratings, the researchers found that there was no relation between the gradings of the instrument-makers and the properties that would give the wood a good sound. Even testing the wood’s acoustics by knocking is a poor guide when the wood is still in the form of a plank.

The assessments, they concluded, were being made primarily on visual characteristics such as colour and grain. That’s not as superficial as it might seem; some important properties, such as density, do correlate with features that can be seen by eye. “Visual qualities can tell us a lot about the performance of a piece of wood”, says Buksnowitz.

He stresses that the inability of violin makers to identify the best wood shouldn’t be seen as a sign of incompetence. “I admire their handiwork and have an honest respect for their skills”, he says. “It is still the talent of the violin maker that creates a master’s violin.”

Indeed, it is a testament to these skills that a luthier can make a first-class instrument from less than perfect wood. They can shape and pare it to meet the customer’s needs, fitting the intrinsic properties of the wood to the taste of the musician. “There are instrument-makers who would say they can build a good instrument from any piece of wood”, Buksnowitz says. “The experienced maker can allow for imperfections in the material and compensate for them”, Wegst agrees.

But Buksnowitz points out that the most highly skilled makers, such as Amati and Stradivari, are not limited by their technique, and so their only hope of making even better instruments is to find better wood.

At the other end of the scale, when violins are mass-produced and little skill enters the process at all, then again the wood could be the determining factor in how good the instrument sounds.

Instrument-makers themselves recognize that there is no general consensus on what is meant by ‘quality’. They agree that they need a more objective way of assessing this, the researchers say. “We want to cooperate with craftsmen to identify the driving factors behind this vague term”, says Buksnowitz.

Wegst agrees that this would be valuable. “As in wine-making, a more systematic approach could make instrument-making more predictable”, she says.

Thursday, March 29, 2007

Prospect - a response

David Whitehouse, once a science reporter for the BBC, has responded to my denunciation of ‘climate sceptics’ in Prospect. Here are his comments – I don’t find them very compelling, but you can make up your own mind:

“Philip Ball veers into inconsistent personal opinion in the global warming debate. He says the latest IPCC report comes as close to blaming humans for global warming as scientists are likely to. True, its summary replaced “likely to be caused by humans” with “very likely”, but that is hardly a great stride towards certainty, especially when deeper in the report it says that it is only “likely” that current global temperatures are the highest they’ve been in the past 1,300 years.
As for “sceptics” saying false and silly things, Ball should look to the alarmist reports about global warming so common in the media. These “climate extremists” are obviously saying false, silly things, as even scientists who adhere to the consensus have begun to notice. And it’s data, not economics, that will be the future battleground. The current period of warming began in 1975, yet the very data the IPCC uses shows that since 2002 there has been no upward trend. If this trend does not re-establish itself with force, and soon, we will shortly be able to judge who has been silliest.”

The first point kind of defeats itself: by implying that the IPCC’s move towards a stronger statement is rather modest, Whitehouse illustrates my point, which is that the IPCC is (rightly) inherently conservative (see my last entry below) and so this is about as committed a position as we could expect to get. If they had jumped ahead of the science and claimed 100% certainty, you can guess who’d be the first to criticize them for it.

Then Whitehouse points out that climate extremists say silly and false things too. Indeed they do. The Royal Society, which Whitehouse has falsely accused of trying to suppress research that casts doubt on anthropogenic climate change, has spent a lot of time and energy criticizing groups who do that, such as Greenpeace. I condemn climate alarmism too. Yes, the Independent has been guilty of that – and is balanced out by the scepticism of the right-wing press, such as the Daily Telegraph. But Whitehouse’s point seems to be essentially that the sceptics’ false and silly statements are justified by those of their opponents. I suspect that philosophers have got a name for this piece of sophistry. Personally, I would rather that everyone tried harder not to say false and silly things.

I don’t know whether Whitehouse’s next comment, about the ‘current warming’ beginning in 1975, is false and/or silly, or just misinformed. But if it’s the latter, that would be surprising for a science journalist. There was a warming trend throughout the 20th century, which was interrupted between 1940 and 1970. It has been well established that this interruption is reproduced in climate models that take account of the changes in atmospheric aerosol levels (caused by human activities): aerosols, which have a cooling influence, temporarily masked the warming. So the warming due to CO2 was continuous for at least a century, but was modified for part of that time by aerosols. The trend since 1975 was thus not the start of anything new. This is not obscure knowledge, and one can only wonder why sceptics continue to suppress it.

As for the comment that the warming has levelled off since 2002: well, the sceptics make a huge deal of how variable the climate system is when they want to imply that the current warming may be just a natural fluctuation, but clearly they like to cherry-pick their variations. They argue that the variability is too great to see a trend reliably over many decades, but now here’s Whitehouse arguing for a ‘trend’ over a few years. Just look at the graphs and tell me whether the period from 2002 to 2006 can possibly be attributed to variability or to a change in trend. Can you judge? As any climatologist will tell you, it is utterly meaningless to judge such things on the basis of a few years. Equally, we can’t attach too much significance, in terms of assessing trends, to the fact that the last Northern Hemisphere winter was the warmest since records began. (Did Whitehouse forget to mention that?) But that fact hardly suggests that we’re starting to see the end of global warming.
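
To see why climatologists say this, here is a toy sketch with entirely synthetic data (it stands in for no particular temperature record): a series with a steady warming of 0.02 °C per year plus modest year-to-year noise. Fit a straight line to the whole three decades and you recover the trend; fit one to any five-year window and the slope you get is essentially a roll of the dice.

```python
import numpy as np

# Synthetic "temperatures": a steady 0.02 degC/yr warming plus 0.1 degC
# of year-to-year noise. No real data are used here.
rng = np.random.default_rng(0)
years = np.arange(1975, 2007)
temps = 0.02 * (years - years[0]) + rng.normal(0.0, 0.1, size=years.size)

def fitted_slope(y0, y1):
    """Least-squares slope of the series between years y0 and y1 inclusive."""
    mask = (years >= y0) & (years <= y1)
    slope, _ = np.polyfit(years[mask], temps[mask], 1)
    return slope

print(f"1975-2006 slope: {fitted_slope(1975, 2006):+.3f} degC/yr")  # close to the true 0.02
for start in (1995, 1998, 2002):
    print(f"{start}-{start + 4} slope: {fitted_slope(start, start + 4):+.3f} degC/yr")
# The five-year slopes scatter widely relative to the true trend (some may
# even come out negative), although the underlying warming never changed.
```

None of this says anything about the real climate record, of course; it merely shows how little a handful of years can tell you about a trend.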

“Who has been silliest” – OK, this is a rhetorical flourish, but writers should pick their rhetoric carefully. If the current consensus on a warming trend generated by human activity proves to be wrong, or counteracted by some unforeseen negative feedback, that will not make the scientists silly. It will mean simply that they formed the best judgement based on the data available. Yes, there are other possible explanations, but at this point none of them looks anywhere near as compelling, or even likely.

My real point is that it would be refreshing if, just once, a climate sceptic came up with an argument that gave me pause and forced me to go and look at the literature and see if it was right. But their arguments are always so easily refuted with information that I can take straight off the very narrow shelves of my knowledge about climate change. That’s the tiresome thing. I suppose this may sound immodest, but truly my intention is just the opposite: if I, as a jobbing science writer, can so readily see why these arguments are wrong or why they omit crucial factors – or at the very least, why the climate community would reject them – then why do these sceptics, all of them smart people, not see this too? I am trying hard to resist the suspicion of intellectual dishonesty; but how much resistance am I expected to sustain?