Friday, June 29, 2007


Designs for life


[More matters arising from the Greenland conference: in this case, a paper that John Glass of the Venter Institute discussed, and which is now published in Science. It has had a lot of press, and rightly so. Here is the article I have written for Nature's News & Views section, which will appear in next week's issue.]

The genome of one bacterium has been successfully replaced with that of a different bacterium, transforming one species into another. This development is a harbinger of whole-genome engineering for practical ends.

If your computer doesn’t do the things you want, give it a new operating system. As they describe in Science [1], Carole Lartigue and colleagues at the J. Craig Venter Institute in Rockville, Maryland, have now demonstrated that the same idea will work for living cells. In an innovation that presages the dawn of organisms redesigned from scratch, the authors report the transplantation of an entire genome between species. They have moved the genome from one bacterium, Mycoplasma mycoides, to another, Mycoplasma capricolum, and have shown that the recipient cells can be ‘booted up’ with the new genome — in effect, a transplant that converts one species into another.

This is likely to be a curtain-raiser for the replacement of an organism’s genome with a wholly synthetic one, made by DNA-synthesis technology. The team at the Venter Institute hopes to identify the ‘minimal’ Mycoplasma genome: the smallest subset of genes that will sustain a viable organism [2]. The group currently has a patent application for a minimal bacterial genome of 381 genes identified in Mycoplasma genitalium, the remainder of the organism’s 485 protein-coding genes having been culled as non-essential.

This stripped-down genome would provide a ‘chassis’ on which organisms with new functions might be designed by combining it with genes from other organisms — for example, those encoding cellulase and hydrogenase enzymes, for making cells that respectively break down plant matter and generate hydrogen. Mycoplasma genitalium is a candidate platform for this kind of designer-genome synthetic biology because of its exceptionally small genome [2]. But it has drawbacks: a relatively slow growth rate, a requirement for complex growth media (it is a parasite of the primate genital tract), and no natural ‘competence’ to take up foreign DNA on its own. Moreover, its genetic proof-reading mechanisms are sloppy, giving it a rapid rate of mutation and evolution. The goat pathogens M. mycoides and M. capricolum are somewhat faster-growing, dividing in less than two hours.

Incorporation of foreign DNA into cells happens naturally, for example when viruses transfer DNA between bacteria. And in biotechnology, artificial plasmids (circular strands of DNA) a few kilobases in size are routinely transferred into microorganisms using techniques such as electroporation to get them across cell walls. In these cases, the plasmids and host-cell chromosomes coexist and replicate independently. It has remained unclear to what extent transfected DNA can cause a genuine phenotypic change in the host cells — that is, a full transformation in a species’ characteristics. Two years ago, Itaya et al. [3] transferred almost an entire genome of the photosynthetic bacterium Synechocystis PCC6803 into the bacterium Bacillus subtilis. But most of the added genes were silent and the cells remained phenotypically unaltered.

Genome transplantation in Mycoplasma is relatively easy because these organisms lack a bacterial cell wall, having only a lipid bilayer membrane. Lartigue et al. extracted the genome of M. mycoides by suspending the bacterial cells in agarose gel before breaking them open, then digesting the proteinaceous material with proteinase enzymes. This process leaves circular chromosomes, virtually devoid of protein and protected from shear stress by the agarose encasement. This genetic material was transferred to M. capricolum cells in the presence of polyethylene glycol, a compound known to cause fusion of eukaryotic cells (those with genomes contained in a separate organelle, the nucleus). Lartigue et al. speculate that some M. capricolum cells may have fused around the naked M. mycoides genomes.

The researchers did not need to remove the recipient’s DNA before adding that of the donor; instead, they added an antibiotic-resistance gene to the M. mycoides donor genome. With two genomes already present, no replication was needed before the recipient cells could divide: one daughter cell had the DNA of M. capricolum, the other that of M. mycoides. But in the presence of the antibiotic, only the latter survived. Some M. capricolum colonies did develop among the transplanted cells after about ten days, perhaps because their genomes recombined with the antibiotic-resistant M. mycoides genome. But most of the cells, and all of those that formed in the first few days, seemed to be both genotypically and phenotypically M. mycoides, as assessed by means of specific antibodies and proteomic analysis.

The main question raised by this achievement is how much difference a transplant will tolerate. That is, how much reprogramming is possible? The DNA sequences of M. mycoides and M. capricolum are only about 76% the same, and so it was by no means obvious that the molecular machinery of one would be able to operate on the genome of the other. Yet synthetic biology seems likely to make possible many new cell functions, not by whole-genome transplants but by fusing existing ones. When John I. Glass, a member of the Venter Institute’s team, presented the transplant results at a recent symposium on the merging of synthetic biology and nanotechnology [4], he also described the institute’s work on genome fusion (further comments on matters arising from the symposium appeared in last week’s issue of Nature [5]).

One target is to develop a species of the anaerobic bacterium Clostridium that will digest plant cellulose into ethanol, thus generating a fuel from biomass. Cellulose is difficult to break down — which is why trees remain standing for so long — but it can be done by Clostridium cellulolyticum. The product, however, is glucose rather than a fuel alcohol. Clostridium acetobutylicum, meanwhile, makes butanol and other alcohols, but not from cellulose. So a combination of genes from both organisms might do the trick. For such applications, it remains to be seen whether custom-built vehicles or hybrids will win the race.

1. Lartigue, C. et al. Science Express doi:10.1126/science.1144622 (2007).
2. Fraser, C. M. et al. Science 270, 397–403 (1995).
3. Itaya, M. et al. Proc. Natl Acad. Sci. USA 102, 15971–15976 (2005).
4. Kavli Futures Symposium The Merging of Bio and Nano: Towards Cyborg Cells 11–15 June 2007, Ilulissat, Greenland.
5. Editorial Nature 447, 1031–1032 (2007).

Tuesday, June 26, 2007

What is life? A silly question

[This will appear as a leader in next week's Nature, but not before having gone through an editorial grinder...]

While there is probably no technology that has not at some time been deemed an affront to God, none invites the accusation to the same degree as synthetic biology. Only a deity predisposed to cut-and-paste would suffer any serious challenge from genetic engineering as it has been practised in the past. But the efforts to redesign living organisms from scratch – either with a wholly artificial genome made by DNA synthesis technology or, more ambitiously, by using non-natural, bespoke molecular machinery – really might seem to justify the suggestion, made recently by the ETC Group, an environmental pressure group based in Ottawa, that “for the first time, God has competition.”

That accusation was levelled at scientists from the J. Craig Venter Institute in Rockville, Maryland, based on the suspicion that they had synthesized an organism with an artificial genome in the laboratory. The suspicion was unfounded – but this feat will surely be achieved in the next few years, judging from the advances reported at a recent meeting in Greenland on the convergence of synthetic biology and nanotechnology and the progress towards artificial cells.*

But one of the views commonly held by participants was that to regard such efforts as ‘creating life’ is more or less meaningless. This trope has such deep cultural roots, travelling via the medieval homunculus and the golem of Jewish legend to the modern Faustian myth written by Mary Shelley, that it will surely be hard to dislodge. Scientific attempts to draw up criteria for what constitutes ‘life’ only bolster the popular notion that it is something that appears when a threshold is crossed – a reminder that vitalism did not die alongside spontaneous generation.

It would be a service to more than synthetic biology if we might now be permitted to dismiss the idea that life is a precise scientific concept. One of the broader cultural benefits of attempts to make artificial cells is that they force us to confront the contextual contingency of the word. The trigger for the ETC Group’s protest was a patent filed by the Venter Institute last October on a ‘minimal bacterial genome’: a subset of genes, identified in Mycoplasma genitalium, required for the organism to be viable ‘in a rich bacterial culture medium’. That last condition sounds like a detail, but it is in fact essential. The minimal requirements depend on the environment — on what the organism does and doesn’t have to synthesize, for example, and what stresses it experiences. And participants at the Greenland meeting added the reminder that cells do not live alone, but in colonies and, in general, in ecosystems. Life is not a solitary pursuit.

Talk of ‘playing God’ will mostly be indulged either as a lazy journalistic cliché or as an alarmist slogan. But synthetic biology’s gradualist and relative view of what life means should perhaps be invoked to challenge equally lazy perspectives on life that are sometimes used to defend religious dogma. If, for example, this view undermines the notion that a ‘spark of humanity’ abruptly animates a fertilized egg – if the formation of a new being is recognized more clearly to be gradual, contingent and precarious – then the role of the term ‘life’ in that debate might acquire the ambiguity it has always warranted.

*Kavli Futures Symposium, 11-15 June, Ilulissat, Greenland.

Monday, June 25, 2007


The Ilulissat Statement

[This is a statement drafted by the participants of the conference in Greenland that I attended two weeks ago. Its release today coincides with the start of the third conference on synthetic biology in Zürich.]

Synthesizing the Future

A vision for the convergence of synthetic biology and nanotechnology

This document expresses the views that emerged from the Kavli Futures Symposium ‘The merging of bio and nano: towards cyborg cells’, 11-15 June 2007, Ilulissat, Greenland.

Approximately fifty years ago, two revolutions began. The invention of the transistor and the integrated circuit paved the way for the modern information society. At the same time, Watson and Crick unlocked the structure of the double helix of DNA, exposing the language of life with stunning clarity. The electronics revolution has changed the way we live and work, while the genetic revolution has transformed the way we think about life and medical science.

But a third innovation contemporaneous with these was the discovery by Miller and Urey that amino acids could be synthesized under conditions thought to exist on the early Earth. This gave us tantalizing hints that we could create life from scratch. That prospect on the one hand, and the ability to manipulate genetic information using the tools of biotechnology on the other, are now combined in the emerging discipline of synthetic biology. How we shape and implement this revolution will have profound effects for humanity in the next fifty years.

It was also almost fifty years ago that Feynman made his proposal for engineering matter at the atomic scale – the first intimation of the now burgeoning field of nanotechnology. Since the nanoscale is also the natural scale on which living cells organize matter, we are now seeing a convergence in which molecular biology offers inspiration and components to nanotechnology, while nanotechnology has provided new tools and techniques for probing the fundamental processes of cell biology. Synthetic biology looks sure to profit from this trend.

It is useful to divide synthetic biology, like computer technology, into two parts: hardware and software. The hardware – the molecular machinery of synthetic biology – is rapidly progressing. The ability to sequence and manufacture DNA is growing exponentially, with costs dropping by a factor of two every two years. The construction of arbitrary genetic sequences comparable to the genome size of simple organisms is now possible. Turning these artificial genomes into functioning single-cell factories is probably only a matter of time. On the hardware side of synthetic biology, the train is leaving the station. All we need to do is stoke the engine (by supporting foundational research in synthetic biology technology) and tell the train where to go.
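
To make the arithmetic of that trend concrete, here is a minimal sketch in Python of a cost that halves every two years; the starting price and the ten-year horizon are hypothetical numbers chosen for illustration, not figures from the symposium.

```python
# Illustrative only: project a cost that halves every 'halving_period' years.
def projected_cost(cost_now, years, halving_period=2.0):
    """Cost after 'years' on a halving curve: cost_now * 2^(-years/period)."""
    return cost_now * 2.0 ** (-years / halving_period)

# A hypothetical $1.00-per-base synthesis cost today would fall to about
# $0.03 per base after ten years on this trend (a factor of 2^5 = 32).
print(projected_cost(1.00, 10))  # ~0.031
```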

Less clear are the design rules for this remarkable new technology—the software. We have decoded the letters in which life’s instructions are written, and we now understand many of the words – the genes. But we have come to realize that the language is highly complex and context-dependent: meaning comes not from linear strings of words but from networks of interconnections, with their own entwined grammar. For this reason, writing new stories is currently beyond us – although we are starting to master simple couplets. Understanding the relative merits of rational design and evolutionary trial-and-error in this endeavor is a major challenge that will take years if not decades. This task will have fundamental significance, helping us to better understand the web of life as expressed in both the genetic code and the complex ecology of living organisms. It will also have practical significance, allowing us to construct synthetic cells that achieve their applied goals (see below) while creating as few problems as possible for the world around them.

These are not merely academic issues. The early twenty-first century is a time of tremendous promise and tremendous peril. We face daunting problems of climate change, energy, health, and water resources. Synthetic biology offers solutions to these issues: microorganisms that convert plant matter to fuels or that synthesize new drugs or target and destroy rogue cells in the body. As with any powerful technology, the promise comes with risk. We need to develop protective measures against accidents and abuses of synthetic biology. A system of best practices must be established to foster positive uses of the technology and suppress negative ones. The risks are real, but the potential benefits are truly extraordinary.

Because of the pressing needs and the unique opportunity that now exists from technology convergence, we strongly encourage research on two broad fronts:

Foundational Research
1. Support the development of hardware platforms for synthetic biology.
2. Support fundamental research exploring the software of life, including its interaction with the environment.
3. Support nanotechnology research to assist in the manufacture of synthetic life and its interfacing with the external world.

Societal Impacts and Applications
4. Support programs directed to address the most pressing applications, including energy and health care.
5. Support the establishment of a professional organization that will engage with the broader society to maximize the benefits, minimize the risks, and oversee the ethics of synthetic life.
6. Develop a flexible and sensible approach to ownership, sharing of knowledge, and regulation that takes into account the needs of all stakeholders.

Fifty years from now, synthetic biology will be as pervasive and transformative as is electronics today. And as with that technology, the applications and impacts are impossible to predict in the field’s nascent stages. Nevertheless, the decisions we make now will have enormous impact on the shape of this future.

The people listed below, participants at the Kavli Futures Symposium ‘The merging of bio and nano: towards cyborg cells’, 11-15 June 2007, Ilulissat, Greenland, agree with the above statement.

Robert Austin
Princeton University, Princeton, USA

Philip Ball
Nature, London, United Kingdom

Angela Belcher
Massachusetts Institute of Technology, Cambridge, USA

David Bensimon
Ecole Normale Superieure, Paris, France

Steven Chu
Lawrence Berkeley National Laboratory, Berkeley, USA

Cees Dekker
Delft University of Technology, Delft, The Netherlands

Freeman Dyson
Institute for Advanced Study, Princeton, USA

Drew Endy
Massachusetts Institute of Technology, Cambridge, USA

Scott Fraser
California Institute of Technology, Pasadena, USA

John Glass
J. Craig Venter Institute, Rockville, USA

Robert Hazen
Carnegie Institution of Washington, Washington, USA

Joe Howard
Max Planck Institute of Molecular Cell Biology and Genetics, Dresden, Germany

Jay Keasling
University of California at Berkeley, Berkeley, USA

Hiroaki Kitano
The Systems Biology Institute, and Sony Computer Science Laboratories, Japan

Paul McEuen
Cornell University, Ithaca, USA

Petra Schwille
TU Dresden, Dresden, Germany

Ehud Shapiro
Weizmann Institute of Science, Rehovot, Israel

Julie Theriot
Stanford University, Stanford, USA

Thursday, June 21, 2007


Should synthetic biologists patent their components?
[This, a piece for Nature’s online muse column, is the first fruit of the wonderful workshop I attended last week in Greenland on the convergence of synthetic biology and nanotechnology. That explains the apparent non-sequitur of a picture – taken at midnight in Ilulissat, way above the Arctic Circle. Much more on this to follow…(I may run out of pictures). Anyway, this piece surely won’t survive at this length on the Nature web pages, so here it is in full.]

Behind scary talk about artificial organisms lie real questions about the ownership of biological ‘parts’.

“For the first time, God has competition”, claimed the ETC Group, an environmental pressure organization, two weeks ago. With this catchy headline, they aimed to raise the alarm about a patent on “the world’s first-ever human-made species”, a bacterium allegedly created “with synthetic DNA” in the laboratories of the Venter Institute in Rockville, Maryland.

ETC had discovered a US patent application (20070122826) filed last October by the Venter Institute scientists. The institute was established by genomics pioneer Craig Venter, and one of its goals is to make a microorganism stripped of all non-essential genes (a ‘minimal cell’) as a platform for designing living cells from the bottom up.

ETC’s complaint was a little confused, because it could not figure out whether the synthetic bacterium, which the group dubbed ‘Synthia’, had actually been made. On close reading of the patent, however, it becomes pretty apparent that it has not.

Indeed, there was no indication that the state of the art had advanced this far. But it is a lot closer than you might imagine, as I discovered last week at a meeting in Greenland supported by the Kavli Foundation, entitled “The merging of bio and nano – towards cyborg cells.” Sadly, the details are currently under embargo – so watch this space.

However, the ETC Group was also exercised about the fact that Venter Institute scientists had applied for a patent on what it claimed were the set of essential genes needed to make a minimal organism – or as the application puts it, “for replication of a free-living organism in a rich bacterial culture medium.” These are a subset of the genes possessed by the microbe Mycoplasma genitalium, which has a total of just 485 genes that encode proteins.

If the patent were granted, anyone wanting to design an organism from the minimal subset of 381 genes identified by Venter’s team would need to apply for a license. “These monopoly claims signal the start of a high-stakes commercial race to synthesize and privatize synthetic life forms”, claimed ETC’s Jim Thomas. “Will Venter’s company become the ‘Microbesoft’ of synthetic biology?”

Now, that’s a better question (if rather hyperbolically posed). I’m told that this patent application has little chance of success, but it does raise an important issue. Patenting of genes has of course been a controversial matter for many years, but the advent of synthetic biology – of which a major strand involves redesigning living organisms by reconfiguring their genetic wiring – takes the debate to a new level.

“Synthetic biology presents a particularly revealing example of a difficulty that the law has frequently faced over the last 30 years – the assimilation of a new technology into the conceptual limits posed by existing intellectual property rights”, say Arti Rai and James Boyle, professors of law at Duke University in North Carolina, in a recent article in the journal PLoS Biology [1]. “There is reason to fear that tendencies in the way that US law has handled software on the one hand and biotechnology on the other could come together in a ‘perfect storm’ that would impede the potential of the technology.”

What is new here is that genes are used in a genuine ‘invention’ mode, to make devices. Researchers have organized ‘cassettes’ of natural genes into modules that can be inserted into microbial genomes, giving the organisms new types of behaviour. One such module acted as an oscillator, prompting regular bursts of synthesis of a fluorescent protein. When added to the bacterium E. coli, it made the cells flash on and off with light [2].
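
To give a flavour of how such a module works, here is a minimal sketch in Python of a ring of three mutually repressing genes – the ‘repressilator’ design of ref. [2] – using the standard textbook rate equations; the parameter values are illustrative, not those of the original paper.

```python
from scipy.integrate import solve_ivp

# Three genes in a ring, each repressing the next: a genetic oscillator.
# m[i] = mRNA level, p[i] = protein level (dimensionless units).
alpha, alpha0, beta, n = 200.0, 0.2, 5.0, 2.0  # illustrative parameters

def repressilator(t, y):
    m, p = y[:3], y[3:]
    dm = [alpha / (1.0 + p[(i - 1) % 3] ** n) + alpha0 - m[i] for i in range(3)]
    dp = [beta * (m[i] - p[i]) for i in range(3)]
    return dm + dp

# Start with one gene active; the asymmetry kicks off sustained oscillations.
sol = solve_ivp(repressilator, (0.0, 60.0), [5.0, 0.0, 0.0, 0.0, 0.0, 0.0],
                max_step=0.1)
# In the real circuit one protein drives a fluorescent reporter, so the
# oscillation appears as cells blinking on and off.
print("protein 1 range:", sol.y[3].min(), "to", sol.y[3].max())
```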

It is arguably a distortion of the notion of ‘invention’ to patent a gene that exists in nature. But if you can start to make new ‘devices’ by arranging these genes in new ways, doesn’t that qualify? And if so, how small and rudimentary a ‘part’ becomes patentable?

At the Greenland conference, Drew Endy of the Massachusetts Institute of Technology admitted that the framework for ownership and sharing of developments in synthetic biology remains wholly unresolved. He and his MIT colleagues are creating a Registry of Standard Biological Parts to be used as the elements of genetic circuitry, just like the transistors, capacitors and so forth in electronics catalogues. This registry places the parts in the public domain, which can provide some protection against attempts to patent them.

Endy helps to organize an annual competition among university students for the design of engineered organisms with new functions. One of the recent entries, from students at the Universities of Texas at Austin and California at San Francisco, was a light-sensitive version of E. coli that could grow into a photographic film [3]. Endy says that these efforts would be impossibly expensive and slow if the intellectual property rights on all the components first had to be cleared.

He compares it to a situation where, in order to write a piece of computer code, you have to apply for licensing on each command, and perhaps on certain combinations of them too.

And in synthetic biology that sort of patenting seems disturbingly easy right now. “You can take any device from the Texas Instruments TTL catalogue, put ‘genetically coded’ in front of it without reducing it to practice, and you have a good chance of getting a patent”, says Endy.

“Evidence from virtually every important industry of the twentieth century suggests that broad patents on foundational research can slow growth”, say Rai and Boyle. Bioengineer Jay Keasling of the University of California at Berkeley, who was also at the Greenland meeting, agrees that patenting has been a brake on the useful applications of biotechnology. He has been working for several years to engineer microbes to synthesize a compound called artemisinin, which is currently one of the best drugs available for fighting malaria [4]. Artemisinin is produced in tiny amounts by an Asian shrub. Extraction from this source is prohibitively expensive, making it impossible to use artemisinin to combat malaria in developing countries, where it kills 1-3 million people each year.

Keasling’s genetically engineered artemisinin could potentially be made at a fraction of the cost. But its synthesis involves orchestrating the activity of over 40 genetic components, which is a greater challenge than any faced previously in genetic engineering. He hopes that an industrial process might be up and running by 2009.

Scientists at the Venter Institute hope to make organisms that can provide cheap fuels from biomass sources, such as bacteria that digest plant matter and turn it into ethanol. When the ETC Group dismisses these efforts to use synthetic biology for addressing global problems as mere marketing strategies, they are grossly misjudging the researchers and their motives.

But might patenting pose more of a threat than twitchy pressure groups? “If you want to have a community sharing useful and good parts, 20 years of patent protection is obviously not helpful”, says Sven Panke of the ETH in Zürich, Switzerland, one of the organizers of the third Synthetic Biology conference being held there next week. “It would be very helpful if we could find a good way to reward but not impede.”

Endy points out that patenting is by no means the only way to protect intellectual property – although it is certainly one of the most costly, and so suits lawyers nicely. Copyright is another way to do it – although even that might now be too binding (thanks to the precedents set by Disney on Mickey Mouse), and it’s not obvious how it might work for synthetic biology anyway.

Tailor-made contracts are another option, but Endy says they tend to be ‘leaky’. It may be that some form of novel, bespoke legal framework would work best, but that could be expensive too.

Intellectual property is prominently on the agenda at next week’s Zürich conference. But Panke says “we are going to take a look at the issue, but we will not solve it. In Europe we are just starting to appreciate the problem.”

References
1. Rai, A. & Boyle, J. PLoS Biology 5(3), e58 (2007).
2. Elowitz, M. B. & Leibler, S. Nature 403, 335–338 (2000).
3. Levskaya, A. et al. Nature 438, 441 (2005).
4. Ro, D.-K. et al. Nature 440, 940–943 (2006).

Wednesday, June 20, 2007

NATO ponders cyberwarfare
[If I were good at kidding myself, I could imagine that NATO officials read my previous muse@nature.com article on the recent cyberattacks on Estonia. In any event, they seem now to be taking seriously the question of how to view such threats within the context of acts of war. Here’s my latest piece for Nature Online News.]

Attacks on Estonian computer networks have prompted high-level discussion.

Recent attacks on the electronic information networks of Estonia have forced NATO to consider the question of whether this form of cyberattack could ever be construed as an act of war.

The attacks on Estonia happened in April in response to the Baltic country’s decision to move a Soviet-era war memorial from the centre of its capital city Tallinn. This was interpreted by some as a snub to the country’s Soviet past and to its ethnic Russian population. The Estonian government claimed that many of the cyberattacks on government and corporate web sites, which were forced to shut down after being swamped by traffic, could be traced to Russian computers.

The Russian government denied any involvement. But NATO spokesperson James Appathurai says of the attacks that “they were coordinated; they were focused, [and] they had clear national security and economic implications for Estonia.”

Estonia is one of the most ‘wired’ countries in Europe, and renowned for its expertise in information technology. Earlier this year it conducted its national elections electronically.

Last week, NATO officials met at the alliance headquarters in Brussels to discuss how such cyberattacks should be dealt with. All 26 of the alliance members agreed that cyberdefence needs to be a top priority. “Urgent work is needed to enhance the ability to protect information systems of critical importance to the Alliance against cyberattacks”, said Appathurai.

The Estonian experience seems to have sounded a wake-up call: previous NATO statements on cyberdefence have amounted to little more than identifying the potential risk. But the officials may now have to wrestle with the problem of how such an attack, if state-sponsored, should be viewed within the framework of international law on warfare.

Irving Lachow, a specialist on information warfare at the National Defense University in Washington, DC, says that, in the current situation, it is unclear whether cyberattack could be interpreted as an act of war.

“My intuition tells me that cyberwarfare is fundamentally different in nature”, he says. “Traditional warfare is predicated on the physical destruction of objects, including human beings. Cyberwarfare is based on the manipulation of digital objects. Of course, cyberattacks can cause physical harm to people, but they must do so through secondary effects.”

But he adds that “things get trickier when one looks at strategic effects. It is quite possible for cyber attacks to impact military operations in a way that is comparable to physical attacks. If you want to shut down an air defense site, it may not matter whether you bomb it or hack its systems as long as you achieve the same result. Thus it is quite conceivable that a cyberattack could be interpreted as an act of war – it depends on the particulars.”

Clearly, then, NATO will have plenty to talk about. “I think it’s great that NATO is focused on this issue”, says Lachow. “Going through the policy development process will be a useful exercise. Hopefully it will produce guidelines that will help with future incidents.”

Thursday, June 07, 2007


Is this Chaucer’s astrolabe?

[This is the pre-edited version of my latest article for Nature’s online news. The original paper is packed with interesting stuff, and makes me want to know a lot more about Chaucer.]

Several astronomical instruments have been misattributed to the medieval English writer

Want to see the astrolabe used for astronomical calculations by Geoffrey Chaucer himself? You’ll be lucky, says Catherine Eagleton, a curator at the British Museum in London.

In a paper soon to be published in the journal Studies in History and Philosophy of Science [doi:10.1016/j.shpsa.2007.03.006], she suggests that the several astrolabes alleged to be ‘Chaucer’s own’ are probably nothing of the sort.

It’s more likely, she says, that these are instruments made after Chaucer’s death according to the design the English scholar set out in his Treatise on the Astrolabe. Such was Chaucer’s reputation in the centuries after his death that instrument-makers may have taken the drawings in his book as a blueprint.

Eagleton thinks that the claims are therefore back to front: the instruments alleged to be Chaucer’s own were not the ones he used for his drawings, but rather, the drawings supplied the design for the instruments.

Born around 1343, Chaucer is famous now for his literary work known as The Canterbury Tales. But he was a man of many interests, including the sciences of his age: the Tales demonstrate a deep knowledge of astronomy, astrology and alchemy, for example.

He was also a courtier and possibly acted as a spy for England’s King Richard II. “He was an intellectual omnivore”, says Eagleton. “There’s no record that he had a formal university education, but he clearly knows the texts that academics were reading.”

There are several ‘Chaucerian astrolabes’ that have characteristic features depicted in Chaucer’s treatise. Sometimes this alone has been used to date the instruments to the fourteenth century and to link them more or less strongly to Chaucer himself.

An astrolabe is an instrument shaped rather like a pocket watch, with movable dials and pointers that enable the user to calculate the positions of the stars and the Sun at a particular place and time: a kind of early astronomical computer. It is simultaneously a device for timekeeping, determining latitude, and making astrological forecasts.
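
For a sense of the kind of calculation the instrument mechanizes, here is a minimal sketch in Python of one such computation – the Sun’s altitude from the observer’s latitude, the solar declination and the hour angle (the standard spherical-astronomy relation); the worked example is illustrative.

```python
import math

def solar_altitude(latitude_deg, declination_deg, hour_angle_deg):
    """Altitude of the Sun above the horizon, in degrees, from
    sin(alt) = sin(lat)*sin(dec) + cos(lat)*cos(dec)*cos(H) --
    the relation an astrolabe solves mechanically with its rotating plates."""
    lat = math.radians(latitude_deg)
    dec = math.radians(declination_deg)
    H = math.radians(hour_angle_deg)
    return math.degrees(math.asin(
        math.sin(lat) * math.sin(dec) + math.cos(lat) * math.cos(dec) * math.cos(H)
    ))

# Example: London (latitude 51.5 N) at local noon (hour angle 0) at the June
# solstice (declination about +23.4 degrees): the Sun stands about 62 degrees high.
print(round(solar_altitude(51.5, 23.4, 0.0), 1))
```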

The astrolabe may have been invented by the ancient Greeks or Indians, but Islamic scientists made its construction a fine art. The designs shown in Chaucer’s books have some distinctive features – in particular, capital letters marking the 24 hours of the day, and a symbol of a dog to represent the Dog Star.

Eagleton says that these correspondences have led some collectors and dealers to claim that a particular instrument could be the very one Chaucer held in his hand. “There are probably four or five of these around”, she says, “but no one needs five astrolabes.”

“There’s a real tendency to link any fourteenth-century instrument to him”, she says, adding that museum curators are usually more careful. The British Museum doesn’t make such strong claims for its own ‘Chaucerian astrolabes’, for example. (The one shown above is called the ‘Chaucer astrolabe’, and is unusual in being inscribed with a pre-Chaucerian date of 1326 – but the museum is suitably cautious about what it infers from that.)

But Eagleton says that an instrument held at Merton College in Oxford University is generally just called ‘Chaucer’s astrolabe’, with the implication that it was his.

She says that none of the ‘Chaucerian astrolabes’ can in fact be definitively dated to the fourteenth century, and that all four of those she studied closely, including another Chaucerian model at the British Museum and one at the Museum of the History of Science in Oxford called the Painswick astrolabe, have features that suggest they were made after Chaucer’s treatise was written. For example, the brackets holding the rings of these two astrolabes have unusual designs that could be attempts to copy the awkward drawing of a bracket in Chaucer’s text, which merges views from different angles. The treatise, in other words, came before the instruments.

“It is extremely unlikely that any of the surviving instruments were Chaucer’s own astrolabe”, she concludes.

So why have others thought they were? “There is this weird celebrity angle, where people get a bit carried away”, she says. “It is always tempting to attach an object to a famous name – it’s a very human tendency, which lets us tell stories about them. But it winds me up when it’s done on the basis of virtually no evidence.”

This isn’t a new trend, however. Chaucer was already a celebrity by the sixteenth century, so that a whole slew of texts and objects became attributed to him. This was common for anyone who became renowned for their scholarship in the Middle Ages.

Of course, whether or not an astrolabe was ‘Chaucer’s own’ would be likely to affect the price it might fetch. “This association with Chaucer probably boosts the value”, says Eagleton. “I might be making myself unpopular with dealers and collectors.”

Monday, June 04, 2007


Tendentious tilings

[This is my Materials Witness column for the July issue of Nature Materials]

Quasicrystal enthusiasts may have been baffled by a rather cryptic spate of comments and clarifications following in the wake of a recent article claiming that medieval Islamic artists had the tools needed to construct quasicrystalline patterns. That suggestion was made by Peter Lu at Harvard University and Paul Steinhardt at Princeton (Science 315, 1106; 2007). [See my previous post on 23 February 2007] But in a news article in the same issue, staff writer John Bohannon explained that these claims had already caused controversy, being allegedly anticipated in the work of crystallographer Emil Makovicky at the University of Copenhagen (Science 315, 1066; 2007).

The central thesis of Lu and Steinhardt is that Islamic artists used a series of tile shapes, which they call girih tiles, to construct their complex patterns. They can be used to make patterns of interlocking pentagons and decagons with the ‘forbidden’ symmetries characteristic of quasicrystalline metal alloys, in which these apparent symmetries, evident in diffraction patterns, are permitted by a lack of true periodicity.
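
A quick way to see why these symmetries are ‘forbidden’ in ordinary crystals is the crystallographic restriction: a periodic lattice can support an n-fold rotation only if 2cos(2π/n) is an integer, which singles out n = 1, 2, 3, 4 and 6. The short Python check below enumerates this (standard textbook material, not part of Lu and Steinhardt’s analysis).

```python
import math

# Crystallographic restriction: a periodic lattice admits an n-fold rotation
# only if the rotation matrix can have an integer trace, 2*cos(2*pi/n).
# Five- and ten-fold symmetry fail the test, so they require aperiodic
# (quasicrystalline) order.
for n in range(1, 13):
    trace = 2 * math.cos(2 * math.pi / n)
    allowed = abs(trace - round(trace)) < 1e-9
    print(f"{n:2d}-fold rotation: {'allowed' if allowed else 'forbidden'}")
```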

Although nearly all of the designs evident on Islamic buildings of this time are periodic, Lu and Steinhardt found that those on a fifteenth-century shrine in modern-day Iran can be mapped almost perfectly onto another tiling scheme, devised by mathematician Roger Penrose, which does generate true quasicrystals.

But in 1992 Makovicky made a very similar claim for a different Islamic tomb dating from 1197. Some accused Lu and Steinhardt of citing Makovicky’s work in a way that did not make this clear. The authors, meanwhile, admitted that they were unconvinced by Makovicky’s analysis and didn’t want to get into an argument about it.

The dispute has ruffled feathers. Science subsequently published a ‘clarification’ that irons out barely perceptible wrinkles in Bohannon’s article, while Lu and Steinhardt attempted to calm the waters with a letter in which they ‘gladly acknowledge’ earlier work (Science 316, 982; 2007). It remains to be seen whether that will do the trick, for Makovicky wasn’t the only one upset by their paper. Design consultant Jay Bonner in Santa Fe has also made previous links between Islamic patterns and quasicrystals.

Most provocatively, Bonner discusses the late-fifteenth-century Topkapi architectural scroll that furnishes the key evidence for Lu and Steinhardt’s girih scheme. Bonner points out how this scroll reveals explicitly the ‘underlying polygonal sub-grid’ used to construct the pattern it depicts. He proposes that the artists commonly used such a polygonal matrix, composed of tile-like elements, and demonstrates how these can create aperiodic space-filling designs.

Bonner does not mention quasicrystals, and his use of terms such as self-similarity and even symmetry does not always fit easily with that of physicists and mathematicians. But there’s no doubting that his work deepens the ‘can of worms’ that Bohannon says Lu and Steinhardt have opened.

All this suggests that the satellite conference of the forthcoming European Crystallographic Meeting in Marrakech this August, entitled ‘The enchanting crystallography of Moroccan ornaments’, might be more stormy than enchanting – for it includes back-to-back talks by Makovicky and Bonner.