Thursday, August 09, 2007


Chemistry in pictures

Joachim Schummer and Tami Spector have just published in Hyle a paper based on their presentation at the 2004 meeting ‘The Public Images of Chemistry’ in Paris. This was one of the most interesting talks of the conference, looking at how the images used to portray chemists and their profession both by themselves and by others over the past several centuries have influenced and been influenced by public perceptions. They look at tropes drawn (often subconsciously) from aesthetics in the visual arts, and at how the classic ‘brochure’ photos of today often still allude to the images of flask-gazing cranks found in depictions of alchemists and derived from uroscopy. (See, for example, the logo for my ‘Lab Report’ column in Prospect.) I shamelessly plagiarize these ideas at every opportunity. Recommended reading.

Monday, August 06, 2007


A wardrobe for Mars

[This is my Material Witness column for the September issue of Nature Materials.]

No one has a date booked for a party on the moon or on Mars, but that hasn’t stopped some from thinking about what to wear. One thing is clear: there is nothing fashionably retro about the Apollo look. If, as seems to be the plan, we are going out there this time to do some serious work, the bulky gas bags in which Alan Shepard and his buddies played golf and rode around in buggies aren’t up to the job. Pressurized with oxygen, the suits could be bent at the arm and leg joints only with considerable effort. A few hours of lunar hiking and you’d be exhausted.

In comparison, the fetching silver suits worn for the pre-Apollo Mercury missions look almost figure-hugging. But that’s because they were worn ‘soft’ – the astronauts didn’t venture outside their pressurized cabins, and the suits would have inflated only in the event of a pressure loss. In the vacuum of space, pressurization is needed to prevent body fluids from boiling.

But pressurization is only part of the problem. Space-suit design presents a formidable, multi-faceted materials challenge. The solution has to involve a many-layered skin – sometimes more than a dozen layers, each with a different function. This makes the suit inevitably bulky and expensive.

While the Mercury suits were basically souped-up high-altitude pilots’ suits, made from Neoprene-coated and aluminized nylon, today’s spacewear tends to follow the Apollo principle of several distinct garments worn in layers. A liquid cooling and ventilation garment (LCVG) offers protection from temperatures that can reach 135 °C in the Sun’s glare, while allowing body moisture to escape; a pressure suit (PS) acts as a gas-filled balloon; and a thermomechanical garment (TMG) protects against heat loss, energetic radiation, puncture by micrometeoroids, and abrasion.

These suits initially made use of the materials to hand, but inevitably this resulted in some ‘lock-in’ whereby ‘tradition’ dominated materials choices rather than their being reconsidered with each redesign. Some Apollo materials, such as the polyimide Kapton and the polyamide Kevlar, are still used – Kapton’s rigidity and low gas permeability recommend it for containing the ballooning PS, while Kevlar’s strength is still hard to beat for the TMG. But not all the choices are ideal: a Spandex LCVG has rather poor wicking and ventilation properties. Indeed, a reanalysis from scratch suggests superior replacements for most of the ‘traditional’ materials (J. L. Marcy et al., J. Mat. Eng. Perf. 13, 208; 2004; see paper here).

Space suits have increased in mass since Apollo, because they are now used in zero rather than lunar gravity. But martian gravity is a third that of Earth. To improve suit flexibility and reduce mass, Dava Newman and coworkers at the Massachusetts Institute of Technology are reconsidering the basic principles: using tight-fitting garments rather than gas to exert pressure, while strengthening with a stiff skeleton along lines of non-extension. These BioSuits are several years away from being ready for Mars – but there’s plenty of time yet to prepare for that party.

Friday, August 03, 2007

A bad memory

I have just read all the papers on ‘the memory of water’ published in a special issue of the journal Homeopathy, which will be released in print on 10 August. Well, someone had to do it. I rather fear that my response, detailed below, will potentially make some enemies of people with whom I’ve been on friendly terms. I hope not, however. I hope they will respect my right to present my views as much as I do theirs to present theirs. But I felt my patience being eroded as I waded through this stuff. Might we at least put to rest now the tedious martyred rhetoric about ‘scientific heresy’, which, from years of unfortunate experience, I can testify to being the badge of the crank? I once tried to persuade Jacques Benveniste of how inappropriate it was to portray a maverick like John Maddox as a pillar of the scientific establishment – but he wouldn’t have it, I suppose because that would have undermined his own platform. Ah well, here’s the piece, a much shortened version of which will appear in my Crucible column in the September issue of Chemistry World.

************************************************************

I met Jacques Benveniste in 2004, shortly before he died. He had tremendous charm and charisma, and I rather liked him. But I felt then, and still feel now, that in ‘discovering’ the so-called memory of water he lost his way as a scientist and was sucked into a black hole of pseudoscience that was just waiting for someone like him to come along.

This particular hole is, of course, homeopathy. In 1988, Benveniste published a paper in Nature that seemed to offer an explanation for how homeopathic remedies could retain their biological activity even after being diluted so much that not a single molecule of the original ‘active’ ingredients remains [1]. It is common for homeopathic remedies to have undergone up to 200 tenfold dilutions of the original ‘mother tincture’, which is quite sufficient to wash away even the awesome magnitude of Avogadro’s constant.

Benveniste and his coworkers studied the effect of dilution of an antibody that stimulates human immune cells called basophils to release histamine – a response that can provoke an allergic reaction. In effect, the antibody mimics an allergen. The researchers reported that the antibody retains its ability to provoke this response even when diluted by 10^60 – and, even more oddly, that this activity rises and falls more or less periodically with increasing dilution.
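For anyone who wants to check the arithmetic, here is a back-of-the-envelope sketch in Python (the litre of one-molar ‘mother tincture’ is purely an illustrative assumption):

    # Back-of-the-envelope: how many molecules survive serial tenfold dilutions?
    # Illustrative assumption: one litre of 'mother tincture' at 1 mol/l.
    AVOGADRO = 6.022e23          # molecules per mole
    molecules = 1.0 * AVOGADRO   # roughly 6 x 10^23 molecules to start with
    for n in range(1, 201):      # 200 successive tenfold dilutions
        molecules /= 10.0
        if molecules < 1.0:
            print("Fewer than one molecule left, on average, after dilution", n)
            break
    # Prints dilution 24: the remaining 176 steps dilute nothing but water,
    # and a 10^60-fold dilution is more hopeless still.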

The paper’s publication in Nature inevitably sparked a huge controversy, which turned into a media circus when Nature’s then editor John Maddox led an investigation into Benveniste’s laboratory techniques. Several laboratories tried subsequently to repeat the experiment, but never with unambiguous results. The experiment proved irreproducible, and came to be seen as a classic example of what US chemist Irving Langmuir christened ‘pathological science’. (The details are discussed in my book on water [2], or you can read Michel Schiff’s book [3] for a deeply partisan view from the Benveniste camp.)

Benveniste remained convinced of his results, however, and continued working on them in a privately funded lab. He eventually claimed that he could ‘programme’ specific biological activity into pure water using electromagnetic radiation. He predicted a forthcoming age of ‘digital biology’, in which the electromagnetic signatures of proteins and other biological agents would be digitally recorded and programmed into water from information sent down phone lines.

Homeopaths have persistently cited Benveniste’s results as evidence that their treatments do not necessarily lack scientific credibility. Such claims have now culminated in a special issue of the journal Homeopathy [4] that presents a dozen scientific papers on the ‘memory of water.’

In at least one sense, this volume is valuable. The memory of water is an idea that refuses to go away, and so it is good to have collected together all of the major strands of work that purport to explain or demonstrate it. The papers report some intriguing and puzzling experimental results that deserve further attention. Moreover, the issue does not duck criticism, including a paper from renowned water expert José Teixeira of CEA Saclay in France that expresses the sceptic’s viewpoint. Teixeira points out that any explanation based on the behaviour of pure water “is totally incompatible with our present knowledge of liquid water.”

But perhaps the true value of the collection is that it exposes this field as an intellectual shambles. Aware that I might hereby be making enemies of some I have considered friends, I have to say that the cavalier way in which ‘evidence’ is marshalled and hypotheses are proposed with disregard for the conventions of scientific rigour shocked even me – and I have been following this stuff for far too long.

Trying to explain homeopathy through some kind of aqueous ‘memory’ effect has plenty of problems created by the traditions of the field itself, in which ‘remedies’ are prepared by serial dilution and vigorous shaking, called succussion. For example, it is necessary not only that the memory exists but that it is amplified during dilution. In his overview paper, guest editor Martin Chaplin, a chemist at South Bank University in London whose web site on water is a mine of valuable information, points to the surprising recent observation that some molecules form clusters of increasing size as they get more dilute. But this, as he admits, would imply that most homeopathic solutions would be totally inactive, and only a tiny handful would be potent.

Another problem, pointed out by David Anick of the Harvard Medical School and John Ives of the Samueli Institute for Information Biology in Virginia, is that if we are to suppose the ‘memory’ to be somehow encoded in water’s structure, then we must accept that there should be many thousands of such stable structures, each accounting for a specific remedy – for several thousand distinct remedies are marketed by homeopathic companies, each allegedly distinct in its action.

Yet another difficulty, seldom admitted by homeopaths, is that the dilutions of the mother tincture must allegedly be made by factors of ten and not any other amount. This is not mentioned in the papers here, presumably because it is too absurd even for these inventive minds to find an explanation. A related issue that is addressed by Anick is the tradition of using only certain dilution factors, such as 10^6, 10^12, 10^30 and 10^200. He offers a mathematical model for why this should be so that masquerades as an explanation but is in fact tantamount to a refutation: “it would be inconceivable”, he says, “that one number sequence would work in an ideal manner for every mother tincture.” Still, he concludes, the convention might be ‘good enough’. So why not test whether it makes any difference at all?

One of the challenges in assessing these claims is that they tend to play fast and loose with original sources, which obliges you to do a certain amount of detective work. For example, Chaplin states that the ability of enzymes to ‘remember’ the pH of their solvent even when the water is replaced by a non-aqueous solvent implies that the hydrogen ions seem to have an effect in their absence, “contrary to common sense at the simplistic level.” But the paper from 1988 in which this claim is made [5] explains without great ceremony that the ionizable groups in the enzyme simply retain their same ionization state when withdrawn from the aqueous solvent and placed in media that lack the capacity to alter it. There’s no mysterious ‘memory’ here.

Similarly, Chaplin’s comment that “nanoparticles may act in combination with nanobubbles to cause considerable ordering within the solution, thus indicating the possibility of solutions forming large-scale coherent domains [in water]” is supported by a (mis-)citation to a paper that proposes, without evidence, the generally discredited idea of ‘ice-like’ ordering of water around hydrophobic surfaces.

One of the hypotheses for water’s ‘memory’, worked out in some detail by Anick and Ives, invokes the dissolution of silicate anions from the glass walls of the vessel used for dilution and succussion, followed by polymerization of these ions into a robust nanostructured particle around the template of the active ingredient initially present. Certainly, silicate does get added, in minute quantities, to water held in glass (this seemed to be one of the possible explanations for another piece of water pathological science, polywater [6]). But how to progress from there, particularly when such a dilute solution favours hydrolysis of polysilicates over their condensation?

Well, say Anick and Ives, there are plenty of examples of silicate solutions being templated by solutes. That’s how ordered mesoporous forms of silica are synthesized in the presence of surfactants, which aggregate into micelles around which the silica condenses [7]. This, then, wraps up that particular part of the problem.

But it does nothing of the sort. This templating has been seen only at high silicate concentrations. It happens when the template is positively charged, complementary to the charge on the silicate ions. The templating gives a crude cast, very different from a biologically active replica of an enzyme or an organic molecule. Indeed, why on earth would a ‘negative’ cast act like the ‘positive’ mould anyway? The template is in general encapsulated by the silica, and so doesn’t act as a catalyst for the formation of many replicas. And for this idea to work, the polysilicate structure has to be capable of reproducing itself once the template has been diluted away – and at just the right level of replicating efficiency to keep its concentration roughly constant on each dilution.

The last of these requirements elicits the greatest degree of fantastical invention from the authors: during the momentary high pressures caused by succussion, the silicate particles act as templates that impose a particular clathrate structure on water, which then itself acts as a template for the formation of identical silicate particles, all in the instant before water returns to atmospheric pressure. (Elsewhere the authors announce that “equilibrium of dissolved [silicate] monomers with a condensed silica phase can take months to establish.”) None of this, meanwhile, is supported by the slightest experimental evidence; the section labelled ‘Experiments to test the silica hypothesis’ instead describes experiments that could be done.

Another prominent hypothesis for water’s memory draws on work published in 1988 by Italian physicists Giuliano Preparata and Emilio Del Giudice [8]. They claimed that water molecules can form long-ranged ‘quantum coherent domains’ by quantum entanglement, a phenomenon that makes the properties of quantum particles co-dependent over long ranges. Entanglement certainly exists, and it does do some weird stuff – it forms the basis of quantum computing, for example. But can it make water organize itself into microscopic or even macroscopic information-bearing domains? Well, these ‘quantum coherent domains’ have never been observed, and the theory is now widely disregarded. All the same, this idea has become the deus ex machina of pathological water science, a sure sign that the researchers who invoke it have absolutely no idea what is going on in their experiments (although one says such things at one’s peril, since these researchers demonstrated a litigious tendency when their theory was criticized in connection with cold fusion).

Such quantum effects on water’s memory are purportedly discussed in the special issue by Otto Weingärtner of Dr Reckeweg & Co. in Bensheim, Germany – although the paper leaves us none the wiser, for it contains neither experiments nor theory that demonstrate any connection with water. The role of entanglement is made more explicit by Lionel Milgrom of Imperial College in London, who says that “the homeopathic process is regarded as a set of non-commuting complementary observations made by the practitioner… Patient, practitioner, and remedy comprise a three-way entangled therapeutic entity, so that attempting to isolate any of them ‘collapses’ the entangled state.” In other words, this notion is not really about quantum mechanics at all, but quantum mysticism.

Benveniste’s long-term collaborator Yolène Thomas of the Institut André Lwoff in Villejuif argues, reasonably enough, that in the end experiment, not theory, should be the arbiter. And at face value, the ‘digital biology’ experiments that she reports are deeply puzzling. She claims that Benveniste and his collaborators accumulated many examples of biological responses being triggered by the digitized radiofrequency ‘fingerprints’ of molecular substances – for example, tumour growth being inhibited by the ‘Taxol signal’, the lac operon genetic switch of bacteria being flipped by the signal from the correct enantiomeric form of arabinose, and vascular dilation in a guinea pig heart being triggered by the signal from the classic vasodilator acetylcholine. What should one make of this? Well, first, it is not clear why it has anything to do with the ‘memory of water’, nor with homeopathy. But second, I can’t help thinking that these experiments, however sincere, have an element of bad faith about them. If you truly believe that you can communicate molecular-recognition information by electromagnetic means, there is no reason whatsoever to study the effect using biological systems as complex as whole cells, let alone whole hearts. Let’s see it work for a simple enzymatic reaction, or better still, an inorganic catalyst, where there is far less scope for experimental artefacts. It is hard to imagine any reason why such experiments have not been attempted, except that success or failure would be less ambiguous.

What emerges from these papers is an insight into the strategy adopted more or less across the board by those sympathetic to the memory of water. They begin with the truism that it is ‘unscientific’ to simply dismiss an effect a priori because it seems to violate scientific laws. They cite papers which purportedly show effects suggestive of a ‘memory’, but which often on close inspection do nothing of the kind. They weave a web from superficially puzzling but deeply inconclusive experiments and ‘plausibility arguments’ that dissolve the moment you start to think about them, before concluding with the humble suggestion that of course all this doesn’t provide definitive evidence but proves there is something worth further study.

One has to conclude, after reading this special issue, that you can find an ‘explanation’ at this level for water’s memory from just about any physical phenomenon you care to imagine – dissipative non-equilibrium structures, nanobubbles, epitaxial ordering, gel-like thixotropy, oxygen free radical reactions… In each case the argument leaps from vague experiments (if any at all) to sweeping conclusions that typically take no account whatsoever of what is known with confidence about water’s molecular-scale structure, and which rarely address themselves even to any specific aspect of homeopathic practice. The tiresome consequence is that dissecting the idea of the memory of water is like battling the many-headed Hydra, knowing that as soon as you lop off one head, another will sprout.

In his original paper in Nature, Jacques Benveniste offered a hypothesis for how the memory effect works: “specific information must have been transmitted during the dilution/shaking process. Water could act as a template for the [antibody] molecule, for example by an infinite hydrogen-bonded network or electric and magnetic fields.” Read these sentences carefully and you will perhaps decide that Benveniste missed his calling as a post-modernist disciple of his compatriot Jacques Derrida. It has no objective meaning that I can discern. It sounds like science, but only because it copies the contours of scientific prose. This, I would submit, is a fair metaphor for the state of ‘water memory’ studies today.

I once read a book supposedly about the philosophy of religion which was in fact an attempt to make a logical case for God’s existence. Having stepped through all of the traditional arguments – the ontological, the argument from design and so forth – the author admitted that all of them had significant flaws, but concluded that collectively they made a persuasive case. This group of papers is similar, implying that a large enough number of flimsy arguments add up to a single strong one. It leaves me feeling about homeopathy much as I do about religion: those who find it genuinely helpful are right to use it, but they shouldn’t try to use scientific reason to support their decision.


1. E. Davenas et al., Nature 333, 816 (1988).
2. P. Ball, H2O: A Biography of Water (Weidenfeld & Nicolson, 1999).
3. M. Schiff, The Memory of Water (Thorsons, 1995).
4. Homeopathy 96, 141-226 (2007).
5. A. Zaks & A. Klibanov, J. Biol. Chem. 263, 3194 (1988).
6. F. Franks, Polywater (MIT Press, Cambridge, MA, 1981).
7. C. T. Kresge et al., Nature 359, 710 (1992).
8. E. Del Giudice et al., Phys. Rev. Lett. 61, 1085 (1988).

Wednesday, August 01, 2007

Pay your money and take your chances
[This is the pre-edited version of my latest muse article for news@nature.com.]

Fatalities are an inevitable part of human spaceflight, and space tourism companies will have to face up to that fact.

The tragic deaths of three workers in an explosion at the Mojave Air and Space Port in California, while testing a rocket propulsion system for a privately funded spacecraft, shouldn’t be seen as the first fatalities of commercial spaceflight. This was an industrial accident, not a failure of aerospace engineering.

All the same, the accident will surely provoke questions about the safety of space tourism. The victims worked for Scaled Composites, a company that has been commissioned to make a new spacecraft for Richard Branson’s Virgin Galactic space-tourism enterprise. Virgin has announced the intention of launching the first commercial space flights in 2009.

Scaled Composites is run by entrepreneur Burt Rutan, whose SpaceShipOne became the first privately funded craft to reach space in 2004, winning the $10-million Ansari X Prize created to stimulate private manned spaceflight technology. Virgin Galactic aims to use a successor, SpaceShipTwo, to take space tourists 62 miles up into sub-orbital space at a cost of around £100,000 ($200,000) each.

Other aerospace engineers have been keen to emphasize that the accident (which seems to have been caused by a component of rocket fuel) does not reflect on the intrinsic safety of space flight. They are right in a sense, although the incident seems likely to set back Virgin’s plans. Nevertheless, it is a reminder that rocket science is potentially lethal – and not just in flight. Three US astronauts died in a fire during supposedly routine launch-pad tests for the Apollo 1 mission in 1967.

Virgin insists that “safety is at the heart of the design” of their space tourism programme. Perhaps it is now time to ask what this might mean – or more precisely, how the issue of safety in commercial space travel can be reconciled with its economic viability, accessibility, and projected traffic volume.

These factors make up a complex equation, and it is fair to say that no one yet has shown clearly how it might be solved. What, in short, is the business model for space tourism?

So far, the marketing strategy has relied on rhetoric that sounds stirring but makes it just as well that these companies do not need to ask a bank for a start-up loan. The vision simply isn’t coherent.

On the one hand, there is the pretence of democratizing space. While governments have jealously kept spaceflight in the hands of a closed elite, says the X Prize Foundation, commercial spacecraft will make it available to everyone. Virgin Galactic is not motivated by quite the same anti-government libertarianism, but does suggest that “safety and cost issues [have] previously made space travel the preserve of the privileged few.”

All of this, of course, sits uneasily with the fact that the only space tourists so far have been multi-millionaires, and that a $200,000-per-head ticket price does not exactly fall within the range of your average family holiday.

Ah, but that will change as the industry grows, says Peter Diamandis, chairman of the X Prize Foundation. “Over the next decade we’ll see the price of seats drop from $200 K to $50 K, and perhaps as low as $25 K per person”, he says. That’s more expensive than a luxury cruise, admittedly, but many might consider it for a once-in-a-lifetime experience.

I’ve yet to see a convincing explanation of the economics, however. Diamandis has outlined the sums on the basis that “the cost of operating a mature transportation system (car, train, plane) is typically three times the cost of the fuel.” But one of the reasons the Space Shuttle is so cripplingly expensive is that the inspections and repairs needed after each flight are on a quite different scale from those of airlines. And, one has sadly to add, even then they are evidently flawed.

Even if the business model can be made to work, it will clearly need to depend initially on rich thrill-seekers. But the early days of every new transportation technology have been hazardous, aviation especially so. Safety has tended to be a luxury afforded only once the industry is established.

The current history of manned spaceflight bears this out. As of 2003, 18 of the 430 humans who had flown in space died in accidents: a fatality rate of about 4 per cent (although the precise figures can be debated because of multiple flights by individuals). That’s comparable to the risk of dying in an Everest expedition. The odds haven’t stopped (mostly rich) people from scaling Everest, but former US astronaut Rick Hauck says that he wouldn’t have flown if he’d known what his chances of coming back alive were.

Looked at another way, manned spaceflight has so far proved to be 45,000 times more dangerous than taking a commercial air flight. It is perhaps unfair to compare craft like SpaceShipTwo with Apollo missions – SpaceShipOne has been compared instead to the US’s experimental X-15 rocket plane, which had only one fatal crash in 199 flights. But however you look at it, Virgin Galactic is inventing a new technology, while Virgin Atlantic had decades of experience to draw on.
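Those numbers at least hang together. A quick sanity check, in the same back-of-the-envelope spirit (and treating the 4 per cent as a per-person rather than per-flight risk, which is itself a simplification):

    # Rough consistency check of the spaceflight risk figures quoted above.
    flyers, deaths = 430, 18                  # humans flown and in-flight deaths, as of 2003
    space_risk = deaths / float(flyers)       # about 0.042, i.e. roughly 4 per cent
    airline_factor = 45000                    # spaceflight said to be this much riskier
    implied_airline_risk = space_risk / airline_factor
    print("Risk per spacefarer: %.1f%%" % (100 * space_risk))
    print("Implied airline risk: about 1 in %d flights" % round(1 / implied_airline_risk))
    # Gives about 4.2% and about 1 in 1,075,000 -- the right ballpark for commercial aviation.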

Who cares, advocates of human space travel will respond. Without risk, we’d never achieve anything. “It’s the dreamers, it’s the doers, it’s the furry mammals who are evolved, take the risks, or die”, says Diamandis. “That’s what we stand for.”

But wait a minute. Are you saying that space tourism will put safety first, or that it depends on the bravery of do-or-die pioneers? Either will play well to a particular audience, but you can’t have it both ways. If the argument is that a few foolhardy fat cats must put their lives on the line so that the industry can ultimately become cheap and safe enough to reach a mass market, so be it. But somehow, I can’t see that sales pitch working.

Wednesday, July 25, 2007


Cold is hot

Three cheers for BBC4’s Absolute Zero, a two-part series on ‘cold’ that began last night. I thought this was one of the best science programmes I’ve seen for a long time. It made me very happy to see the programme explaining – gasp – thermodynamics, including the Carnot cycle. Even the now-obligatory dramatic reconstructions were fairly unobtrusive and a little bit inventive. This was startlingly old-fashioned TV, in a good way: a story told chronologically, with good contributors (Simon Schaffer is always good value, and it was nice to see that they’d found Hasok Chang), minimal flashy graphics, and a sober commentary.

Nothing’s perfect – in particular, I suspect that many people will have found the explanations of Carnot and refrigeration hard to follow without having been given some essential concepts such as the gas laws and latent heat. But full marks for trying. And I learnt some stuff, such as Michael Faraday’s discovery of the principle of the compression-condensation cycle of fridges, and Joule’s work on the relationship of heat and energy. I look forward to tonight’s episode.

Saturday, July 21, 2007


Liquids bounce again
[I can’t resist putting up this little item, written for news@nature, because it is a classic kitchen experiment you can do yourself – the recipe is below.]

Jumping jets move from the bathroom to the kitchen

After bouncing shampoo, physicists now bring you bouncing cooking oil. A team in Texas has found that the trampolining of a liquid jet falling onto a bath of the same liquid is more common than expected.

Last year, a group in the Netherlands studied this bouncing effect for a jet of shampoo. The bounce, which was first reported over 40 years ago, happens because of the peculiar nature of shampoo, which gets thinner (less viscous) as it flows. A jet of it hitting a liquid surface is therefore lubricated by a thin layer at the interface, enabling it to bounce off rather than merge.

But the liquids now studied by Matthew Thrasher and colleagues at the University of Texas at Austin don’t have this property – they are viscous, but have ‘normal’ flow behaviour, like water.

The researchers directed a jet of oil vertically onto the surface of a tank of the same oil. They found that the jet could undergo both a ‘leaping’ rebound and a bizarre ‘flat’ bounce in which it sprang horizontally across the liquid surface.

The bounce here is due to a thin layer of air that separates the two liquid surfaces, the researchers say.

They point out that the effect can easily be recreated in a kitchen experiment with cooking oil. Just fill a glass pie dish with about 4 cm of oil and pour onto it a thin stream from a cup about 3-6 cm above the surface. While pouring, move the stream in a circle about once every 2 seconds (or rotate the dish on a Lazy Susan). The bounce can be encouraged by passing a small rod like a chopstick through the stream every now and then.

Wednesday, July 18, 2007


Chartres on film

Late last year I nearly froze my tender parts off standing on the turret of a ruined Tudor mansion in Sussex talking about Chartres cathedral. (Tudor? Well, fortunately it was shrouded in icy fog.) I was being filmed for a programme being made for the National Geographic channel on the building of the cathedral, the subject of my forthcoming book Universe of Stone (published next May by Random House and Harper Collins, since you ask).

I’ve just been sent a DVD of the result, which you can watch here. It is not terrible. That doesn’t sound much of a recommendation, but frankly it has become hard to expect very much of history programmes now. There is, of course, a lot of footage of people running around dressed as peasants or bishops, looking respectively angry or devout. For what was presumably a very modest budget, some of the footage looks rather impressive: in particular, nice graphics of the building (which I’d love to get my hands on for the book, instead of slaving away pencil in hand). But I’d have liked rather more of the real cathedral, without all the soft focus and computer enhancement – it doesn’t need tarting up like that.

Needless to say, every last bit of drama is wrung from the story, often at the cost of stepping way beyond what we can confidently say. I found myself several times saying “Are you sure?”, or “How do you know that?” We get the standard line on issues such as the Cult of the Carts and the donation of the Windows of the Trades, which have now been more or less debunked. We get the usual mystical speculation about the labyrinth (walk it if you like, but don’t assume that medieval pilgrims did the same). Some things, such as the novelty of the flying buttresses and of the bleu de Chartres in the windows, are just plain wrong, in an attempt to make Chartres seem more innovative than it was. I sensed some influence of John James’ dodgy ‘contractors of Chartres’ idea. So I’m afraid this is very much a ‘guide book’ picture of what went on, which treats the history in a rather cavalier fashion. But there are some nice remarks by the contributors (I liked the sceptical angle on the miraculous survival of the Sacred Tunic), and I suppose all one can really hope for is that a programme like this makes you consider going to Chartres, which you certainly should.

As for the British guy who tells us solemnly that we know nothing about the master builder of Chartres except that “he must have been very experienced” – well, duh. It must have been the cold squeezing the blood from my brain.

Tuesday, July 10, 2007


It could only happen in the movies
[This is the pre-edited version of my latest article for muse@nature.com]

Real science can’t compete at the movies with bad science. But perhaps that’s how it is meant to be.

“I’m arresting you for breaking the laws of physics”, says the policeman to the levitating man in a cartoon that speaks volumes, not least about the curiously legalistic terminology that science has adopted for its fundamental principles. In this spirit, two physicists at the University of Central Florida appear intent on making a citizen’s arrest of the entire Hollywood movie industry. In a preprint, they examine some egregious errors of physics in recent blockbusters [1].

The contempt that Hollywood shows for science is notorious. From loud explosions in deep space to genetically engineered spiders that transmute man into semi-arachnid, the movies are littered with scientific nonsense. But how much does this matter?

A lot, say Costas Efthimiou and Ralph Llewellyn. They argue that science bloopers in movies “contribute to science illiteracy.” Hollywood, they say, “is reinforcing (or even creating) incorrect scientific attitudes that can have negative results for society”.

If that is true, I suspect it’s not in the way they think it is. For decades, Hollywood has endorsed the archetype of the mad scientist [2] – an image that vastly predates the advent of cinema [3]. But recent portrayals of scientists in big movies are more nuanced: they are sometimes saviours (Armageddon), sometimes tortured geniuses (A Beautiful Mind), and most extraordinary of all, sometimes sexy (Jeff Goldblum in Jurassic Park).

Efthimiou and Llewellyn are less concerned with the image of science, however, than with its veracity. They explain (with equations) why the bus in Speed couldn’t possibly jump a gap in a horizontal stretch of elevated highway, why the Green Goblin in Spiderman couldn’t hold up the cable of the tramway between Manhattan and Roosevelt Island, and why Magneto in X-Men: The Last Stand would have to glow like a lightbulb and lose 1350 pounds of body weight in order to shift the Golden Gate Bridge by 5 km.

At face value, their analysis tilts at windmills. Let’s suppose, say, that the makers of X-Men appreciated that, to generate the power needed for his feat, Magneto would have to emit about 18 million watts per square metre. Are we to suppose that viewers, seeing him become incandescent, would think “ah – blackbody radiation”? Or might they instead think “ah – superpowers make you glow”?
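For the curious, here is roughly where that incandescence comes from, assuming the quoted power is simply radiated from Magneto’s surface as blackbody emission (the 18 million watts per square metre is Efthimiou and Llewellyn’s figure; the rest is my own back-of-the-envelope sketch):

    # Stefan-Boltzmann estimate: what temperature radiates ~18 MW per square metre?
    SIGMA = 5.67e-8               # Stefan-Boltzmann constant, W m^-2 K^-4
    power_per_area = 1.8e7        # W m^-2, the figure quoted from the preprint
    temperature = (power_per_area / SIGMA) ** 0.25
    print("Implied surface temperature: about %.0f K" % temperature)
    # Roughly 4,200 K -- hotter than an incandescent light-bulb filament (~2,800-3,300 K).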

But these explorations of Hollywood’s scientific absurdities do raise some interesting questions. For example, how readily can we intuit cases of physics abuse? When we see superheroes perform impossible feats of gymnastics, do we innately sense that laws are being broken?

That question is often turned around in discussions of sporting prowess. No one supposes that baseball fielders or football players are predicting trajectories using Newtonian mechanics; rather, they seem to have a superior intuitive sense of its dynamical consequences.

The interesting question for movie makers (although I doubt that they formulate it this way) is not “how can I respect physical laws?”, but rather, “how can I break physical laws without shattering an illusion of plausibility?” The answer to that question might imply interesting things about how much our senses have been evolutionarily honed to appreciate the laws of physics. British biologist Lewis Wolpert has argued persuasively that, on the contrary, much of science depends on subverting intuitive reasoning about the world [4].

But should we endorse the violations of physics routinely perpetrated by Hollywood? Efthimiou and Llewellyn clearly think not. I would argue that you might as well complain about such ‘errors’ in the Greek myths or in fairy tales, or for that matter in Warner Brothers cartoons. Blockbuster movies are in many ways the modern equivalents of classical myths, their scenarios so unashamedly fantastic that we have no illusions about what we’re getting. I suspect movie-makers and movie-goers both understand this unspoken contract.

In fact, the unworldly exploits of superheroes have been used to good effect in physics education [5]. And Efthimiou and Llewellyn have themselves already acknowledged that Hollywood’s bad physics provides a superb vehicle for cultivating students’ skills at making back-of-the-envelope estimates, or what are now known as Fermi problems (because of Enrico Fermi’s talent for finding ways to make estimates about seemingly obscure or intractable quantitative questions) [6,7].
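In that spirit, here is a Fermi-style sketch of the Speed bus jump; the gap width and the bus’s speed are my own illustrative guesses, not figures taken from the film or from the preprint:

    # Fermi-problem sketch: can a bus clear a gap in a level stretch of elevated road?
    g = 9.8                        # gravitational acceleration, m/s^2
    gap, speed = 15.0, 30.0        # metres and m/s -- illustrative guesses only
    airtime = gap / speed          # time spent over the gap, about 0.5 s
    drop = 0.5 * g * airtime ** 2  # free-fall drop during that time
    print("The bus falls about %.1f m while crossing the gap" % drop)
    # About 1.2 m: without a ramp, the front of the bus is well below the far edge.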

One might argue that scenarios such as the bus jump in Speed are more troubling, as they tend to be presented in an apparently realistic mode. I’m not so sure; this scene is merely replaying an old movie convention (the death-defying leap – see John Wayne in Brannigan), and indeed the more serious shortcoming is its lack of imagination.

The main message people take away from movies, however, isn’t concerned with how the physical world works; it is about narratives. Bad physics is far less dismaying than, say, the militaristic bravado of Independence Day or the xenophobia of True Lies.

Nonetheless, if you crave an antidote to Hollywood’s bad science, you can find it on YouTube, where sixteen scientists working on a project to make gas sensors from carbon nanotubes are posting video diaries of their progress.

In this project, called Nano2Hybrids and supported by the UK-based Vega Science Trust, the Belgium-based scientists are receiving video training from an experienced documentary maker, and they record their results on a website where viewers are encouraged to leave feedback.

The aim is to show people how scientific research really works. So I expect to see weeks of frustration as experiments fail, lots of staring at computer screens, tedious late-night observational runs, and some 18-rated language when the referees’ reports arrive. It’s already reassuring to see so few white lab coats.

This isn’t the first attempt to put science on YouTube. But it may be the first to try to fully document a research project this way. That could be highly informative, not least because it should explode some hoary myths about what scientists do and how they behave.

But somehow, I doubt that a scientist describing her results in front of a laptop will compete with Spiderman swinging around Manhattan. And that’s as it should be, because fantasy is not supposed to be constrained by the mundane laws that confound scientists in the lab.

References
1. Efthimiou, C. J. & Llewellyn, R. A. Preprint http://arxiv.org/abs/0707.1167 (2007).
2. Frayling, C. Mad, Bad, and Dangerous: The Scientists and the Cinema. (Reaktion Books, 2005).
3. Haynes, R. From Faust to Strangelove: Representations of the Scientist in Western Literature (Johns Hopkins University Press, 1994).
4. Wolpert, L. The Unnatural Nature of Science (Faber, 1992).
5. Gresh, L. H. & Weinberg, R. The Science of Superheroes (Wiley, 2002).
6. Efthimiou, C. & Llewellyn, R. Preprint http://xxx.arxiv.org/physics/0303005 (2003).
7. Efthimiou, C. J. & Llewellyn, R. A. Preprint http://xxx.arxiv.org/physics/060805 (2007).

Friday, June 29, 2007


Designs for life


[More matters arising from the Greenland conference: in this case, a paper that John Glass of the Venter Institute discussed, and which is now published in Science. It has had a lot of press, and rightly so. Here is the article I have written for Nature's News & Views section, which will appear in next week's issue.]

The genome of one bacterium has been successfully replaced with that of a different bacterium, transforming one species into another. This development is a harbinger of whole-genome engineering for practical ends.

If your computer doesn’t do the things you want, give it a new operating system. As they describe in Science [1], Carole Lartigue and colleagues at the J. Craig Venter Institute in Rockville, Maryland, have now demonstrated that the same idea will work for living cells. In an innovation that presages the dawn of organisms redesigned from scratch, the authors report the transplantation of an entire genome between species. They have moved the genome from one bacterium, Mycoplasma mycoides, to another, Mycoplasma capricolum, and have shown that the recipient cells can be ‘booted up’ with the new genome — in effect, a transplant that converts one species into another.

This is likely to be a curtain-raiser for the replacement of an organism’s genome with a wholly synthetic one, made by DNA-synthesis technology. The team at the Venter Institute hopes to identify the ‘minimal’ Mycoplasma genome: the smallest subset of genes that will sustain a viable organism [2]. The group currently has a patent application for a minimal bacterial genome of 381 genes identified in Mycoplasma genitalium, the remainder of the organism’s 485 protein-coding genes having been culled as non-essential.

This stripped-down genome would provide a ‘chassis’ on which organisms with new functions might be designed by combining it with genes from other organisms — for example, those encoding cellulase and hydrogenase enzymes, for making cells that respectively break down plant matter and generate hydrogen. Mycoplasma genitalium is a candidate platform for this kind of designer-genome synthetic biology because of its exceptionally small genome [2]. But it has drawbacks, particularly a relatively slow growth rate and a requirement for complex growth media: it is a parasite of the primate genital tract, and is not naturally ‘competent’ on its own. Moreover, its genetic proof-reading mechanisms are sloppy, giving it a rapid rate of mutation and evolution. The goat pathogens M. mycoides and M. capricolum are somewhat faster-growing, dividing in less than two hours.

Incorporation of foreign DNA into cells happens naturally, for example when viruses transfer DNA between bacteria. And in biotechnology, artificial plasmids (circular strands of DNA) a few kilobases in size are routinely transferred into microorganisms using techniques such as electroporation to get them across cell walls. In these cases, the plasmids and host-cell chromosomes coexist and replicate independently. It has remained unclear to what extent transfected DNA can cause a genuine phenotypic change in the host cells — that is, a full transformation in a species’ characteristics. Two years ago, Itaya et al. [3] transferred almost an entire genome of the photosynthetic bacterium Synechocystis PCC6803 into the bacterium Bacillus subtilis. But most of the added genes were silent and the cells remained phenotypically unaltered.

Genome transplantation in Mycoplasma is relatively easy because these organisms lack a bacterial cell wall, having only a lipid bilayer membrane. Lartigue et al. extracted the genome of M. mycoides by suspending the bacterial cells in agarose gel before breaking them open, then digesting the proteinaceous material with proteinase enzymes. This process leaves circular chromosomes, virtually devoid of protein and protected from shear stress by the agarose encasement. This genetic material was transferred to M. capricolum cells in the presence of polyethylene glycol, a compound known to cause fusion of eukaryotic cells (those with genomes contained in a separate organelle, the nucleus). Lartigue et al. speculate that some M. capricolum cells may have fused around the naked M. mycoides genomes.

The researchers did not need to remove the recipient’s DNA before adding that of the donor; instead, they added an antibiotic-resistance gene to the M. mycoides donor genome. With two genomes already present, no replication was needed before the recipient cells could divide: one daughter cell had the DNA of M. capricolum, the other that of M. mycoides. But in the presence of the antibiotic, only the latter survived. Some M. capricolum colonies did develop in the transplanted cells after about ten days, perhaps because their genomes recombined with the antibiotic-resistant M. mycoides. But most of the cells, and all of those that formed in the first few days, seemed to be both genotypically and phenotypically M. mycoides, as assessed by means of specific antibodies and proteomic analysis.

The main question raised by this achievement is how much difference a transplant will tolerate. That is, how much reprogramming is possible? The DNA sequences of M. mycoides and M. capricolum are only about 76% the same, and so it was by no means obvious that the molecular machinery of one would be able to operate on the genome of the other. Yet synthetic biology seems likely to make possible many new cell functions, not by whole-genome transplants but by fusing existing ones. When John I. Glass, a member of the Venter Institute’s team, presented the transplant results at a recent symposium on the merging of synthetic biology and nanotechnology [4], he also described the institute’s work on genome fusion (further comments on matters arising from the symposium appeared in last week’s issue of Nature [5]).

One target is to develop a species of an anaerobic Clostridium bacterium that will digest plant cellulose into ethanol, thus generating a fuel from biomass. Cellulose is difficult to break down — which is why trees remain standing for so long — but it can be done by Clostridium cellulolyticum. However, this creates glucose. Clostridium acetobutylicum, meanwhile, makes butanol and other alcohols, but not from cellulose. So a combination of genes from both organisms might do the trick. For such applications, it remains to be seen whether custom-built vehicles or hybrids will win the race.

1. Lartigue, C. et al. Science Express doi:10.1126/science.1144622 (2007).
2. Fraser, C. M. et al. Science 270, 397–403 (1995).
3. Itaya, M. et al. Proc. Natl Acad. Sci. USA 102, 15971–15976 (2005).
4. Kavli Futures Symposium The Merging of Bio and Nano: Towards Cyborg Cells 11–15 June 2007, Ilulissat, Greenland.
5. Editorial Nature 447, 1031–1032 (2007).

Tuesday, June 26, 2007

What is life? A silly question

[This will appear as a leader in next week's Nature, but not before having gone through an editorial grinder...]

While there is probably no technology that has not at some time been deemed an affront to God, none invites the accusation to the same degree as synthetic biology. Only a deity predisposed to cut-and-paste would suffer any serious challenge from genetic engineering as it has been practised in the past. But the efforts to redesign living organisms from scratch – either with a wholly artificial genome made by DNA synthesis technology or, more ambitiously, by using non-natural, bespoke molecular machinery – really might seem to justify the suggestion, made recently by the ETC Group, an environmental pressure group based in Ottawa, that “for the first time, God has competition.”

That accusation was levelled at scientists from the J. Craig Venter Institute in Rockville, Maryland, based on the suspicion that they had synthesized an organism with an artificial genome in the laboratory. The suspicion was unfounded – but this feat will surely be achieved in the next few years, judging from the advances reported at a recent meeting in Greenland on the convergence of synthetic biology and nanotechnology and the progress towards artificial cells.*

But one of the views commonly held by participants was that to regard such efforts as ‘creating life’ is more or less meaningless. This trope has such deep cultural roots, travelling via the medieval homunculus and the golem of Jewish legend to the modern Faustian myth written by Mary Shelley, that it will surely be hard to dislodge. Scientific attempts to draw up criteria for what constitutes ‘life’ only bolster the popular notion that it is something that appears when a threshold is crossed – a reminder that vitalism did not die alongside spontaneous generation.

It would be a service to more than synthetic biology if we might now be permitted to dismiss the idea that life is a precise scientific concept. One of the broader cultural benefits of attempts to make artificial cells is that they force us to confront the contextual contingency of the word. The trigger for the ETC Group’s protest was a patent filed by the Venter Institute last October on a ‘minimal bacterial genome’: a subset of genes, identified in Mycoplasma genitalium, required for the organism to be viable ‘in a rich bacterial culture medium’. That last sounds like a detail, but is in fact essential. The minimal requirements depend on the environment – on what the organism does and doesn’t have to synthesize, for example, and what stresses it experiences. And participants at the Greenland meeting added the reminder that cells do not live alone, but in colonies and, in general, in ecosystems. Life is not a solitary pursuit.

Talk of ‘playing God’ will mostly be indulged either as a lazy journalistic cliché or as an alarmist slogan. But synthetic biology’s gradualist and relative view of what life means should perhaps be invoked to challenge equally lazy perspectives on life that are sometimes used to defend religious dogma. If, for example, this view undermines the notion that a ‘spark of humanity’ abruptly animates a fertilized egg – if the formation of a new being is recognized more clearly to be gradual, contingent and precarious – then the role of the term ‘life’ in that debate might acquire the ambiguity it has always warranted.

*Kavli Futures Symposium, 11-15 June, Ilulissat, Greenland.

Monday, June 25, 2007


The Ilulissat Statement

[This is a statement drafted by the participants of the conference in Greenland that I attended two weeks ago. Its release today coincides with the start of the third conference on synthetic biology in Zürich.]

Synthesizing the Future

A vision for the convergence of synthetic biology and nanotechnology

This document expresses the views that emerged from the Kavli Futures Symposium ‘The merging of bio and nano: towards cyborg cells’, 11-15 June 2007, Ilulissat, Greenland.

Approximately fifty years ago, two revolutions began. The invention of the transistor and the integrated circuit paved the way for the modern information society. At the same time, Watson and Crick unlocked the structure of the double helix of DNA, exposing the language of life with stunning clarity. The electronics revolution has changed the way we live and work, while the genetic revolution has transformed the way we think about life and medical science.

But a third innovation contemporaneous with these was the discovery by Miller and Urey that amino acids may be synthesized in conditions thought to exist on the early Earth. This gave us tantalizing hints that we could create life from scratch. That prospect on the one hand, and the ability to manipulate genetic information using the tools of biotechnology on the other, are now combined in the emerging discipline of synthetic biology. How we shape and implement this revolution will have profound effects for humanity in the next fifty years.

It was also almost fifty years ago that the proposal was made by Feynman of engineering matter at the atomic scale – the first intimation of the now burgeoning field of nanotechnology. Since the nanoscale is also the natural scale on which living cells organize matter, we are now seeing a convergence in which molecular biology offers inspiration and components to nanotechnology, while nanotechnology has provided new tools and techniques for probing the fundamental processes of cell biology. Synthetic biology looks sure to profit from this trend.

It is useful to divide synthetic biology, like computer technology, into two parts: hardware and software. The hardware – the molecular machinery of synthetic biology – is rapidly progressing. The ability to sequence and manufacture DNA is growing exponentially, with costs dropping by a factor of two every two years. The construction of arbitrary genetic sequences comparable to the genome size of simple organisms is now possible. Turning these artificial genomes into functioning single-cell factories is probably only a matter of time. On the hardware side of synthetic biology, the train is leaving the station. All we need to do is stoke the engine (by supporting foundational research in synthetic biology technology) and tell the train where to go.

Less clear are the design rules for this remarkable new technology—the software. We have decoded the letters in which life’s instructions are written, and we now understand many of the words – the genes. But we have come to realize that the language is highly complex and context-dependent: meaning comes not from linear strings of words but from networks of interconnections, with its own entwined grammar. For this reason, writing new stories is currently beyond us – although we are starting to master simple couplets. Understanding the relative merits of rational design and evolutionary trial-and-error in this endeavor is a major challenge that will take years if not decades. This task will have fundamental significance, helping us to better understand the web of life as expressed in both the genetic code and the complex ecology of living organisms. It will also have practical significance, allowing us to construct synthetic cells that achieve their applied goals (see below) while creating as few problems as possible for the world around them.

These are not merely academic issues. The early twenty-first century is a time of tremendous promise and tremendous peril. We face daunting problems of climate change, energy, health, and water resources. Synthetic biology offers solutions to these issues: microorganisms that convert plant matter to fuels or that synthesize new drugs or target and destroy rogue cells in the body. As with any powerful technology, the promise comes with risk. We need to develop protective measures against accidents and abuses of synthetic biology. A system of best practices must be established to foster positive uses of the technology and suppress negative ones. The risks are real, but the potential benefits are truly extraordinary.

Because of the pressing needs and the unique opportunity that now exists from technology convergence, we strongly encourage research on two broad fronts:

Foundational Research
1. Support the development of hardware platforms for synthetic biology.
2. Support fundamental research exploring the software of life, including its interaction with the environment.
3. Support nanotechnology research to assist in the manufacture of synthetic life and its interfacing with the external world.

Societal Impacts and Applications
4. Support programs directed to address the most pressing applications, including energy and health care.
5. Support the establishment of a professional organization that will engage with the broader society to maximize the benefits, minimize the risks, and oversee the ethics of synthetic life.
6. Develop a flexible and sensible approach to ownership, sharing of knowledge, and regulation, that takes into account the needs of all stakeholders.

Fifty years from now, synthetic biology will be as pervasive and transformative as is electronics today. And as with that technology, the applications and impacts are impossible to predict in the field’s nascent stages. Nevertheless, the decisions we make now will have enormous impact on the shape of this future.

The people listed below, participants at the Kavli Futures Symposium ‘The merging of bio and nano: towards cyborg cells’, 11-15 June 2007, Ilulissat, Greenland, agree with the above statement

Robert Austin
Princeton University, Princeton, USA

Philip Ball
Nature, London, United Kingdom

Angela Belcher
Massachusetts Institute of Technology, Cambridge, USA

David Bensimon
Ecole Normale Superieure, Paris, France

Steven Chu
Lawrence Berkeley National Laboratory, Berkeley, USA

Cees Dekker
Delft University of Technology, Delft, The Netherlands

Freeman Dyson
Institute for Advanced Study, Princeton, USA

Drew Endy
Massachusetts Institute of Technology, Cambridge, USA

Scott Fraser
California Institute of Technology, Pasadena, USA

John Glass
J. Craig Venter Institute, Rockville, USA

Robert Hazen
Carnegie Institution of Washington, Washington, USA

Joe Howard
Max Planck Institute of Molecular Cell Biology and Genetics, Dresden, Germany

Jay Keasling
University of California at Berkeley, Berkeley, USA

Hiroaki Kitano
The Systems Biology Institute, and Sony Computer Science Laboratories, Japan

Paul McEuen
Cornell University, Ithaca, USA

Petra Schwille
TU Dresden, Dresden, Germany

Ehud Shapiro
Weizmann Institute of Science, Rehovot, Israel

Julie Theriot
Stanford University, Stanford, USA

Thursday, June 21, 2007


Should synthetic biologists patent their components?
[This, a piece for Nature’s online muse column, is the first fruit of the wonderful workshop I attended last week in Greenland on the convergence of synthetic biology and nanotechnology. That explains the apparent non-sequitur of a picture – taken at midnight in Ilulissat, way above the Arctic Circle. Much more on this to follow…(I may run out of pictures). Anyway, this piece surely won’t survive at this length on the Nature web pages, so here it is in full.]

Behind scary talk about artificial organisms lie real questions about the ownership of biological ‘parts’.

“For the first time, God has competition”, claimed environmental pressure organization the ETC Group two weeks ago. With this catchy headline, they aimed to raise the alarm about a patent on “the world’s first-ever human-made species”, a bacterium allegedly created “with synthetic DNA” in the laboratories of the Venter Institute in Rockville, Maryland.

ETC had discovered a US patent application (20070122826) filed last October by the Venter Institute scientists. The institute was established by genomics pioneer Craig Venter, and one of its goals is to make a microorganism stripped of all non-essential genes (a ‘minimal cell’) as a platform for designing living cells from the bottom up.

ETC’s complaint was a little confused, because it could not figure out whether the synthetic bacterium, which the group dubbed ‘Synthia’, has actually been made. On close reading of the patent, however, it becomes pretty apparent that it has not.

Indeed, there was no indication that the state of the art had advanced this far. But it is a lot closer than you might imagine, as I discovered last week at a meeting in Greenland supported by the Kavli Foundation, entitled “The merging of bio and nano – towards cyborg cells.” Sadly, the details are currently under embargo – so watch this space.

However, the ETC Group was also exercised about the fact that Venter Institute scientists had applied for a patent on what it claimed was the set of essential genes needed to make a minimal organism – or as the application puts it, “for replication of a free-living organism in a rich bacterial culture medium.” These are a subset of the genes possessed by the microbe Mycoplasma genitalium, which has a total of just 485 genes that encode proteins.

If the patent were granted, anyone wanting to design an organism from the minimal subset of 381 genes identified by Venter’s team would need to apply for a license. “These monopoly claims signal the start of a high-stakes commercial race to synthesize and privatize synthetic life forms”, claimed ETC’s Jim Thomas. “Will Venter’s company become the ‘Microbesoft’ of synthetic biology?”

Now, that’s a better question (if rather hyperbolically posed). I’m told that this patent application has little chance of success, but it does raise an important issue. Patenting of genes has of course been a controversial matter for many years, but the advent of synthetic biology – of which a major strand involves redesigning living organisms by reconfiguring their genetic wiring – takes the debate to a new level.

“Synthetic biology presents a particularly revealing example of a difficulty that the law has frequently faced over the last 30 years – the assimilation of a new technology into the conceptual limits posed by existing intellectual property rights”, say Arti Rai and James Boyle, professors of law at Duke University in North Carolina, in a recent article in the journal PLoS Biology [1]. “There is reason to fear that tendencies in the way that US law has handled software on the one hand and biotechnology on the other could come together in a ‘perfect storm’ that would impede the potential of the technology.”

What is new here is that genes are used in a genuine ‘invention’ mode, to make devices. Researchers have organized ‘cassettes’ of natural genes into modules that can be inserted into microbial genomes, giving the organisms new types of behaviour. One such module acted as an oscillator, prompting regular bursts of synthesis of a fluorescent protein. When added to the bacterium E. coli, it made the cells flash on and off with light [2].
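To give a flavour of how such a module behaves, here is a minimal, illustrative Python sketch of the kind of three-gene oscillator described in ref. [2], in which each gene represses the next around a ring. The parameter values are arbitrary choices of my own that happen to give sustained oscillations, not those of the original study.

# Illustrative sketch only: a repressilator-style oscillator in which three genes
# repress one another in a ring, so protein levels rise and fall in turn. This is
# roughly what makes the engineered E. coli cells 'flash' with fluorescent protein.
import numpy as np

def repressilator(state, alpha=200.0, alpha0=0.2, beta=5.0, n=2.0):
    m, p = state[:3], state[3:]          # mRNA and protein levels for the three genes
    dm, dp = np.empty(3), np.empty(3)
    for i in range(3):
        repressor = p[(i - 1) % 3]       # each gene is repressed by the previous one in the ring
        dm[i] = -m[i] + alpha / (1.0 + repressor**n) + alpha0
        dp[i] = -beta * (p[i] - m[i])
    return np.concatenate([dm, dp])

# Crude Euler integration - good enough to see the oscillation in one reporter protein.
state = np.array([1.0, 2.0, 3.0, 0.0, 0.0, 0.0])
dt = 0.01
reporter = []
for _ in range(60000):
    state = state + dt * repressilator(state)
    reporter.append(state[3])

print("reporter protein swings between", round(min(reporter), 1), "and", round(max(reporter), 1))

In real cells the oscillations are much noisier and drift out of phase from cell to cell, which is part of what makes designing reliable genetic circuits so hard.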

It is arguably a distortion of the notion of ‘invention’ to patent a gene that exists in nature. But if you can start to make new ‘devices’ by arranging these genes in new ways, doesn’t that qualify? And if so, how small and rudimentary a ‘part’ becomes patentable?

At the Greenland conference, Drew Endy of the Massachusetts Institute of Technology admitted that the framework for ownership and sharing of developments in synthetic biology remains wholly unresolved. He and his MIT colleagues are creating a Registry of Standard Biological Parts to be used as the elements of genetic circuitry, just like the transistors, capacitors and so forth in electronics catalogues. This registry places the parts in the public domain, which can provide some protection against attempts to patent them.

Endy helps to organize an annual competition among university students for the design of engineered organisms with new functions. One of the recent entries, from students at the Universities of Texas at Austin and California at San Francisco, was a light-sensitive version of E. coli that could grow into a photographic film [3]. Endy says that these efforts would be impossibly expensive and slow if the intellectual property rights on all the components first had to be cleared.

He compares it to a situation where, in order to write a piece of computer code, you have to apply for licensing on each command, and perhaps on certain combinations of them too.

And in synthetic biology that sort of patenting seems disturbingly easy right now. “You can take any device from the Texas Instruments TTL catalogue, put ‘genetically coded’ in front of it without reducing it to practice, and you have a good chance of getting a patent”, says Endy.

“Evidence from virtually every important industry of the twentieth century suggests that broad patents on foundational research can slow growth”, say Rai and Boyle. Bioengineer Jay Keasling of the University of California at Berkeley, who was also at the Greenland meeting, agrees that patenting has been a brake on the useful applications of biotechnology. He has been working for several years to engineer microbes to synthesize a compound called artemisinin, which is currently one of the best drugs available for fighting malaria [4]. Artemisinin is produced in tiny amounts by an Asian shrub. Extraction from this source is prohibitively expensive, making it impossible to use artemisinin to combat malaria in developing countries, where it kills 1-3 million people each year.

Keasling’s genetically engineered artemisinin could potentially be made at a fraction of the cost. But its synthesis involves orchestrating the activity of over 40 genetic components, which is a greater challenge than any faced previously in genetic engineering. He hopes that an industrial process might be up and running by 2009.

Scientists at the Venter Institute hope to make organisms that can provide cheap fuels from biomass sources, such as bacteria that digest plant matter and turn it into ethanol. When the ETC Group dismisses these efforts to use synthetic biology for addressing global problems as mere marketing strategies, it grossly misjudges the researchers and their motives.

But might patenting pose more of a threat than twitchy pressure groups? “If you want to have a community sharing useful and good parts, 20 years of patent protection is obviously not helpful”, says Sven Panke of the ETH in Zürich, Switzerland, one of the organizers of the third Synthetic Biology conference being held there next week. “It would be very helpful if we could find a good way to reward but not impede.”

Endy points out that patenting is by no means the only way to protect intellectual property – although it is certainly one of the most costly, and so suits lawyers nicely. Copyright is another way to do it – although even that might now be too binding (thanks to the precedents set by Disney on Mickey Mouse), and it’s not obvious how it might work for synthetic biology anyway.

Tailor-made contracts are another option, but Endy says they tend to be ‘leaky’. It may be that some form of novel, bespoke legal framework would work best, but that could be expensive too.

Intellectual property is prominently on the agenda at next week’s Zürich conference. But Panke says “we are going to take a look at the issue, but we will not solve it. In Europe we are just starting to appreciate the problem.”

References
1. Rai, A. & Boyle, J. PLoS Biology 5(3), e58 (2007).
2. Elowitz, M. B. & Leibler, S. Nature 403, 335-338 (2000).
3. Levskaya, A. et al. Nature 438, 441 (2005).
4. Ro, D.-K. et al. Nature 440, 940-943 (2006).

Wednesday, June 20, 2007

NATO ponders cyberwarfare
[If I were good at kidding myself, I could imagine that NATO officials read my previous muse@nature.com article on the recent cyberattacks on Estonia. In any event, they seem now to be taking seriously the question of how to view such threats within the context of acts of war. Here’s my latest piece for Nature Online News.]

Attacks on Estonian computer networks have prompted high-level discussion.

Recent attacks on the electronic information networks of Estonia have forced NATO to consider the question of whether this form of cyberattack could ever be construed as an act of war.

The attacks on Estonia happened in April in response to the Baltic country’s decision to move a Soviet-era war memorial from the centre of its capital city Tallinn. This was interpreted by some as a snub to the country’s Soviet past and to its ethnic Russian population. The Estonian government claimed that many of the cyberattacks on government and corporate web sites, which were forced to shut down after being swamped by traffic, could be traced to Russian computers.

The Russian government denied any involvement. But NATO spokesperson James Appathurai says of the attacks that “they were coordinated; they were focused, [and] they had clear national security and economic implications for Estonia.”

Estonia is one of the most ‘wired’ countries in Europe, and renowned for its expertise in information technology. Earlier this year it conducted its national elections electronically.

Last week, NATO officials met at the alliance headquarters in Brussels to discuss how such cyberattacks should be dealt with. All 26 of the alliance members agreed that cyberdefence needs to be a top priority. “Urgent work is needed to enhance the ability to protect information systems of critical importance to the Alliance against cyberattacks”, said Appathurai.

The Estonian experience seems to have sounded a wake-up call: previous NATO statements on cyberdefence have amounted to little more than identifying the potential risk. But the officials may now have to wrestle with the problem of how such an attack, if state-sponsored, should be viewed within the framework of international law on warfare.

Irving Lachow, a specialist on information warfare at the National Defense University in Washington, DC, says that, in the current situation, it is unclear whether cyberattack could be interpreted as an act of war.

“My intuition tells me that cyberwarfare is fundamentally different in nature”, he says. “Traditional warfare is predicated on the physical destruction of objects, including human beings. Cyberwarfare is based on the manipulation of digital objects. Of course, cyberattacks can cause physical harm to people, but they must do so through secondary effects.”

But he adds that “things get trickier when one looks at strategic effects. It is quite possible for cyber attacks to impact military operations in a way that is comparable to physical attacks. If you want to shut down an air defense site, it may not matter whether you bomb it or hack its systems as long as you achieve the same result. Thus it is quite conceivable that a cyberattack could be interpreted as an act of war – it depends on the particulars.”

Clearly, then, NATO will have plenty to talk about. “I think it’s great that NATO is focused on this issue”, says Lachow. “Going through the policy development process will be a useful exercise. Hopefully it will produce guidelines that will help with future incidents.”

Thursday, June 07, 2007


Is this Chaucer’s astrolabe?

[This is the pre-edited version of my latest article for Nature’s online news. The original paper is packed with interesting stuff, and makes me want to know a lot more about Chaucer.]

Several astronomical instruments have been misattributed to the medieval English writer

Want to see the astrolabe used for astronomical calculations by Geoffrey Chaucer himself? You’ll be lucky, says Catherine Eagleton, a curator at the British Museum in London.

In a paper soon to be published in the journal Studies in History and Philosophy of Science [doi:10.1016/j.shpsa.2007.03.006], she suggests that the several astrolabes alleged to be ‘Chaucer’s own’ are probably nothing of the sort.

It’s more likely, she says, that these are instruments made after Chaucer’s death according to the design the English scholar set out in his Treatise on the Astrolabe. Such was Chaucer’s reputation in the centuries after his death that instrument-makers may have taken the drawings in his book as a blueprint.

Eagleton thinks that the claims are therefore back to front: the instruments alleged to be Chaucer’s own were not the ones he used for his drawings, but rather, the drawings supplied the design for the instruments.

Born around 1343, Chaucer is famous now for his literary work known as The Canterbury Tales. But he was a man of many interests, including the sciences of his age: the Tales demonstrate a deep knowledge of astronomy, astrology and alchemy, for example.

He was also a courtier and possibly acted as a spy for England’s King Richard II. “He was an intellectual omnivore”, says Eagleton. “There’s no record that he had a formal university education, but he clearly knows the texts that academics were reading.”

There are several ‘Chaucerian astrolabes’ that have characteristic features depicted in Chaucer’s treatise. Sometimes this alone has been used to date the instruments to the fourteenth century and to link them more or less strongly to Chaucer himself.

An astrolabe is an instrument shaped rather like a pocket watch, with movable dials and pointers that enable the user to calculate the positions of the stars and the Sun at a particular place and time: a kind of early astronomical computer. It is simultaneously a device for timekeeping, determining latitude, and making astrological forecasts.
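For the curious, the core relation that an astrolabe mechanizes can be written down in a few lines. This rough Python sketch (my own illustration, with made-up example numbers) shows how a measured altitude of the Sun, together with the observer’s latitude and the Sun’s declination for the date, fixes the hour angle and hence the time of day.

# Rough sketch of the spherical-astronomy relation behind an astrolabe:
# sin(alt) = sin(lat)*sin(dec) + cos(lat)*cos(dec)*cos(H), solved here for the hour angle H.
import math

def hour_angle_from_altitude(altitude_deg, latitude_deg, declination_deg):
    alt, lat, dec = (math.radians(x) for x in (altitude_deg, latitude_deg, declination_deg))
    cos_H = (math.sin(alt) - math.sin(lat) * math.sin(dec)) / (math.cos(lat) * math.cos(dec))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_H))))

# Example: the Sun 30 degrees above the horizon, seen from London (latitude ~51.5 N)
# near an equinox (declination ~0). Dividing the hour angle by 15 gives hours from local noon.
H = hour_angle_from_altitude(30.0, 51.5, 0.0)
print("hour angle: %.1f degrees, i.e. about %.1f hours before or after local noon" % (H, H / 15))

The brass instrument does the same job geometrically, with engraved scales and a rotating star map standing in for the trigonometry.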

The astrolabe may have been invented by the ancient Greeks or Indians, but Islamic scientists made its construction a fine art. The designs shown in Chaucer’s books have some distinctive features – in particular, capital letters marking the 24 hours of the day, and a symbol of a dog to represent the Dog Star.

Eagleton says that these correspondences have led some collectors and dealers to claim that a particular instrument could be the very one Chaucer held in his hand. “There are probably four or five of these around”, she says, “but no one needs five astrolabes.”

“There’s a real tendency to link any fourteenth-century instrument to him”, she says, adding that museum curators are usually more careful. The British Museum doesn’t make such strong claims for its own ‘Chaucerian astrolabes’, for example. (The one shown above is called the ‘Chaucer astrolabe’, and is unusual in being inscribed with a pre-Chaucerian date of 1326 – but the museum is suitably cautious about what it infers from that.)

But Eagleton says that an instrument held at Merton College in Oxford University is generally just called ‘Chaucer’s astrolabe’, with the implication that it was his.

She says that none of the ‘Chaucerian astrolabes’ can in fact be definitively dated to the fourteenth century, and that all four of those she studied closely, including another Chaucerian model at the British Museum and one at the Museum of the History of Science in Oxford called the Painswick astrolabe, have features that suggest they were made after Chaucer’s treatise was written. For example, the brackets holding the rings of these two astrolabes have unusual designs that could be attempts to copy the awkward drawing of a bracket in Chaucer’s text, which merges views from different angles. The treatise, in other words, came before the instruments.

“It is extremely unlikely that any of the surviving instruments were Chaucer’s own astrolabe”, she concludes.

So why have others thought they were? “There is this weird celebrity angle, where people get a bit carried away”, she says. “It is always tempting to attach an object to a famous name – it’s a very human tendency, which lets us tell stories about them. But it winds me up when it’s done on the basis of virtually no evidence.”

This isn’t a new trend, however. Chaucer was already a celebrity by the sixteenth century, so that a whole slew of texts and objects became attributed to him. This was common for anyone who became renowned for their scholarship in the Middle Ages.

Of course, whether or not an astrolabe was ‘Chaucer’s own’ would be likely to affect the price it might fetch. “This association with Chaucer probably boosts the value”, says Eagleton. “I might be making myself unpopular with dealers and collectors.”

Monday, June 04, 2007


Tendentious tilings

[This is my Materials Witness column for the July issue of Nature Materials]

Quasicrystal enthusiasts may have been baffled by a rather cryptic spate of comments and clarifications following in the wake of a recent article claiming that medieval Islamic artists had the tools needed to construct quasicrystalline patterns. That suggestion was made by Peter Lu at Harvard University and Paul Steinhardt at Princeton (Science 315, 1106; 2007). [See my previous post on 23 February 2007] But in a news article in the same issue, staff writer John Bohannon explained that these claims had already caused controversy, being allegedly anticipated in the work of crystallographer Emil Makovicky at the University of Copenhagen (Science 315, 1066; 2007).

The central thesis of Lu and Steinhardt is that Islamic artists used a series of tile shapes, which they call girih tiles, to construct their complex patterns. These tiles can be used to make patterns of interlocking pentagons and decagons with the ‘forbidden’ symmetries characteristic of quasicrystalline metal alloys, in which these apparent symmetries, evident in diffraction patterns, are permitted by a lack of true periodicity.

Although nearly all of the designs evident on Islamic buildings of this time are periodic, Lu and Steinhardt found that those on a fifteenth-century shrine in modern-day Iran can be mapped almost perfectly onto another tiling scheme, devised by mathematician Roger Penrose, which does generate true quasicrystals.
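To see how an aperiodic pattern of this kind can be built from a small kit of shapes, here is a short illustrative Python sketch – not Lu and Steinhardt’s girih construction, and not Makovicky’s analysis, just the textbook Penrose subdivision – that repeatedly splits two triangular half-tiles in golden-ratio proportions; iterating the rule fills the plane with a pattern that never repeats.

# Illustrative only: generate a Penrose (P3) tiling by recursive subdivision of
# 'red' and 'blue' Robinson triangles (half-rhombi). Cuts are made in golden-ratio
# proportions, which is what prevents the pattern from ever repeating periodically.
import cmath
import math

PHI = (1 + math.sqrt(5)) / 2  # the golden ratio

def subdivide(triangles):
    new = []
    for colour, A, B, C in triangles:
        if colour == 'red':
            P = A + (B - A) / PHI
            new += [('red', C, P, B), ('blue', P, C, A)]
        else:
            Q = B + (A - B) / PHI
            R = B + (C - B) / PHI
            new += [('blue', R, C, A), ('blue', Q, R, B), ('red', R, Q, A)]
    return new

# Start from a 'wheel' of ten red triangles around the origin, then subdivide repeatedly.
triangles = []
for i in range(10):
    B = cmath.rect(1, (2 * i - 1) * math.pi / 10)
    C = cmath.rect(1, (2 * i + 1) * math.pi / 10)
    if i % 2 == 0:
        B, C = C, B  # mirror every other triangle so the wheel closes up
    triangles.append(('red', 0j, B, C))

for generation in range(5):
    triangles = subdivide(triangles)
print(len(triangles), "triangles after 5 subdivisions")

The girih tiles are different shapes, but the point at issue in the dispute is whether Islamic artists exploited an analogous subdivision scheme.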

But in 1992 Makovicky made a very similar claim for a different Islamic tomb dating from 1197. Some accused Lu and Steinhardt of citing Makovicky’s work in a way that did not make this clear. The authors, meanwhile, admitted that they were unconvinced by Makovicky’s analysis and didn’t want to get into an argument about it.

The dispute has ruffled feathers. Science subsequently published a ‘clarification’ that irons out barely perceptible wrinkles in Bohannon’s article, while Lu and Steinhardt attempted to calm the waters with a letter in which they ‘gladly acknowledge’ earlier work (Science 316, 982; 2007). It remains to be seen whether that will do the trick, for Makovicky wasn’t the only one upset by their paper. Design consultant Jay Bonner in Santa Fe has also made previous links between Islamic patterns and quasicrystals.

Most provocatively, Bonner discusses the late-fifteenth-century Topkapi architectural scroll that furnishes the key evidence for Lu and Steinhardt’s girih scheme. Bonner points out how this scroll reveals explicitly the ‘underlying polygonal sub-grid’ used to construct the pattern it depicts. He proposes that the artists commonly used such a polygonal matrix, composed of tile-like elements, and demonstrates how these can create aperiodic space-filling designs.

Bonner does not mention quasicrystals, and his use of terms such as self-similarity and even symmetry does not always fit easily with that of physicists and mathematicians. But there’s no doubting that his work deepens the ‘can of worms’ that Bohannon says Lu and Steinhardt have opened.

All this suggests that the satellite conference of the forthcoming European Crystallographic Meeting in Marrakech this August, entitled ‘The enchanting crystallography of Moroccan ornaments’, might be more stormy than enchanting – for it includes back-to-back talks by Makovicky and Bonner.

Friday, May 25, 2007

Does this mean war?
[This is my latest article for muse@nature.com]

Cyber-attacks in the Baltic raise difficult questions about the threat of state-sponsored information warfare.

Is Estonia at war? Even the country’s leaders don’t seem sure. Over the past several weeks the Baltic nation has suffered serious attacks, but no one has been killed and it isn’t even clear who the enemy is.

That’s because the attacks have taken place in cyberspace. The websites of the Estonian government and political parties, as well as its media and banks, have been paralysed by tampering. Access to the sites has now been blocked to users outside the country.

This is all part of a bigger picture in which Estonia and its neighbour Russia are locked in bitter dispute sparked by the Soviet legacy. But the situation could provoke a reappraisal of what cyber-warfare might mean for international relations.

In particular, could it ever constitute a genuine act of war? “Not a single Nato defence minister would define a cyber-attack as a clear military action at present,” says the Estonian defence minister Jaak Aaviksoo — but he seems to doubt whether things should remain that way, adding that “this matter needs to be resolved in the near future.”

The changing face of war


When the North Atlantic Treaty was drafted in 1949, cementing the military alliance of NATO, it seemed clear enough what constituted an act of war, and how to respond. “An armed attack against one or more [member states] shall be considered an attack against them all,” the treaty declared. It was hard at that time to imagine any kind of effective attack that did not involve armed force. Occupation of sovereign territory was one thing (as the Suez crisis soon showed), but no one was going to mobilize troops in response to, say, economic sanctions or verbal abuse.

Now, of course, ‘war’ is itself a debased and murky term. Nation states seem ready to declare war on anything: drugs, poverty, disease, terrorism. Co-opting military jargon for quotidian activities is an ancient habit, but by doing so with such zeal, state leaders have blurred the distinctions.

Cyber-war is, however, something else again. Terrorists had already recognized the value of striking at infrastructures rather than people, as was clear from the IRA bombings of London’s financial district in the early 1990s, before the global pervasion of cyberspace. But now that computer networks are such an integral part of most political and economic systems, the potential effects of ‘virtual attack’ are vastly greater.

And these would not necessarily be ‘victimless’ acts of aggression. Disabling health networks, communications or transport administration could easily have fatal consequences. It is not scaremongering to say that cyberwar could kill without a shot being fired. And the spirit, if not currently the letter, of the NATO treaty must surely compel it to protect against deaths caused by acts of aggression.

Access denied

The attacks on Estonian websites, triggered by the government’s decision to relocate a Soviet-era war memorial, consisted of massed, repeated requests for information that overwhelmed servers and caused sites to freeze — an effect called distributed denial of service. Estonian officials claimed that many of the requests came from computers in Russia, some of them in governmental institutions.

Russia has denied any state involvement, and so far European Union and NATO officials, while denouncing the attacks as “unacceptable” and “very serious”, have not accused the Kremlin of orchestrating the campaign.

The attack is particularly serious for Estonia because of its intense reliance on computer networks for government and business. It boasts a ‘paperless government’ and even its elections are held electronically. Indeed, information technology is one of Estonia’s principal strengths – which is why it was able to batten down the hatches so quickly in response to the attack. In late 2006, Estonia even proposed to set up a cyber-defence centre for NATO.

There is nothing very new about cyber-warfare. In 2002 NATO recognized it as a potential threat, declaring an intention to “strengthen our capabilities to defend against cyber attacks”. In the United States, the CIA, the FBI, the Secret Service and the Air Force all have their own anti-cyber-terrorism squads.

But most of the considerable attention given to cyber-attack by military and defence experts has so far focused on the threat posed by individual aggressors, from bored teenage hackers to politically motivated terrorists. This raises challenges of how to make the web secure, but does not really pose new questions for international law.

The Estonia case may change that, even if (as it seems) there was no official Russian involvement. Military attacks often now focus on the use of armaments to disable communications infrastructure, and it is hard to see how cyber-attacks are any different. The United Nations Charter declares its intention to prevent ‘acts of aggression’, but doesn’t define what those are — an intentional decision so as not to leave loopholes for aggressors, which now looks all the more shrewd.

Irving Lachow, a specialist on information warfare at the National Defense University in Washington, DC, agrees that the issue is unclear at present. “One of the challenges here is figuring out how to classify a cyber-attack”, he says. “Is it a criminal act, a terrorist act, or an act of war? It is hard to make these determinations but important because different laws apply.” He says that the European Convention on Cyber Crime probably wouldn’t apply to a state-sponsored attack, and that while there are clear UN policies regarding ‘acts of war’, it’s not clear what kind of cyber-attack would qualify. “In my mind, the key issues here are intent and scope”, he says. “An act of war would try to achieve a political end through the direct use of force, via cyberspace in this case.”

And what would be the appropriate response to state-sanctioned cyber-attack? The use of military force seems excessive, and could in any case be futile. Some think that the battle will have to be joined online – but with no less a military approach than in the flesh-and-blood world. Computer security specialist Winn Schwartau has called for the creation of a ‘Fourth Force’, in addition to the army, navy, and air force, to handle cyberspace.

That would be to regard cyberspace as just another battleground. But perhaps instead this should be seen as further reason to abandon traditional notions about what warfare is, and to reconsider what, in the twenty-first century, it is now becoming.