Thursday, April 24, 2008


Buddha in oils?
[This is the pre-edited version of my latest news story for Nature.]

Painters on the Silk Road may have been way ahead of the Europeans.

Artists working in Afghanistan were using a primitive form of oil paint hundreds of years before it became common practice in Europe, a team of scientists has claimed.

Yoko Taniguchi of the National Research Institute for Cultural Properties in Tokyo and her coworkers have analysed samples of Buddhist paintings in caves at Bamiyan in Afghanistan, made in the mid-seventh and early eighth centuries AD. They say that the paint layers contain pigments apparently bound within so-called drying oils, perhaps extracted from walnuts and poppy seeds.

But Jaap Boon, a specialist in the chemical analysis of art at the Institute for Atomic and Molecular Physics in Amsterdam, the Netherlands, cautions that this conclusion must be seen as tentative until more detailed studies have been done.

The Bamiyan caves sit behind the gigantic statues of Buddha that were destroyed by the Taliban in 2001. The paintings, showing robed Buddhas and mythical creatures, were also defaced but not obliterated. The Bamiyan caves are now a designated UNESCO World Heritage site.

The researchers removed tiny samples of the painted surface (typically less than 1 mm across) for analysis using state-of-the-art techniques. These can reveal the chemical identity of the pigments and the materials used to bind them to a layer of earthen plaster on the cave walls.

Taniguchi’s collaborators used X-ray beams produced by the European Synchrotron Radiation Facility in Grenoble, France, to figure out the composition and crystal structures of pigment particles, deposited in a series of microscopically thin layers. The synchrotron facility produces extremely bright X-ray beams, which are essential for getting enough data from such small samples.

Meanwhile, spectroscopic methods, which identify molecular structures from the way their vibrations cause light absorption, were used to identify the organic components of the paint layers. The findings are described in a paper in the Journal of Analytical Atomic Spectrometry [1].

The researchers found pigments familiar from the ancient world, such as vermilion (red mercury sulphide) and lead white (lead carbonate). These were mixed with a range of binders, including natural resins, gums, possibly animal-skin glue or egg – and oils.

Boon suggests that this variety in itself raises concerns about potential contamination – microorganisms on the rock surface, say, or the fingerprints of people touching the paintings (something encouraged in Buddhist tradition).

He says that other techniques that really pin down what the organic molecules are should be applied before jumping to conclusions. With spectroscopy alone, he says, it can be difficult to tell egg from oils, let alone animal from plant oils.

But Marine Cotte of the Centre of Research and Restoration of the French Museums in Paris, a coauthor of the study, is convinced of the conclusions. She says that oils have an unambiguous spectroscopic signature, and adds that their molecular components have been confirmed by the technique of chromatography.

Oil painting is commonly said to have been invented by the Flemish painter Jan van Eyck and his brother Hubert in the fifteenth century. But while the van Eycks seem to have refined this technique to create stunningly rich and durable colours, the practice of mixing pigments with drying oils is known to be considerably older.

It is first mentioned in the late fifth century by the Byzantine writer Aetius, and a recipe for an oil varnish (in which a drying oil is mixed with natural resins) is listed in an eighth-century Italian manuscript.

In the twelfth century, a German Benedictine monk named Theophilus described how to make oil paints for painting doors. Oil paints from this period are also found on Norwegian churches.

Drying oils are slow to dry compared with the common medieval binders of egg yolk and size made from boiled animal hide, which initially led Western craftsmen to regard them as fit only for rather lowly uses.

So the use of oils in fine art as early as the seventh century is surprising – all the more so for painting on plaster-coated rock, where the translucency of oil paints would not be expected to recommend their use. ‘It doesn’t make a lot of sense to use oils’, says Boon. He says that it would be really difficult to keep the paint in good condition for a long time in an environment like this, exposed to damp, fungi and bacteria.

But Cotte says that the oils are found in deeper layers where contamination would not penetrate, while being laid over an opaque bottom or ‘ground’ layer.

It’s not clear who these artists were, the researchers say. They were probably travelling on the Silk Road between China and the Middle East, and may have been bringing with them specialist knowledge from China.

Cotte says that these studies should aid efforts to preserve the paintings. “It helps you do that if you know what is there”, she explains – this would identify the most appropriate cleaning procedures, for example.

Reference

1. Cotte, M. et al., J. Anal. At. Spectrom. (in press, 2008)

Monday, April 21, 2008

Journeys in musical space
[This is one of the most stimulating things I’ve read for some time (not my article below, published on Nature’s online news site, but the paper it discusses). The paper itself is tough going, but once Dmitri Tymoczko explained to me where it was headed, the implications it opened up are dizzying – basically, that music is an exploration of complex geometries, giving us an intuitive feel for these spaces that we probably couldn’t get from any other kind of sensory input.]

Researchers map out the geometric structure of music.

To most of us, a Mozart piano sonata is an elegant succession of notes. To composer and music theorist Dmitri Tymoczko of Princeton University and his colleagues Clifton Callender and Ian Quinn, it is a journey in multidimensional space that can be described in the language of geometry and symmetry.

In a paper in Science, the trio offer nothing less than a way of mapping out all of pitched music (music which is not constructed from unpitched sounds like percussion), whether it is by Monteverdi or Motörhead.

Commenting on the work, mathematician Rachel Wells Hall of Saint Joseph’s University in Philadelphia says that it opens up new directions in music theory, and could inspire composers to explore new kinds of music. It might even lead to the invention of new musical instruments, she says.

Although the work uses some fearsome maths, it is ultimately an exercise in simplification. Tymoczko and colleagues have looked for ways of representing geometrically all the equivalences that musicians recognize between different groups or sequences of notes, so that for example C-E-G and D-F#-A are both major triads, or C-E-G played in different octaves is considered basically the same chord.

By recognizing these equivalences, the immense number of possible ways of arranging notes into melodies and chord sequences can be collapsed from a multidimensional universe of permutations into much more compact spaces. The relationships between ‘musical objects’ made of small groupings of notes can then be understood in geometric terms by mapping them onto the shape of the space. Musical pieces may be seen as paths through this space.

It may sound abstract, but the idea brings together things that composers and musicologists have been trying to do in a fragmentary manner for centuries. The researchers say that all music interpretation involves throwing away some information so that particular musical structures can be grouped into classes. For example, playing ‘Somewhere Over the Rainbow’ in the key of G rather than, as originally written, the key of E flat, involves a different sequence of notes, but no one is going to say it is a different song on that account.

The researchers say there are five common kinds of transformation like this that are used in judging equivalence in music, including octave shifts, reordering of notes (for example, in inversions of chords, such as C-E-G and E-G-C), and duplications (adding a higher E to those chords, say). These equivalences can be applied individually or in combination, giving 32 different ways in which, say, two chords can be considered ‘the same’.

Such symmetries ‘fold up’ the vast space of note permutations in particular ways, Tymoczko explains. The geometric spaces that result may still be complex, but they can be analysed mathematically and are often intuitively comprehensible.

“When you’re sitting at a piano”, he says, “you’re interacting with a very complicated geometry.” In fact, composers in the early nineteenth century were already implicitly exploring such geometries through music that could not have been understood using the mathematics of the time.

In these folded-up spaces, classes of equivalent musical objects – three-note chords, say, or three-note melodies – can each be represented by a point. One point in the space that describes three-note chord types (which is cone-shaped) corresponds to major triads, such as C-E-G, another to augmented triads (major chords in which the fifth is raised by a semitone), and so on.

Where does this musical taxonomy get us? The researchers show that all kinds of musical problems can be described using their geometric language. For example, it provides a way of evaluating how related different sequences of notes or chords are, and thus whether or not they can be regarded as variations of a single musical idea.

“We can identify ways chord sequences can be related that music theorists haven’t noticed before”, says Tymoczko. For example, he says the approach reveals how a chord sequence used by Claude Debussy in 'L’Après-Midi d’un Faune' is related to one used slightly earlier by Richard Wagner in the prelude to 'Tristan und Isolde' – something that isn’t obvious from conventional ways of analysing the two sequences.

Clearly, Debussy couldn’t have known of this mathematical relationship to Wagner’s work. But Tymoczko says that such connections are bound to emerge as composers explore the musical spaces. Just as a mountaineer will find that only a small number of all the possible routes between two points are actually negotiable, so musicians will have discovered empirically that their options are limited by the underlying shapes and structures of musical possibilities.

“Music theorists have tended to regard the nineteenth-century experiments in harmony as unmotivated whimsy”, says Tymoczko. But his geometric scheme suggests that they were much more rational than that, governed by rigorous rules that their new approach can now uncover.

For example, the scheme supplies a logic for analysing how so-called voice leading works in chord progressions. This describes the way in which a sequence of chords with the same number of notes can be broken apart into parallel melodic lines. The progression C-E-G to C-F-A, say, can be thought of as three melodic lines: the E moves to F and the G to A, while the C stays put. Finding efficient and effective voice-leading patterns has long been a challenge for composers and music theorists. But in the geometric scheme, a step from one chord to another becomes a movement in musical space between two points separated by a well-defined distance, and one can discover the best routes.
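
To make that notion of distance concrete, here is a minimal sketch in Python (my own illustration, not the authors’ code) that sizes a voice leading by the total number of semitone steps, treating notes an octave apart as equivalent and trying every possible pairing of voices. The taxicab measure and the brute-force search are simplifying assumptions made for clarity; the paper itself allows other choices of metric.

from itertools import permutations

def voice_leading_size(chord_a, chord_b):
    # Pitches are numbers on a piano-style scale (middle C = 60, C sharp = 61, ...).
    # For every pairing of voices, move each note of chord_a to the nearest
    # octave-equivalent of its partner in chord_b, and add up the semitone steps.
    best = None
    for target in permutations(chord_b):
        steps = 0
        for a, b in zip(chord_a, target):
            steps += abs((b - a + 6) % 12 - 6)   # displacement to the nearest octave-equivalent
        if best is None or steps < best:
            best = steps
    return best

# C-E-G moving to C-F-A: the C stays put, the E rises one semitone, the G rises two.
print(voice_leading_size([60, 64, 67], [60, 65, 69]))   # prints 3

A composer hunting for smooth progressions is, in effect, hunting for short paths of this kind through the folded-up space.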

This is just one of the ways in which the new theory could not only illuminate existing musical works but could point to new ways of solving problems posed in musical composition, the researchers claim.

Reference
1. Callender, C. et al. Science 320, 346-348 (2008).

Sunday, April 20, 2008

NASA loses its (science) head, Pfizer loses its case
[This is my Lab Report column for the May issue of Prospect.]

The resignation of NASA’s science chief Alan Stern in April is a symptom of all that’s wrong with the US space agency. Stern has given no official reason for his abrupt departure, which of course makes it seem all the more likely that the reason is one he’d rather not talk about. Many suspect his decision stems from a frustrating relationship with NASA’s leadership, specifically its head Mike Griffin, despite Stern’s assertion that Griffin is “the best administrator NASA has ever had”. Stern’s aim to keep projects on schedule and within budget – both persistent problems for NASA – is hard to fault, but it has sometimes caused a collision of priorities.

A highly respected planetary scientist, Stern has been seen as a true voice of science at NASA, favouring projects that actually teach us something about the universe. But increasingly, NASA seems compelled to support popular programmes that pander to the romanticised American vision of space exploration. Griffin has frozen the budget for fundamental science to fund a manned return mission to the moon – a political rather than scientific venture. Stern also tried to reduce the emphasis of planetary missions on Mars, which has come at the expense of the outer planets.

The crunch seems to have come over Stern’s decision in March to shut down Opportunity, one of the two Mars rovers currently exploring the planet’s surface. Griffin was not informed of that decision, and when he found out, he reversed it. Whatever the demands of etiquette, Stern’s decision made sense: the rovers have been an immensely successful testament to the power of robotic exploration, but they have long since fulfilled their original objectives. Opportunity and Spirit can still gather useful data, but the real problem was that the public loves them: the planned shutdown became headline news and provoked objections in Congress.

The rovers are now portrayed like pets: newspapers talked about Opportunity being ‘put to sleep’ rather than switched off. This pathetic fallacy is a projection of the longing to put humans on Mars. The irony is that a populist commitment to cripplingly expensive human spaceflight projects will ultimately give the taxpayer far less value for money than the kind of missions Stern supported. For now, that kind of absurd sentimentality has deprived NASA of a highly capable head of science.

*****

When scientists submit papers for publication, they usually enter into an unwritten contract of confidentiality with the journal: the paper will not be disseminated outside the peer-review process, and the reviewers’ identities will not be disclosed to the authors.

The pharmaceutical company Pfizer has decided that this arrangement should be subordinate to its own interests. During a lawsuit last year over alleged side effects of its painkillers Celebrex and Bextra, it subpoenaed the New England Journal of Medicine (NEJM) to release the reviews and reviewers’ identities for papers published on the drugs, along with details of the journal’s internal editorial deliberations. The NEJM’s refusal has now been upheld by a federal court in Massachusetts.

Pfizer’s lawyers say that the information could help to exonerate the company in deciding to put the drugs on sale. Bextra was withdrawn in 2005 after claims that it could cause heart attacks and strokes; Celebrex remains on the market.

“The public has no interest in protecting the editorial process of a scientific journal”, the lawyers say. But the public has every interest in knowing that scientific claims will be checked out by independent experts who are not only guaranteed anonymity but also spared the danger of litigation. The best reviewers might otherwise decline the task rather than take that risk. A counter-argument is that information relevant to public health should not be kept confidential – but drug companies are after all under no obligation to disclose their own tests and trials.

Besides, Pfizer has not specified what it hoped to find in the documents. One interpretation is that the company is simply fishing for anything that might help its case, rather than acting on a belief that the NEJM holds some pivotal evidence. The court’s decision is the right one, but will it persuade drug companies that they cannot rewrite the rules by which science is conducted?

*****

The new head of the Human Fertilisation and Embryology Authority (HFEA), Renaissance historian Lisa Jardine, has certainly begun her role during ‘interesting times’. The impending vote on the Human Fertilisation and Embryology Bill crystallizes several moral dilemmas about today’s research and practice in these areas, and threatens to heighten the polarization they induce. Whatever positions Jardine takes are sure to upset some vocal group or other.

Perhaps this is why the appointment of someone used to taking the long view, and accustomed also to the hard knocks of public life, makes sense. Certainly, Jardine’s popularizing instincts seem right for the HFEA just now: she considers public education about fertility issues (“something people need to know about”) as important as the regulatory responsibilities. The HFEA, while not exactly an opaque bureaucracy, has seldom previously shown an explicit commitment to inform.

And now is the time to do it. So far, it seems that the kind of misinformation about the bill spread by Catholic officials and other religious groups – talk of animal-human ‘cybrid’ embryos in research as ‘of Frankenstein proportion’ – has not significantly dented a public appreciation of the benefits such research could bring. (The ‘animal’ component here is a mere shell for human genes.) But it’s never a good idea to underestimate the determination of zealots.

Tuesday, April 15, 2008



On their way to a bookshop near you

Well look, you don't seriously think I'm going to go to all this effort if I do not allow myself a bit of advertising now and again. These two books - Universe of Stone (Bodley Head), a study of the twelfth-century renaissance through the prism of Gothic architecture, and The Sun and Moon Corrupted (Portobello), a novel - are on their way to the warehouses as I write. I have gleaming new copies of both books beside me now, and believe me, you should judge these ones by their covers. Oh, you don't need me to put an Amazon link here, do you?

You can hear me talking about the book on Gothic (and a mixed grill of other things, including creationism) on the latest Guardian science podcast (one "l" please).

And there is now a reissue of my book Bright Earth: The Invention of Colour available from Vintage, with a bright and bubbly new cover.
Radio sweat gland - 90 GHz

[Given that part of the point of this blog is to add a bit of value to stuff I publish elsewhere, I thought it was worth putting up this piece that appears this week in a necessarily abbreviated form in Nature's news pages. In particular, it's a shame not to hear from Merla, who is close to this topic, and more from the famous Paul Ekman.]


Sweating – a sign of recent physical activity and, often, of mental stress – can be detected from a distance by a beam of millimetre-wavelength radiation, a team in Israel claims[1].

They have shown that sweat ducts in human skin act like an array of tiny antennas that pick up radiation at frequencies of about 100 gigahertz – the so-called extremely high frequency or EHF range, lying between microwaves and terahertz radiation. The antenna behaviour is all down to the ducts’ curious shape: they thread through the epidermis as regular helices. Filled with electrically conductive sweat, these channels act a little like coils of wire that absorb radiation across the millimetre and sub-millimetre wavelength band.
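
A quick sanity check on those numbers, using nothing more than wavelength = speed of light / frequency (my own back-of-envelope arithmetic, not the paper’s), shows why a band around 100 GHz counts as millimetre-wave:

c = 2.998e8                            # speed of light, in metres per second
for f_ghz in (90, 100, 300):
    wavelength_mm = c / (f_ghz * 1e9) * 1000
    print(f"{f_ghz} GHz -> wavelength {wavelength_mm:.1f} mm")
# 90-100 GHz corresponds to a wavelength of roughly 3 mm, squarely in the
# millimetre-wave band; by about 300 GHz you are entering the sub-millimetre
# (terahertz) territory mentioned above.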

Yuri Feldman of the Hebrew University of Jerusalem and his colleagues measured the reflection of EHF radiation from skin on the palms of subjects after 20 minutes of jogging, and found a strong band of absorption that was absent in subjects who had not exercised. The absorption gradually disappeared as the jogging group rested. They also found that the reflection signals were proportional to blood pressure and pulse rate, known to be indicators of physiological stress.

And when the researchers suppressed sweating with a synthetic compound that mimics the localized paralysis of snake venom, inactivating the sweat glands, they found that the EHF absorption was lower.

Feldman and colleagues say that the helical antenna array makes skin a kind of biological metamaterial, in which the material’s response to electromagnetic radiation is determined by structure rather than composition. Metamaterials made from arrays of tiny electrical circuits are being explored for applications ranging from super-lenses to invisibility shields. “Nature has done what is being attempted extensively today in nanophotonics”, say the researchers.

Arcangelo Merla, who works on biomedical imaging of arousal states at the University ‘G. D’Annunzio’ in Chieti, Italy, calls the work “quite fascinating” and agrees that “it may open an alternative way for remote sensing of this important physiological phenomenon”.

Feldman and his colleagues speculate that their technique could be used to gauge people’s mental state from a distance, perhaps even without their knowing. “This effect might be used for biomedical applications and homeland security applications”, the Israeli team say. Detection of sweating hands has previously been used in lie detection, although the use of such physiological parameters in ‘polygraph’ lie detectors has become controversial after being strongly criticized in a 2002 report by the US National Academy of Sciences.

“Perspiration is related to increases in emotional arousal”, agrees Paul Ekman, a psychologist in Oakland, California, and one of the authors of that report. “But as with other measures of arousal, such as heart rate, it can be the consequence of many different mental processes. In terms of lying, arousal measures only tell you the person is aroused. Suppose you did not kill your spouse but the police are interrogating you: wouldn't you be aroused? The fear of being caught looks just like the fear of being disbelieved.”

Merla also points out that emotional sweating is driven and controlled in a different way from thermoregulatory sweating caused by exercise. He adds that “understanding a mental state from measures of peripheral activity is a very complex task”. He says that it would be inappropriate to apply the technique to lie detection and monitoring of stress and anxiety unless one combines it with other indicators of arousal. He is developing infrared thermal imaging of skin to determine several such measures simultaneously.

So far, however, Feldman and his colleagues are cautious about whether the idea will work at all, let alone how it might be applied. “We must first to evaluate the limits of performance – for example, what is the range at which we can detect a meaningful signal, how fast changes in the various biometrical parameters we want to monitor are manifested in our signal, and so on”, they say. “We are just starting our journey in these uncharted waters.”

Reference

1. Feldman, Y. et al. Phys. Rev. Lett. 100, 128102 (2008).

Friday, April 04, 2008

Astrology’s myopia
[Do I make rods for my own back? I suspect astrologers will respond to this piece, just published as a Muse column for Nature news, by saying that I clearly haven’t understood what astrology is really about or how it is meant to work. That’s because they have no idea about it themselves – the maze of different theories and traditions is a nightmare. But I think I do know what astrology used to be about, back in the days when it was arm in arm with astronomy.]

Seasonal effects on the physiology of newborns inevitably raise the spectre of astrology. But that’s just ahistorical nonsense.

Near-sightedness, or myopia, may be more common in babies born in the summer than the winter, a team of scientists in Israel have claimed [1].

This is just the latest in a string of suggestions that the season of our birth may affect our physical make-up. Among recent findings of this kind are reports of seasonal effects in fingerprint patterns [2] and in animal gestation length and birth weight [3][4].

Like these earlier claims, the seasonality of myopia seems an entirely reasonable thing to suppose, since there is already evidence that exposure to strong light both before and shortly after birth affects the ability of the eye to focus properly. The effect, identified by Yossi Mandel of the Israel Defence Force Medical Corps and colleagues, is small, and seems to kick in only for moderate to severe cases of myopia, which are probably preconditioned by a genetic susceptibility.

So far, so plausible. But you know what I’m thinking? How long before this result is touted as ‘further’ evidence that there is something to astrology after all – that the celestial configuration can imprint itself on our bodies and minds?

We can surely expect this finding to be added to the growing list of scientific findings, so far including sunspot cycles, animal navigation, solar-terrestrial climate correlations and even Gaia theory, that some astrologers have presented as evidence not only that science supports astrology but that science is trying to appropriate astrology’s key ideas.

This isn’t the kind of thing one can nip in the bud, and I don’t delude myself otherwise. Let me say simply that all these Cancerians whose poor vision has no doubt made them introverted, bespectacled bookish types are presumably born in the Northern Hemisphere, since one must anticipate that the myopia effect appears in January in the antipodes.

No, I think it is perhaps more edifying to consider why astrologers want to draw solace from science at all. Most notoriously, they cite the statistical studies of French psychologist Michel Gauquelin, who claimed to show in the 1950s that more successful sportspeople and athletes were born when Mars was “rising or culminating” – just as you might expect for the ‘warrior’ zodiacal sign, after all.

This ‘Mars effect’ can be found echoed in a recent claim that English football league players are almost twice as likely to be born between September and November. (Sceptics might wonder whether the fact that those birth dates make British boys older and thus often bigger than their school peers has anything to do with it.)

Actually, Gauquelin himself called horoscopes an “exploitation of public credulity”. But his research was extolled by the British astronomer Percy Seymour, who has argued in several books (most recently The Scientific Proof of Astrology (2004)) that the configurations of the planets, moon and sun can leave an imprint on us via their magnetic fields. “The whole solar system is playing a symphony on the Earth’s magnetic field”, he says; the ‘interference’ of these fields somehow affects the development of babies’ brains in the womb.

Oh, I know. It is only the thought of countless astrologers saying “He had no arguments against it” that rouses me to point out that a fridge in the average household generates a stronger magnetic field than Jupiter does, or that there is not the slightest reason to believe that exposure to magnetic fields can alter infants’ personalities in consistent, or indeed any, ways. But I’m not going to preach to the choir.

No, I happen to think that the truth about astrology is more interesting than this kind of silliness. I’d argue that one cannot simultaneously afford astrology its proper place in the history of thought and still believe in it today.

To say (as many scientists might) that astrology has always been nonsense is to say something more or less without meaning. No one can reasonably say that Aristotelian science was nonsense; it was a best guess that proved to be wrong. The same is true of astrology.

It relied on two principles: a correspondence between the macrocosm and the microcosm (“As above, so below”), and on the action of ‘hidden’ (occult) forces. The latter was a perfectly valid assumption: there was nothing to ‘see’, no bodies in contact, that explained magnetism or gravitation. The former – in part, the idea that events in the heavens governed those on Earth, perhaps by some form of astral ‘emanation’ – was part of a long tradition, dating back at least as far as Babylonia, for which the tides and the seasons supplied corroboration.

Yes, the tradition was mistaken, but not unmotivated. Certainly, it is a whole lot less arbitrary than the practice of modern astrologers who have allowed the whims of an astronomical nomenclature committee to determine the astrological virtues of the Centaur planetoid Chiron, discovered in 1977 – named after a mythical centaur renowned for skill at healing, this captured outer asteroid is therefore now associated with astrological healing powers.

In any event, one foregoes the right to claim any justification for these ancient beliefs in modern science if one does not accept what those scientific explanations rule out too. When astrologers say (as one did apropos of Seymour’s work) that the moon affects the oceans and so why not our predominantly watery bodies, they are in effect disqualifying themselves from using gravity as an explanatory mechanism. (Just do the sums.)
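
For anyone who fancies actually doing the sums, here is a rough sketch in Python – the comparison object and the round numbers are mine, purely for illustration – of the Moon’s tide-raising effect on something roughly a metre across (you, say) set against that of a car parked a few metres away.

G = 6.674e-11                      # gravitational constant, m^3 kg^-1 s^-2

def tidal_acceleration(mass_kg, distance_m, size_m=1.0):
    # The tide-raising effect is the *difference* in gravitational pull across
    # an object of a given size: roughly 2*G*M*size / distance^3.
    return 2 * G * mass_kg * size_m / distance_m**3

moon = tidal_acceleration(7.35e22, 3.84e8)   # the Moon, at its average distance
car = tidal_acceleration(1.5e3, 3.0)         # a 1.5-tonne car parked 3 metres away
print(f"Moon: {moon:.1e} m/s^2   car: {car:.1e} m/s^2")
# The car wins by a factor of tens of thousands - and neither comes remotely
# close to doing anything measurable to the water in your body.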

More seriously, astrologers who might want to seize on the latest scientific findings, whether of summer-induced myopia or seasonality of sporting prowess, as proof of their beliefs are like theologians hunting for God in dark energy, or indeed scientists seeking to rationalize biblical miracles: they misunderstand the function of those beliefs in the history of ideas. Astrology ‘worked’ when embedded in ancient and medieval cosmologies, which were not scientific but metaphysical. The only meaningful point of scientific continuity between historical and contemporary astrology is not about finding new physical mechanisms for how it ‘works’ but about asking whether the psychological motivations for such convictions – most probably, a need to find meaning in and control of one’s life – remain the same.

But there’s probably more too. Astrology endures, according to social critic Theodore Roszak, because of the inspirational appeal of its rich, venerable imagery. “It has poetry and philosophy built into it”, he says. He’s right about that. All it lacks is veracity.

References

1. Mandel, Y. et al., Ophthalmology doi:10.1016/j.ophtha.2007.04.040 (2008).
2. Kahn, H. S. et al., Am. J. Hum. Biol. 20, 59-65 (2008).
3. Davis, G. H. et al. Anim. Reprod. Sci. 46, 297-303 (1997).
4. Jenkinson, C. M. C. et al. New Zeal. J. Agric. Res. 38, 337-345 (1995).

Wednesday, April 02, 2008


Medieval instrument suggests astronomical knowledge was widespread
[Here’s a story I picked up on a recent visit to the wonderful Museum of the History of Science in Oxford. It is published in Nature today. To my great embarrassment and dismay, while I was at Merton College many years ago, I knew of neither the museum nor of Merton’s ‘Chaucerian’ astrolabe mentioned below. But I was just a nipper then.]

UK museum seeks cash to keep a rare miniature astrolabe in public hands.

The fate of a fourteenth-century pocket calculator is hanging in the balance, while the British Museum in London attempts to raise the £350,000 needed to acquire this extremely rare archaeological find before the deferral of an export licence expires, releasing it for private sale.

The device, a brass astrolabe quadrant that can tell the time from the position of the sun, calculate the heights of tall objects, and work out the date of Easter, opens a new window on the mathematical and astronomical literacy of the Middle Ages, experts say.

Most surviving astrolabes are larger and more elaborate, and include other functions such as astrological calculations. Their use tended to be highly specialized, confined mostly to academic settings.

But the new quadrant is a simple, everyday item – the kind of thing a cleric or a merchant would have carried with them for convenient time-keeping. All the same, says Jim Bennett, director of the Museum of the History of Science in Oxford, “you had to know some astronomy to work one of these devices.” Bennett was an expert witness in the hearings that deferred a private sale. He says the device suggests that, at least within some parts of fourteenth-century society, “people had a closer astronomical awareness than we do now.”

Most intriguingly of all, the quadrant was found at Canterbury, and has been dated quite narrowly to about 1388, just before Geoffrey Chaucer began to write his Canterbury Tales. Chaucer was highly informed about astronomy and astrology, and in 1391 he wrote a treatise on the astrolabe that became the standard reference text for several centuries.

The Canterbury quadrant was found in 2005, when excavations revealed it beneath a series of clay floors on the site of an old inn called the House of Agnes, just outside the city walls on the main road to London. It had lain there for over 600 years. Conceivably it was lost at the inn by a merchant travelling to or from Canterbury, rather like Chaucer’s pilgrims.

The quadrant was initially put up for sale in 2007 by the auctioneers Bonhams, where it was expected to fetch £60,000-100,000. But subsequent dealings led to an agreed sale at a price of about £350,000 (neither dealer nor buyer has been publicly disclosed).

Because of the perceived cultural importance of the object, however, it was considered by the UK’s Reviewing Committee on the Export of Works of Art and Objects of Cultural Interest, which recommended to the government’s Culture Minister Margaret Hodge that granting of an export licence should be delayed until June 2008, giving time for the British Museum to try to buy the instrument for its forthcoming Medieval Gallery. Such decisions are usually applied to works of fine art, not to scientific items, Bennett says.

The quadrant was clearly made for use in England – it works only within a certain range of latitudes. It has design features that would appeal to medieval gadget freaks, Bennett says, such as a moveable eagle that indicates the date of Easter. “Relatively everyday pieces like this don’t usually survive”, he adds. A quadrant astrolabe kept in the library of Merton College in Oxford, one of the few other such instruments known, is considerably more elaborate and geared for academic use.

The existence of this simple, practical device sheds new light on Chaucer’s treatise. He dedicated it to his son – a gesture that has left scholars wondering whether this was just a literary affectation, since it seemed a little hard to believe that Chaucer could have expected a young person to appreciate an astrolabe’s uses. But the Canterbury quadrant “supports the idea that Chaucer could write such a treatise at a popular level”, says Bennett. “It suggests that this kind of knowledge wasn’t too arcane or academic.”
Science made simple

Starting, I think, on 14 April, the UK's Independent newspaper is issuing a series of booklets called 'Science Made Simple', based on extracts from the Very Short Introductions series published by Oxford University Press. As you might guess, I am offering this little advert for them because some are written by me. They've chosen extracts from my volumes on molecules and the elements (originally published as Stories of the Invisible (2001) and The Ingredients (2002)) for three of the booklets, on 15, 16 and 20 April. Of course, the others will be splendid too, ranging from Earth sciences to cosmology to the brain.

Tuesday, March 25, 2008

On hobbits and Merlin
[This is my latest Lab Report column for Prospect.]

In a hole in the ground there lived a hobbit. But the rest of this story is not fit for children, mired in accusations of grave-robbing and incompetence. The ‘hobbits’ in question, some just three feet tall, have allegedly been found in caves on islands of the Palauan archipelago in Micronesia. Or rather, their bones have, dating to around 1400 years ago. The discoverers, Lee Berger of the University of Witwatersrand in South Africa and his colleagues, think they shed new light on the diminutive Homo floresiensis remains discovered in Indonesia in 2003, which are widely believed to be a new species that lived until 13,000 years ago. If relatively recent humans can be this small, that belief could be undermined. Berger thinks that the smallness of H. floresiensis might be dwarfism caused by a restricted diet and lack of predators on a small island.

But others say Berger’s team are misrepresenting their find. Some claim the bones could be those of individuals no smaller than ‘pygmy’ groups still living in the Philippines, or even of children, and so are nothing to get excited about. And the new species status of H. floresiensis does not rest on size alone, but on detailed anatomical analysis.

On top of these criticisms, Berger’s team faces accusations of cultural insensitivity for prodding around in caves that locals regard as sacred burial places. To make matters worse, Berger’s work was partly funded by the National Geographic Society, which made a film about the study that was released shortly before Berger’s paper appeared in the online journal PLoS One (where peer review focuses on methodology, not conclusions). To other scientists, this seems suspiciously like grandstanding that undermines normal academic channels, although Berger insists he knew nothing of the film’s timing. “This looks like a classic example of what can go wrong when science and the review process are driven by popular media”, palaeoanthropologist Tim White told Nature.

*****

As well as sabre-rattling, the Bush administration has a softer strategy for dealing with nuclear ‘rogue states’. It has set up a club for suitably vetted nations called the Global Nuclear Energy Partnership (GNEP), in which trustworthy members with “secure, advanced nuclear capabilities” provide nuclear fuel to, and deal with the waste from, other nations who agree to peaceful uses of nuclear power only. In effect, it’s a kind of ‘nuclear aid’ scheme with strings attached: we give you the fuel, and we clean up for you, if you use it the way we tell you to. So members share information on reactor design but not on reprocessing of spent fuel, which can be used to extract military-grade fissile material. Everyone’s waste will be shipped to a select band of reprocessing states, including China, Russia, France, Japan, Australia and the US itself.

For all its obvious hierarchy, the GNEP is not without merit. The claim is that it will promote non-proliferation of nuclear arms, and it makes sense for the burden of generating energy without fossil fuels to be shared internationally. But one might worry about the prospect of large amounts of nuclear waste being shipped around the planet. Even more troublingly, many nuclear advocates think the current technology is not up to the task. John Deutch of the Massachusetts Institute of Technology, a specialist in nuclear energy and security, calls GNEP “hugely expensive, hugely misdirected and hugely out of sync with the needs of the industry and the nation.” The US Department of Energy’s plans to build a massive reprocessing facility, without initial pilot projects, have been called “a recipe for disaster” by the Federation of American Scientists, which adds that “GNEP has the potential to become the greatest technological debacle in US history.” It accuses the DoE of selling the idea as a green-sounding ‘recycling’ scheme. Nonetheless, in February the UK signed up as the GNEP’s 21st member, while contemplating the estimated £30 bn bill for cleaning up its own reprocessing facility at Sellafield.

*****

Having come to expect all news to be bad, British astronomers saw a ray of hope in late February when the decision of the Science and Technology Facilities Council (STFC) to withdraw from the Gemini project was reversed. Gemini’s two telescopes in Chile and Hawaii offer peerless views of the entire sky at visible and infrared wavelengths, and the previous decision of the STFC was seen as devastating. But now it’s business as usual, as the STFC has announced that the e-MERLIN project is threatened with closure even before it is up and running. This is an upgrade of MERLIN, a system that sends the signals of six radio telescopes around Britain by radio link-up to Jodrell Bank, near Manchester. In e-MERLIN the radio links are being replaced with optical cables, making the process faster and able to handle more data. It will boost the sensitivity of the observations by a factor of 30, revealing things that just can’t be seen at present – for example, how disks of dust around stars evolve into planetary systems.

e-MERLIN is now nearly completed, but the STFC is considering whether to pull its funding in 2009. That would surely axe jobs at Jodrell Bank and at Manchester’s astronomy department, second in size only to Cambridge’s, and would also harm Britain’s impressive international standing in radio astronomy. With more than ten other projects on the STFC’s endangered list, everyone is now asking where the next blow will fall. There are no obvious duds on the list, yet something has to give if the STFC is to make up its £80 million deficit. But it is the opaque and high-handed way the decisions are being taken that is creating such fury and low morale.

Monday, March 17, 2008

More burning water

[Here is my latest Crucible column for Chemistry World (April). I’m not sure if I’m one of the “unscientific critics who did not delve into the facts first” mentioned in the Roy et al. paper. If so, I’m not sure which of the ‘facts’ mentioned in my earlier article is wrong. Nonetheless, this is an intriguing result; I leave you to judge the implications.]

Take a test tube of sea water and hit it with radio waves. Then light a match – and watch it burn. Flickering over the mouth of the tube is a yellow-white flame, presumably due to the combustion of hydrogen.

When John Kanzius, an engineer in Erie, Pennsylvania, did this last year, the local TV networks were all over him. ‘He may have found a way to solve the world’s energy problems,’ they said. The clips duly found their way onto YouTube, and soon the whole world knew about this apparent new source of ‘clean fuel’.

I wrote then in Nature that Kanzius’s claims ‘must stand or fall on the basis of careful experiment’. Now, it seems, those experiments have begun. Rustum Roy, a materials scientist at Pennsylvania State University with a long and distinguished career in the microwave processing of materials, has collaborated with Kanzius to investigate the effect. The pair, along with Roy’s colleague Manju Rao, have just published a paper describing their findings in Materials Research Innovations[1], a journal that advertises itself as ‘especially suited for the publication of results which are so new, so unexpected, that they are likely to be rejected by tradition-bound journals’.

Materials Research Innovations, of which Roy is editor-in-chief, practises what it calls ‘super peer review’, which ‘is based on reviewing the authors, not the particular piece of work… the author (at least one) shall have published in the open, often peer-reviewed literature, a large body of work… The only other criterion is that the work be “new”, “a step-function advance”, etc.’

I’m not complaining if Roy’s paper has had an easy ride, however. On the contrary, given the wide interest that Kanzius’s work elicited, it’s very handy to see the results of a methodical study without the long delays that such efforts are often likely to incur from other, more cautious journals under the standard peer-review model. Of course a review system like this is open to abuse (aren’t they all?), but the new paper suggests there is a useful function for MRI’s approach.

Mystery gas
The experimental details in the paper are simple and to the point. Put an aqueous solution of as little as 1 percent sodium chloride in a Pyrex test tube; expose it to a 300 Watt radio frequency field at 13.56 MHz; and ignite the gas that comes from the tube. Note that the inflammable gas was not collected and analysed, but simply burnt.

The effect may sound surprising, but it is not unprecedented. In 1982, a team of chemists at Western Illinois University reported the room-temperature decomposition of water vapour into hydrogen peroxide and hydrogen using radio frequency waves with around 60 percent yield [2]. They too used precisely the same frequency of 13.56 MHz – no coincidence really, since this is a common frequency for radio frequency generators. And in 1993 a Russian team reported the apparent dissociation of water into hydrogen and hydroxyl radicals using microwaves [3]. Neither paper is cited by Roy et al.

Free lunch
If water can indeed be split this way, it is intrinsically interesting. That it seems to require the presence of salt is puzzling, and offers a foothold for further exploration of what’s happening.

But of course the story neither begins nor ends there. The TV reports make it plain what was in the air: energy for free. None of them thought to ask what the energy balance actually was, and Kanzius apparently did not offer it. Roy et al. now stress that Kanzius never claimed he could get out more energy than was put in; but given the direction the reports were taking, it seems not unreasonable to have expected an explicit denial of that.

Still, we have such a denial now (in effect), so that should put an end to the breathless talk of solving the energy crisis.

The real question now is whether this process is any more energy-efficient than standard electrolysis (which has the added advantage of automatically separating the two product gases). If not, it remains unclear how useful the radio frequency process will be, no matter how intriguing. Sadly, the present paper is silent on that matter too.

There seems scant reason, then, for all the media excitement. But this episode is a reminder of the power of visual images – here, a flame dancing over an apparently untouched tube of water, a seductive sight to a culture anxious about its energy resources. It’s a reminder too of the force of water’s mythology, for this is a substance that has throughout history been lauded as a saviour and source of miracles.

References

1. R. Roy et al., Mat. Res. Innov. 2008, 12, 3.
2. S. Roychowdhury et al., Plasma Chem. Plasma Process. 1982, 2, 157.
3. V. L. Vaks et al., Radiophys. Quantum Electr. 1994, 37, 85.

Wednesday, March 05, 2008

Enough theory

One of the side-effects of James Wood’s widely reviewed book How Fiction Works (Jonathan Cape) is that it has renewed talk in the literary pages of theory. Er, which theory is that, the ingénue asks? Oh, do keep up, the postmodernist replies. You know, theory.

Why is this ridiculous affectation so universally indulged? Why do we not simply laugh when Terry Eagleton writes a book called After Theory (and he is not the first)? Now yes, it is true that we are living in an age which postdates quantum theory, and Darwinian theory, and chaos theory, and, hell, Derjaguin-Landau-Verwey-Overbeek theory. But these people are not talking about theories as such. To them, there is only one theory, indeed only ‘theory’.

All right, we are talking here about literary theory, or if you like, cultural theory. This is not, as you might imagine, a theory about how literature works, or how culture works. It is a particular approach to thinking about literature, or culture. It is a point of view. It is in some respects quite an interesting point of view. In other respects, it is not terribly interested in the business of writing, which is what literature has (I hope you’ll agree) tended to be about. In any event, it became in the 1980s such a hegemonic point of view that it dropped all adjectives and just became ‘theory’, and even in general publications like this one, literary critics no longer felt obliged even to tell us what it says. Sometimes one feels that is just as well. But when critics now talk of theory, they generally tend to mean something clustered around post-modernism and post-structuralism. You can expect a Marxist tint. You can expect mention of hermeneutics. You had better expect to be confused. Most of all, you can expect solipsism of extravagant proportions.

Eagleton’s review of Wood in the latest Prospect is a good example. It makes a few telling points, but on the whole speaks condescendingly of Wood’s ‘A-levelish approach’, pretending to be a little sad that Wood’s determination to read the text carefully is ‘passé’. Eagleton doesn’t quite tell us what is wrong with Wood’s book, but assumes we will know exactly what he means, because are we too not adepts of ‘theory’? It bemoans the absence of any reference to Finnegans Wake, which (this is no value judgement) is about as relevant to the question of ‘how fiction works’ as is Catherine Cookson. I am no literary critic, and I’ve no idea if Wood’s book is any good, but I know a rubbish review when I see one.

In any event, all this is very much in line with ‘theory’s’ goals. It takes a word, like ‘theory’, and scoffs at our pretensions to know what it means. It appropriates language. This doesn’t seem a terribly helpful thing in a group of people who are meant to be experts on words. It is a little like declaring that henceforth, ‘breakfast’ will no longer mean the generic first meal of the day, but the croissant and coffee consumed by Derrida in his favourite Left Bank café.

Monday, March 03, 2008


Can a ‘green city’ in the Middle East live up to its claims?
[Here’s my latest piece for Nature’s Muse column.]

The United Arab Emirates has little cause to boast of green credentials, but that shouldn’t make us cynical about its new eco-city.

When Israel’s first prime minister David Ben-Gurion proclaimed his ambition to “make the desert bloom”, he unwittingly foreshadowed one of the enduring sources of controversy and tension in this beleaguered region of the Middle East. His comment has been interpreted by some as a signal of the centrality of water to political power in a parched land – and without doubt, Israel’s armed conflicts with its neighbours have been fought in part over control of water resources.

But Ben-Gurion’s remark also prompts the question of what it really means to make a desert bloom. To critics, one of those meanings involves an inappropriate transposition of a temperate lifestyle to a water-short land. Wasn’t the ‘desert’, which for centuries supported grain, fruit and olive groves, already ‘blooming’ in the most suitable way? Does ‘blooming’ entail golf courses and verdant public parks sucking up precious water?

In other words, there’s something of a collision of imagery in talk of ‘going green’ in an arid climate, where literal greenness imposes a huge burden on resources. That’s now highlighted as plans to create an ambitious ‘green city’ near Abu Dhabi in the United Arab Emirates (UAE) get underway.

Masdar City is slated to cost $22 bn, and the government of the UAE hopes that by 2018 it will be home to around 15,000 people, and a workplace for 50,000. Yet it will have no cars, will run on solar energy, and will produce no carbon emissions or other waste.

Concerns have been raised about whether this will just be an oasis for the rich, with all the incongruous trappings of luxury evident elsewhere in the UAE, where the wealthy can play golf on lush greens and even ski on immense indoor slopes covered with artificial snow.

Others have dismissed Masdar City as a figleaf to hide the energy profligacy of the UAE, where the carbon footprint per capita is the highest in the world, over five times the global average, and greenhouse gas emissions per capita are exceeded only by Qatar and Kuwait. Cynics might ask whether a little patch of clean energy will do much to alter that.

These are fair questions, but it would be a shame if Masdar City was discredited on this basis alone. Like it or not, we need to take greenness wherever we can find it. We do not need to be naïve about the motives for it, but neither does it help to be too snooty. There is some pragmatic truth in the satirical poem ‘The Grumbling Hive’ published in 1705 by the Dutch-born physician Bernard Mandeville, who argued that private vices can have public benefits: that good may sometimes come from dubious intentions.

One might make the same accusations of a cosmetic function for China’s plans to build a zero-emission city, Dongtan, near Shanghai (although China is more worried about environmental issues than is sometimes acknowledged, recognizing them as a potential constraint on economic growth). One might also point out that the US government’s new-found enthusiasm for clean energy is motivated more by fears for its energy security than by an acceptance of the reality of global warming. But if these things lead to useful innovations that can be applied elsewhere, we would be foolish to turn up our noses at them.

It’s not just energy that is at issue here; water is an equally critical aspect of environmental sensitivity and sustainability in the baking Middle Eastern climate. Here there can be little question that necessity has been the mother of the invention that has made Middle Eastern countries world leaders in water technology. Israel has been criticized in the past for its irresponsible (not to mention inequitable) use of the region’s aquifers, and the ecosystem of the Sea of Galilee has certainly suffered badly from water practices. But Israel has in other ways become a pioneer in wise water-use schemes, particularly desalination and sewage farming. The latter reduces the strain on water systems relative to the practice in some less water-stressed countries of irrigating crops with water fit to drink.

It would be good to think that there has been some recognition here that even in purely economic terms it is better to find technological solutions to water scarcity than to fight wars over it. The cost of a single F-16 jet fighter is comparable to that of the massive Ashkelon desalination plant in Israel, which produces over 300,000 cubic metres of water a day.

Desalination is a major source of fresh water in the UAE too. The Jebel Ali Desalination Plant, 35 km southwest of Dubai, generates an awesome 300 million cubic metres a year. For Masdar City, on the outskirts of Abu Dhabi City surrounded by sea, desalination is the obvious solution to the water demands of a small population cluster, and the current plans state with almost blithe confidence that this is where the water will come from. That doesn’t seem unfeasible, however. And there is now a wealth of water-resource know-how to draw on from experience elsewhere in the region, such as intelligent use of grey-water recycling.

One of the most attractive aspects of the planned design, however – which will engage the services of British architect Norman Foster, renowned for such feats as the energy-efficient ‘Gherkin’ tower in London – is that it plans to draw on old architectural wisdom as well as new. Without cars (transport will be provided by magnetic light rail), the streets will be narrow like those of older Middle Eastern towns, offering shade for pedestrians. It has long been recognized that some traditional forms of Middle Eastern architecture offer comforts in an energy-efficient manner, for example providing ‘natural’ air conditioning driven simply by convective circulation. It would be good to see such knowledge revived, and indeed Foster has talked of “working with nature, working with the elements and learning from traditional models.”

It seems unlikely that anyone is going to be blindly seduced by the promises of Masdar City – part of the support for the project offered by the World Wildlife Fund seems to involve monitoring progress to ensure that the good intentions are met. Yet we can hope that the lessons it will surely teach can be applied elsewhere.

Saturday, March 01, 2008



Heart of Steel

Birth of an Idea

Chemical art with heart

[Here’s my latest Crucible column for Chemistry World, which appears in the March issue.]

Several years ago I attempted to launch a project that would use the methods of chemical synthesis as a means of sculpture, creating a genuine plastic art at the molecular scale. I shelved it when I saw that it was unrealistic to expect chemists to think like artists: they generally inherit an aesthetic that owes more to Platonic conceptions of beauty than to anything the art world has tended (now or ever) to employ.

But the experience brought me in contact with several people who seek to integrate the molecular sciences with the visual arts. One of them is Julian Voss-Andreae, a former physicist who now works as a sculptor in Portland, Oregon. Despite his background, much of Voss-Andreae’s work is inspired by molecular structures; his latest piece is a metre-and-a-half tall sculpture of an ion channel, commissioned by Roderick MacKinnon of Rockefeller University in New York, who shared a Nobel prize for elucidating its structure. It has the elegance and textures of twentieth-century modernism: with its bare, dark metal and bright wire, supported on a base of warm, finely joined wood, it wouldn’t have looked out of place at the recent Louise Bourgeois exhibition at London’s Tate Modern gallery. The title, Birth of an Idea, alludes to the role of ion channels in creating the electrical impulses of our nerve cells.

I find it hard to imagine that sculptures like these could be made by anyone who did not have a deep understanding of what molecules are and what they do. Iconic images of DNA’s double helix are commonplace now (the Cold Spring Harbor Laboratory on Long Island has two), but do little more than express delight at the graceful spiral-staircase shape (while implicitly failing to acknowledge that this is crucially dependent on the surrounding solvent). Voss-Andreae’s molecular sculptures have more to say than that. His Heart of Steel (2005), placed at an intersection in the city of Lake Oswego in Oregon, is a steel model of the structure of haemoglobin, with a red glass sphere at its centre. The twisting polypeptide chains echo those depicted in physical models made in the early days of protein crystallography, photos of which would appear in research papers in lieu of the fancy computer graphics we see today. But Heart of Steel engages with the chemistry of the molecule too, because the steel structure, left exposed to the elements, has gradually (and intentionally) corroded until its coils have become rust-red, a recapitulation of the iron-based redness of our own blood cells. Blood and iron indeed, as Bismarck said of the German Empire.

It’s no surprise that Voss-Andreae is sensitive to such nuances. As a graduate student at the University of Vienna, he was one of the team led by Anton Zeilinger that conducted a ground-breaking experiment in quantum mechanics in 1999. The researchers showed that even molecules as big as C60 can reveal their fundamentally quantum nature under the right conditions: a beam of them passed through a diffraction grating will exhibit the purely wavelike property of interference. A subsequent experiment on C70 showed how interactions with the environment (a background gas of different densities) will gradually wash away the quantumness thanks to the process of decoherence, which is now recognized as the way the classical world emerges from the quantum.
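To get a feel for why this is so remarkable, here is a back-of-the-envelope estimate of the de Broglie wavelength of a buckyball; the beam velocity is an assumed ballpark figure, not a number taken from the Vienna experiment:

```python
# de Broglie wavelength of C60: lambda = h / (m * v).
# The velocity is an assumed order-of-magnitude figure for a thermal
# molecular beam, not a value from the 1999 paper.
h = 6.626e-34            # Planck's constant, J s
amu = 1.66054e-27        # atomic mass unit, kg
m_c60 = 720 * amu        # 60 carbon atoms of mass ~12 u each
v = 200.0                # assumed beam velocity, m/s

wavelength = h / (m_c60 * v)
print(f"de Broglie wavelength ~ {wavelength * 1e12:.1f} picometres")
# ~2.8 pm -- hundreds of times smaller than the ~0.7 nm molecule itself
```

A wavelength that much smaller than the object carrying it is one reason such interference is so hard to see.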

Such experiences evidently inform Voss-Andreae’s Quantum Man (2006), a figure 2.5 m tall made from thin, parallel steel sheets that looks ‘classically’ solid when seen from one angle but almost disappears into a vague haze when seen from another. C60 itself has featured in more than one of Voss-Andreae’s sculptures: one, a football-like cage 9 m high, sits among trees in Tryon Creek State Park in Oregon.

Among Voss-Andreae’s latest projects is a sculpture based on foam. “My obsession with buckyballs seems to be due to their bubble-like geometry, which got me started on this new project”, he says. His aim is to produce a foam network that is ‘adapted’ to a particular boundary shape, such as the human body. This involves more than simply ‘carving’ a block of foam to the desired contours (as was done, for example, in making the spectacular swimming stadium for the Beijing Olympics), because, he says, “the cellular structure ‘talks’ with the boundary.” Voss-Andreae is attacking the problem both mathematically and experimentally, casting a resin in the gaps between an artificial foam of water-filled balloons. Eventually he hopes to cast the resulting structure in bronze.

I admit that I am not usually a fan of attempts to turn molecular shapes into ‘art’; all too often this draws on the chemist’s rather particular concept of beauty, and to put it bluntly, a pretty picture does not equate with art. But Voss-Andreae’s work is different, because it looks to convey some of the underlying scientific principles of the subject matter even to viewers who know nothing about them. That’s what good ‘sciart’ does: rather than seeking to ‘educate’, it presents some of the textures of science in a way that nudges the mind and enlivens the senses.

Friday, February 22, 2008

Engineering for the better?
[This is the pre-edited version of my latest Muse column for Nature News.]

Many of the grand technological challenges of the century ahead are inseparable from their sociopolitical context.

At the meeting of the American Association for the Advancement of Science in Boston last week, a team of people selected by the US National Academy of Engineering identified 14 ‘grand challenges for engineering’ that would help make the world “a more sustainable, safe, healthy, and joyous – in other words, better – place.”

It’s heartening to see engineers, long dismissed as the lumpen, dirty-handed serfs labouring at the foot of science’s lofty citadel, asserting in this manner their subject’s centrality to our future course. Without rehearsing again the debates about the murky boundaries between pure and applied science, or science and technology, it’s rather easy to see that technologists have altered human culture in ways that scientists never have. Plato, Galileo, Darwin and Einstein have reshaped our minds, but there is hardly an action we can take in the industrialized world that does not feel the influence of engineering.

This, indeed, is why one can argue that a moral, ethical and generally humanistic sensitivity is needed in engineering even more than it is in the abstract natural sciences. It is by the same token the reason why engineering is a political as well as a technological activity: whether they are making dams or databases, engineers are both moving and being moved by the sociopolitical landscape.

This is abundantly clear in the Grand Challenges project. The vision it outlines is, by and large, a valuable and praiseworthy one. It recognizes explicitly that “the most difficult challenge of all will be to disperse the fruits of engineering widely around the globe, to rich and poor alike.” Its objectives, the statement says, are “goals for all the world’s people.”

Yet some of the problems identified arguably say more about the current state of mind of Western culture than about what engineering can do or what goals are most urgent. Two of the challenges are concerned with security – or what the committee calls vulnerability – and two focus on the personalization of services – health and education – that have traditionally been seen as generalized ‘one size fits all’ affairs. There are good arguments why it is worthwhile recognizing individual differences – not all medicines have the same effect on everyone (in either good or bad ways), and not everyone learns in the same way. But there is surely a broader political dimension to the notion that we seem now to demand greater tailoring of public services to our personal needs, and greater protection from ‘outsiders’.

What is particularly striking is how ‘vulnerability’ and security are here no longer discussed in terms of warfare (one of the principal engines of technological innovation since ancient times) but of attacks on society by nefarious, faceless aggressors such as nuclear and cyber terrorists. These are real threats, but presenting them this way, as engineering challenges, makes for a very odd perspective.

For example, let us say (for the sake of argument) that there exists a country where guns can be readily bought at the corner store. How can we make the law-abiding citizen safe from firearms falling into the hands of homicidal madmen? The answers proposed here are, in effect, to develop technologies for making the stores more secure, for keeping track of where the guns are, for cleaning up after a massacre, and for finding out who did it. To which one might be tempted to add another humble suggestion: what if the shops did not sell guns?

To put it bluntly, discussing nuclear security without any mention of nuclear non-proliferation agreements and efforts towards disarmament is nonsensical. In one sense, perhaps it is understandably difficult for a committee on engineering to suggest that part of the solution to a problem might lie with not making things. Cynics might also suspect a degree of political expediency at work, but I think it is more reasonable to say that questions of this nature don’t really fall into the hands of engineers at all but are contingent on the political climate. To put it another way, I suspect the most stimulating lists of ways to make the world better won’t just include things that everyone can reasonably deem desirable, but things that some will not.

The limited boundaries of the debate are the central shortcoming of an exercise like this. It was made clear from the outset that all these topics are being considered purely from an engineering point of view, but one can hardly read the list without feeling that it is really attempting to enumerate all the big challenges facing humankind that have some degree of technical content. The solutions, and perhaps even the choices, are then bound to disappoint, because just about any challenge of this sort depends on technology neither exclusively nor even primarily.

Take health, for example. Most of the diseases in the world (and AIDS is now only a partial exception) are ones we already know how to prevent, cure or keep at bay. Technology can play a part in making such treatments cheaper or more widely available (or, in the case of waterborne diseases, say, not necessary in the first place) – but in the immediate future, health informatics and personalized medicine are hardly the key requirements. Economics, development and diet are likely to have a much bigger effect on global health than cutting-edge medical science.

None of this is to deny the value of the Grand Challenges project. But it highlights the fact that one of the most important goals is to integrate science and technology with other social and cultural forces. This is a point made by philosopher of science Nicholas Maxwell in his 1984 book From Knowledge to Wisdom (a new edition of which has just been published by Pentire Press).

To blame science for the ills of the world is to miss the point, says Maxwell. “What we urgently need to do – given the unprecedented powers bequeathed to us by science – is to learn how to tackle our immense, intractable problems of living in rather more intelligent, humane, cooperatively rational ways than we do at present… We need a new kind of academic inquiry that gives intellectual priority to our problems of living – to clarifying what our problems are, and to proposing and critically assessing the possible solutions.”

He proposes that, to this end, the natural sciences should include three domains of discussion: not just evidence and theory, but aims, “this last category covering discussion of metaphysics, values and politics.” There is certainly much to challenge in Maxwell’s position. Trofim Lysenko’s fatefully distorted genetics in the Stalinist Soviet Union, for example, had ‘values and politics’; and the hazards of excessively goal-driven research are well-known in this age of political and economic short-termism.

Maxwell tackles such criticisms in his book, but his wider point – that science and technology should not just be cognisant of social and ethical factors but better integrated with them – is important. The Grand Challenges committee is full of wise and humane technologists. Next time, it would be interesting to include some who are the former but not the latter.

Sunday, February 17, 2008

Ye Gods

Yes, as the previous entry shows, I am reading Jeanette Winterson’s The Stone Gods. Among the most trivial of the issues it makes me ponder is what kind of fool gave the name to silicone polymers. You don’t exactly have to be a linguist to see where that was going to lead. The excuse that there was some chemical rationale for it (the ‘-one’ suffix was chosen by analogy to ketones, with which silicones were mistakenly thought to be homologous) is no excuse at all. After all, chemistry is replete with antiquated names in which a terminal ‘e’ became something of a matter of taste, including alizarine and indeed proteine. So we are now saddled with endless confusions of silicone with silicon – with the particularly unfortunate (or is it?) consequence in Winterson’s case that her robot Spike is implied to have a brain made of the same stuff as the brainless Pink’s breasts.

But for some reason I find myself forgiving just about anything in Jeanette Winterson. Partly this is because her passion for words is so ingenuous and valuable, and partly it may be because my instinct for false modesty is so grotesquely over-developed that I can only gaze in awed admiration at someone who will unhesitatingly nominate their own latest book as the year’s best. But I must also guiltily confess that it is because we are so clearly both on The Same Side on just about every issue (how could it be otherwise for someone who cites Tove Jansson among her influences?). It is deplorable, I know, that I would be all smug and gloating if the science errors in The Stone Gods had come from someone like Michael Crichton. But of course Crichton preens about the ‘accuracy’ of his research (sufficiently to fool admittedly gullible US politicians), whereas it is really missing the point of Winterson to get all het up about her use of light-years as a unit of time.

Ah, but all the same – where were the editors? Is this the fate of famous authors – that no one deems it necessary to fact-check you any more? True, it is only the sci-fi nerd who will worry that Winterson’s spacecraft can zip about at ‘light speed’ (which we can understand, with poetic licence, as near-light-speed) without the slightest sign of any time dilation. And she never really pretends to be imagining a real future (she says she hates science fiction, although I assume with a narrow definition), so there’s no point in scoffing at the notion that blogs and iPods have somehow survived into the age of interstellar travel. But listen, you don’t need to be a scientist to sense something wrong with this:

“In space it is difficult to tell what is the right way up; space is curved, stars and planets are globes. There is no right way up. The Ship itself is tilting at a forty-five degree angle, but it is the instruments that tell me so, not my body looking out of the window.”

Um, and the instruments are measuring with respect to what? This is actually a rather lovely demonstration of the trap of our earthbound intuitions – which brings me back to the piece below. Oh ignore me, Jeanette (as if you needed telling).

Friday, February 15, 2008

There’s no place like home
… but that won’t stop us looking for it in our search for extraterrestrials.

[This is the pre-edited version of my latest Muse column for Nature news. Am I foolish to imagine there might be people out there who appreciate the differences? Don't answer that.]


In searching the skies for other worlds, are we perhaps just like the English tourists waddling down the Costa del Sol, our eyes lighting up when we see “The Red Lion” pub with the Union Jack in the windows and Watneys Red Barrel on tap? Gazing out into the unutterably vast, unnervingly strange depths of the cosmos, are we not really just hankering for somewhere that looks like home?

It isn’t just a longing for the familiar that has stirred up excitement about the discovery of what looks like a scaled-down version of our own solar system surrounding a distant star [1]. But neither, I think, is that impulse absent.

There’s sound reasoning in looking for ‘Earth-like’ extrasolar planets, because the one environment we know for certain can support life is a planet like ours. And it is entirely understandable that extraterrestrial life should be the pot of gold at the end of this particular rainbow.

Yet I doubt that the cold logic of this argument is all there is behind our fascination with Earth-likeness. Science-fiction writers and movie makers have sometimes had fun inventing worlds very different to our own, peopled (can we say that?) with denizens of corresponding weirdness. But that, on the whole, is the exception. That the Klingons and Romulans looked strangely like Californians with bad hangovers was not simply a matter of budget constraints. Edgar Rice Burroughs’ Mars was apparently situated somewhere in the Sahara, populated by extras from the Arabian Nights. In Jeanette Winterson’s new novel The Stone Gods, a moribund, degenerate Earth (here called Orbus) rejoices in the discovery of a pristine Blue Planet in the equivalent of the Cretaceous period, because it offers somewhere to escape to (and might that, in these times haunted by environmental change, nuclear proliferation and fears of planet-searing impacts, already be a part of our own reverie?). Most fictional aliens have been very obviously distorted or enhanced versions of ourselves, both physically and mentally, because in the end our stories, right back to those of Valhalla, Olympus and the seven Hindu heavens, have been more about exploring the human condition than genuinely imagining something outside it.

This solipsism is understandable but deep-rooted, and we shouldn’t imagine that astrobiology and extrasolar planetary prospecting are free from it. However, the claim by the discoverers of the new ‘mini-solar system’ that “solar system analogs may be common” around other stars certainly amounts to more than saying “hey, you can get a decent cup of coffee in this god-forsaken place”. It shows that our theories of the formation and evolution of planetary systems are not parochial, and offers some support for the suspicion that previous methods of planet detection bias our findings towards oddballs such as ‘hot Jupiters’. The fact that the relatively new technique used in this case – gravitational microlensing – has so quickly turned up a ‘solar system analog’ is an encouraging sign that indeed our own neighbourhood is not an anomaly.

The desire – it is more than an unspoken expectation – to find a place that looks like home is nevertheless a persistent bugbear of astrobiology. A conference organized in 2003 to address the question “Can life exist without water?” had as part of its agenda the issue of whether non-aqueous biochemistries could be imagined [2]. But in the event, the participants did not feel comfortable straying beyond our atmosphere, and so the debate became one of whether proteins can function in the dry or in other solvents, rather than whether other solvents can support the evolution of a non-protein equivalent of enzymes. Attempts to re-imagine biology in, say, liquid methane or ammonia have been rare [3]. An even more fundamental question, which I have never seen addressed anywhere, is whether evolution has to be Darwinian. It would be a daunting challenge to think of any better way to achieve ‘design’ and function blindly, but there is no proof that Darwin has a monopoly on such matters. Do we even need evolution? Are we absolutely sure that some kind of spontaneous self-organization can’t create life-like complexity, without the need for replication, say?

Maybe these questions are too big to be truly scientific in this form. Better, then, to break bits off them. Marcelo Gleiser and his coworkers at Dartmouth College in New Hampshire have done that in a recent preprint [4], asking whether a ‘replica Earth’ would share our left-handed proteins and right-handed nucleic acids. The handedness here refers to the mirror-image shapes of the biomolecular building blocks. The two mirror-image forms are called enantiomers, and are distinguishable by the fact that they rotate the plane of polarized light to the left or the right.

In principle, all our biochemistry could be reversed by mirror reflection of these shapes, and we’d never notice. So the question is why one set of enantiomers was preferred over the other. One possibility is that it was purely random – once the choice is made, it is fixed, because building blocks of the ‘wrong’ chirality don’t ‘fit’ when constructing organisms. Other explanations, however, suggest that life’s hand was biased at the outset, perhaps by the intrinsic left-handedness in the laws of fundamental physics, or because there was an excess of left-handed amino acids that fell to Earth on meteorites and seeded the first life (that is simply deferring the question, however).

Gleiser and his coworkers argue that these ideas may all be irrelevant. They say that sufficiently strong and long-lasting environmental disturbances can reset the handedness, if it is propagated in the prebiotic environment by an autocatalytic process in which an enantiomer acts as a catalyst to create more of itself while blocking the chemical reaction that leads to the other enantiomer. Such a self-amplifying process was proposed in 1953 by physicist Charles Frank, and was demonstrated experimentally in the 1990s by Japanese chemist Kenso Soai.
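As a rough illustration of how this kind of autocatalysis amplifies a tiny imbalance, here is a minimal sketch of Frank’s original scheme – not the spatial model in Gleiser’s preprint – with all rate constants, concentrations and the initial excess invented purely for illustration:

```python
# Frank's 1953 scheme, in caricature: each enantiomer (L and D) copies
# itself from an achiral feedstock A, and the two destroy each other on
# meeting. All numbers here are illustrative, not real chemistry.
k = 1.0                   # autocatalysis rate constant
q = 1.0                   # mutual-annihilation rate constant
A = 1.0                   # achiral feedstock, held constant (a fed, open system)

L, D = 0.0101, 0.0100     # start with a 1 per cent excess of L over D
dt, steps = 0.001, 15000  # crude Euler integration

print(f"initial enantiomeric excess: {(L - D) / (L + D):.3f}")
for _ in range(steps):
    dL = (k * A * L - q * L * D) * dt
    dD = (k * A * D - q * L * D) * dt
    L, D = L + dL, D + dD
print(f"final enantiomeric excess:   {(L - D) / (L + D):.3f}")
```

Run it and the excess climbs from half a per cent to essentially one: the slight early lead becomes a monopoly, which is the amplification Gleiser’s team relies on – and which, in their spatial version, an environmental shake-up can undo.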

The US researchers show that an initially random mixture of enantiomers in such a system quickly develops patchiness, with big blobs of each enantiomer accumulating like oil separating from vinegar in an unstirred salad dressing. Chance initial variations will lead to one or other enantiomer eventually dominating. But an environmental disruption, like the planet-sterilizing giant impacts suffered by the early Earth, can shake the salad dressing, breaking up the blobs. When the process begins again, the new dominant enantiomer that emerges may be different from the one before, even if there was a small excess of the other at the outset. As a result, they say, the origin of life’s handedness “is enmeshed with Earth’s environmental history” – and is therefore purely contingent.

Other researchers I have spoken to question whether the scheme Gleiser’s team has considered – autocatalytic spreading in an unstirred solvent – has much relevance to ‘warm little ponds’ on the turbulent young Earth, and whether the notion of resetting by shaking isn’t obvious in any case in a process like Frank’s in which chance variations get amplified. But of course, in an astrobiological context the more fundamental issue is whether there is the slightest reason to think that alien life will use amino acids and DNA, so that a comparison of handedness will be possible.

That doesn’t mean these questions aren’t worth pursuing (they remain relevant to life on Earth, at the very least). But it’s another illustration of our tendency to frame the questions parochially. In the quest for life elsewhere, whether searching for new planets or considering the molecular parameters of potential living systems, we are in some ways more akin to historians than to scientists: our data is a unique narrative, and our thinking is likely to stay trapped within it.


References


1. Gaudi, B. S. et al. Science 319, 927-930 (2008).
2. Phil. Trans. R. Soc. Lond. Ser. B special issue, 359 (no. 1448) (2004).
3. Benner, S. A. et al. Curr. Opin. Chem. Biol. 8, 672 (2004).
4. Gleiser, M. et al. Preprint at http://arxiv.org/abs/0802.1446 (2008).

Saturday, February 09, 2008

The hazards of saying what you mean

It’s true, the archbishop of Canterbury talking about sharia law doesn’t have much to do with science. Perhaps I’m partly just pissed off and depressed. But there is also a tenuous link insofar as this sorry affair raises the question of how much you must pander to public ignorance in talking about complex matters. Now, the archbishop does not have a way with words, it must be said. You’ve got to dig pretty deep to get at what he’s saying. One might argue that someone in his position should be a more adept communicator, although I’m not sure I or anyone else could name an archbishop who has ever wrapped his messages in gorgeous prose. But to what extent does a public figure have an obligation to explain that “when I say X, I don’t mean the common view of X based on prejudice and ignorance, but the actual meaning of X”?

You know which X I mean.

I simply don’t know whether Rowan Williams’s suggestion – conferring legal standing on communal practices of decision-making that currently have none – is a good idea, or a practical one. I can see a good deal of logic in the proposal that, if such practices are already widely used, they might be made more effective, better supported and better regulated by being given more formal recognition. But it’s not clear that offering a choice between alternative systems of legal proceeding is workable, even if this need not exactly amount to multiple systems of law coexisting. My own prejudice is to worry that some such systems might embody disparities that traditional Western societies would feel uncomfortable about, and that making their adoption ‘voluntary’ does not guarantee that everyone involved will be free to exercise that choice without coercion. But I call this a prejudice because I do not know the facts in any depth. It is certainly troubling that some Islamic leaders have suggested there is no real desire in their communities for the kind of structure Williams has proposed.

Yet when Ruth Gledhill in the Times shows us pictures and videos of Islamist extremists, we’re entitled to conclude that there is more to her stance than disagreements of this kind. Oh, don’t be mealy-mouthed, boy: she is simply whipping up anti-Muslim hysteria. The scenes she shows have nothing to do with what Rowan Williams spoke about – but hey, let’s not forget how nutty these people are.

Well, so far so predictable. Don’t even think of looking at the Sun here or the Daily Mail. I said don’t. What is most disheartening from the point of view of a communicator, however, is the craven, complicit response in some parts of the ‘liberal’ press. In the Guardian, Andrew Brown says “it is all very well for the archbishop to explain that he does not want the term ‘sharia’ to refer to criminal punishments, but for most people that’s what the word means: something atavistic, misogynistic, cruel and foreign.” Let me rephrase that: “it is all very well for the archbishop to explain precisely what he means, but most people would prefer to remain ignorant and bigoted.”

And again: “It’s no use being an elitist if you don’t understand the [media] constraints under which an elite must operate.” Or put another way: “It’s no use being a grown-up if you don’t understand that the media demands you be immature and populist.”

And again: “there are certain things which may very well be true, and urgent and important, but which no archbishop can possibly say.” Read that as: “there are certain things which may very well be true, and urgent and important, but which as a supposed moral figurehead in society you had better keep quiet about.”

And again: “Even within his church, there is an enormous reservoir of ill-will towards Islam today, as it was part of his job to know.” Or rather, “he should realise that it’s important not to say anything that smacks of tolerance for other faiths, because that will incite all the Christian bigots.” (And it has: what do you really think synod member Alison Ruoff means when she says of Williams that “he does not stand up for the church”?)

What a dismaying and cynical take on the possibility of subtle and nuanced debate in our culture, and on the possibility of saying what you mean rather than making sure you don’t say what foolish or manipulative people will want to believe or pretend you meant. Madeleine Bunting’s article in the Guardian is, on the other hand, a sane and thoughtful analysis. But the general take on the matter in liberal circles seems to be that the archbishop needs a spin doctor. That’s what these bloody people have just spent ten years complaining about in government.

Listen, I’m an atheist, it makes no difference to me if the Church of England (created to save us from dastardly foreign meddling, you understand – Ruth Gledhill says so) wants to kick out the most humane and intelligent archie they’ve had for yonks. But if that happens because they capitulate to mass hysteria and an insistence that everyone now plays by the media’s rules, it’ll be an even sadder affair than it is already.

Friday, February 08, 2008

Waste not, want not
[This is my latest Muse column for Nature News.]

We will now go to any extent to scavenge every last joule of energy from our environment.

As conventional energy reserves dwindle, and the environmental costs of using them take on an apocalyptic complexion, we seem to be developing the mentality of energy paupers, cherishing every penny we can scavenge and considering no source of income too lowly to be worth collecting.

And that’s surely a good thing – it’s a shame, in fact, that it hasn’t happened sooner. While we’ve gorged on the low-hanging fruit of energy production, relishing the bounty of coal and oil that nature brewed up in the Carboniferous, this “spend spend spend” mentality was never going to see us financially secure in our dotage. It’s a curious, almost perverse fact – one feels there should be a thermodynamic explanation, though I can’t quite see it – that the most concentrated energy sources are also the most polluting, in one way or another.

Solar, wind, wave, geothermal: all these ‘clean’ energy resources are vast when integrated over the planet, but frustratingly meagre on the scales human engineering can access. Nature provides a little focusing for hydroelectric power, collecting runoff into narrow, energetic channels – but only if, like the Swiss, you’re lucky enough to have vast mountains on your doorstep.

So we now find ourselves scrambling to claw up as much of this highly dispersed green energy as we can. One of the latest wheezes uses piezoelectric plastic sheets to generate electricity from the impact of raindrops – in effect, a kind of solar cell re-imagined for rotten weather. Other efforts seek to capture the energy of vibrating machinery and bridges. Every joule, it now seems, is sacred.

That applies not just for megawatt applications but for microgeneration too. The motivations for harnessing low levels of ‘ambient’ energy at the scale of individual people are not always the same as those that apply to powering cities, but they overlap – and they are both informed by the same ethic of sustainability and of making the most of what is out there.

That’s true of a new scheme to harvest energy from human motion [1]. Researchers in Canada and the US have made a device that can be mounted on the human knee joint to mop up energy released by the body each time you swing your leg during walking. More specifically, the device can be programmed to do this only during the ‘braking’ part of the cycle, where you’re using muscle energy to slow the lower leg down. Just as in the regenerative braking of hybrid vehicles, this minimizes the extra fuel expended in sustaining motion.
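The gating logic is simple to caricature: monitor the knee’s mechanical power and only couple in the generator when that power is negative – that is, when the muscles are braking the leg anyway. The sketch below uses invented torque and velocity profiles and an assumed conversion efficiency; none of these numbers come from the study itself.

```python
import numpy as np

# Toy model of 'harvest only while braking': compute knee joint power
# over one stride and collect energy only where that power is negative.
# The waveforms and efficiency are made-up placeholders, not gait data.
t = np.linspace(0.0, 1.0, 1000)                  # one stride, seconds
omega = 6.0 * np.sin(2 * np.pi * t)              # assumed angular velocity, rad/s
torque = 5.0 * np.sin(2 * np.pi * t + 2.0)       # assumed joint torque, N m

power = torque * omega                           # instantaneous joint power, W
braking = power < 0                              # phases of negative (braking) work
efficiency = 0.6                                 # assumed conversion efficiency

harvested = -np.sum(power[braking]) * (t[1] - t[0]) * efficiency
print(f"energy harvested per one-second stride: ~{harvested:.1f} J (~{harvested:.1f} W)")
```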

While advances in materials have helped to make such systems lightweight and resilient, this new example shows that conceptual advances have played a role too. We now recognize that the movements of humans and other large animals are partly ‘passive’: rather than every motion being driven by energy-consuming motors, as it typically is in robotics, energy can be stored in flexing tissues and then released in another part of the cycle. Or better still, gravity alone may move freely hinging joints, so that some parts of the cycle seem to derive energy ‘for free’ (more precisely, the overall energy cost of the cycle is lower than it would be if every movement were actively driven).

There’s more behind these efforts than simply a desire to throw away the batteries of your MP3 player as you hike along (though that’s an option). If you have a pacemaker or an implanted drug-delivery pump, you won’t relish the need for surgery every time the battery runs out. Drawing power from the body rather than from the slow discharge of an electrochemical dam seems an eminently sensible way to solve that.

The idea goes way back; cyclists will recognize the same principle at work in the dynamos that power lights from the spinning of the wheels. They’ll also recognize the problems: a bad dynamo leaves you feeling as though you’re constantly cycling uphill, squeaking as you go. What’s more, you stop at the traffic lights on a dark night, and your visibility plummets (although capacitive ‘stand-light’ facilities can now address this). And in the rain, when you most want to be seen, the damned thing starts slipping. The disparity between the evident common sense of bicycle dynamos and the rather low incidence of their use suggests that even this old and apparently straightforward energy-harvesting technology struggles to find the right balance between cost, convenience and reliability.

Cycle dynamos do, however, also illustrate one of the encouraging aspects of ambient energy scavenging: advances in electronic engineering have allowed the power consumption of many hand-held devices to drop dramatically, reducing the demands on the power source. LED bike lights need less power than old-fashioned incandescent bulbs, and a dynamo will keep them glowing brightly even if you cycle at walking pace.

Ultra-low power consumption is now crucial to some implantable medical technologies, and is arguably the key enabling factor in the development of wireless continuous-monitoring devices: ‘digital plasters’ that can be perpetually broadcasting your heartbeat and other physiological parameters to a remote alarm system while you go about your business at home [2].

In fact, a reduction in power requirements can open up entirely new potential avenues of energy scavenging. It would have been hard, in days of power-hungry electronics, to have found much use for the very low levels of electricity that can be drawn from seafloor sludge by ‘microbial batteries’, electrochemical devices that simply plug into the mud and suck up energy from the electrical gradients created by the metabolic activity of bacteria [3]. These systems can drive remote-monitoring systems in marine environments, and might even find domestic uses when engineered into waste-water systems [4].

And what could work for bacteria might work for your own cells too. Ultimately we get our metabolic energy from the chemical reaction of oxygen and glucose – basically, burning up sugar in a controlled way, mediated by enzymes. Some researchers hope to tap into that process by wiring up the relevant enzymes to electrodes and sucking off the electrons involved in the reaction, producing electrical power [5]. They’ve shown that the idea works in grapes; apes are another matter.

Such devices go beyond the harvesting of biomechanical energy. They promise to cut out the inefficiencies of muscle action, which tends to squander around three-quarters of the available metabolic energy, and simply tap straight into the powerhouses of the cell. It’s almost scary, this idea of plugging into your own body – the kind of image you might expect in a David Cronenberg movie.
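For a rough sense of what is at stake, here is the order-of-magnitude arithmetic, using the textbook free energy of glucose oxidation and reading the ‘three-quarters squandered’ remark above as a muscle efficiency of about 25 per cent:

```python
# Order-of-magnitude only: how much chemical energy a gram of glucose
# holds, and how much of it survives conversion into mechanical work.
delta_g = 2870e3          # J per mole of glucose oxidised (approximate textbook value)
molar_mass = 180.2        # g/mol for glucose
muscle_efficiency = 0.25  # the 'three-quarters squandered' figure from the text

per_gram = delta_g / molar_mass
print(f"chemical energy in glucose: ~{per_gram / 1000:.0f} kJ/g (~{per_gram / 3600:.1f} Wh/g)")
print(f"delivered as mechanical work by muscle: ~{per_gram * muscle_efficiency / 1000:.0f} kJ/g")
```

A biofuel cell tapping the reaction directly would not capture all of that either, but the gap is the prize.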

These examples show that harnessing ‘people power’ and global energy generation do share some common ground. Dispersed energy sources like tidal and geothermal offer the same kinds of low-grade energy – in motion and heat gradients, say – as we find in biological systems. Exploiting this on a large scale is much more constrained by economics; but there’s every reason to believe that the two fields can learn from each other.

And who knows – once you’ve felt how much energy is needed to keep your television on standby, you might be more inclined to switch it off.

References
1. Donelan, J. M. et al. Science 319, 807-810 (2008).
2. Toumazou, C. & Cass, T. Phil. Trans. R. Soc. Lond. B Biol. Sci. 362, 1321–1328 (2007).
3. Mano, N. & Heller, A. J. Am. Chem. Soc. 125, 6588-6594 (2003).
4. Logan, B. E. & Regan, J. M. Environ. Sci. Technol. 40, 5172-5180 (2006).
5. Logan, B. E. Wat. Sci. Technol. 52, 31-37 (2005).