Saturday, January 26, 2008

No option

There is an excellent article in today’s Guardian by the author John Lanchester, who turns out to have a surprisingly (but after all, why not?) thorough understanding of the derivatives market. Lanchester’s piece is motivated by the extraordinary losses chalked up by rogue trader Jérôme Kerviel of the French bank Société Générale. Kerviel’s exploits seem to be provoking the predictable shock-horror about the kind of person entrusted with the world’s finances (as though the last 20 years had never happened). I suspect it was Lanchester’s intention to leave it unstated, but one can’t read his piece without a mounting sense that the derivatives market is one of humankind’s more deranged inventions. To bemoan that is not in itself terribly productive, since it is not clear how one legislates against the situation where one person bets an insane amount of (someone else's) money on an event of which he (not she, on the whole) has not the slightest real idea of the outcome, and another person says ‘you’re on!’. All the same, it is hard to quibble with Lanchester’s conclusion that “If our laws are not extended to control the new kinds of super-powerful, super-complex, and potentially super-risky investment vehicles, they will one day cause a financial disaster of global-systemic proportions.”

All this makes me appreciate that, while I have been a small voice among many to have criticized the conventional models of economics, in fact economists are only the poor chaps trying to make sense of the lunacy that is the economy. Which brings me to Fischer Black and Myron Scholes, who, Lanchester explains, published a paper in 1973 that gave a formula for how to price derivatives (specifically, options). What Lanchester doesn’t mention is that this Nobel-winning work made the assumption that the volatility of the market – the fluctuations in prices – follows the form dictated by a normal or Gaussian distribution. The problem is that it doesn’t. This is what I said about that in my book Critical Mass:

“Options are supposed to be relatively tame derivatives—thanks to the Black-Scholes model, which has been described as ‘the most successful theory not only in finance but in all of economics’. Black and Scholes considered the question of strategy: what is the best price for the buyer, and how can both the buyer and the writer minimize the risks? It was assumed that the buyer would be given a ‘risk discount’ that reflects the uncertainty in the stock price covered by the option he or she takes out. Scholes and Black proposed that these premiums are already inherent in the stock price, since riskier stock sells for relatively less than its expected future value than does safer stock.
Based on this idea, the two went on to devise a formula for calculating the ‘fair price’ of an option. The theory was a gift to the trader, who had only to plug in appropriate numbers and get out the figure he or she should pay.
But there was just one element of the model that could not be readily specified: the market volatility, or how the market fluctuates. To calculate this, Black and Scholes assumed that the fluctuations were gaussian.
Not only do we know that this is not true, but it means that the Black-Scholes formula can produce nonsensical results: it suggests that option-writing can be conducted in a risk-free manner. This is a potentially disastrous message, imbuing a false sense of confidence that can lead to huge losses. The shortcoming arises from the erroneous assumption about market variability, showing that it matters very much in practical terms exactly how the fluctuations should be described.
The drawbacks of the Scholes-Black theory are known to economists, but they have failed to ameliorate them. Many extensions and modifications of the model have been proposed, yet none of them guarantees to remove the risks. It has been estimated that the deficiencies of such models account for up to 40 percent of the 1997 losses in derivatives trading, and it appears that in some cases traders’ rules of thumb do better than mathematically sophisticated models.”
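
For the curious, the formula itself is compact enough to sketch in a few lines of Python (the parameter values below are purely illustrative, not from any real market). The point to notice is that the market's entire fluctuation statistics get compressed into the single volatility parameter sigma, with the Gaussian assumption entering through the normal cumulative distribution:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal cumulative distribution -- the Gaussian assumption."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes fair price for a European call option.

    S: stock price today; K: strike price; T: years to expiry;
    r: risk-free interest rate; sigma: volatility. The assumption that
    price fluctuations are Gaussian enters through sigma and norm_cdf.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Illustrative numbers only: a one-year option on a 100-unit stock,
# struck at 105, with 5% interest and 20% annual volatility.
print(black_scholes_call(S=100.0, K=105.0, T=1.0, r=0.05, sigma=0.2))
```

Fatten the tails of the real distribution, as markets stubbornly do, and that single sigma no longer captures the risk – which is the nub of the criticism above.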

Just a little reminder that, say what you will about the ‘econophysicists’ who are among those working on this issue, there are some rather important lacunae remaining in economic theory.

Thursday, January 24, 2008

Scratchbuilt genomes
[Here’s the pre-edited version of my latest story for Nature’s online news. I discuss this work also in the BBC World Service’s Science in Action programme this week.]

By announcing the first chemical synthesis of a complete bacterial genome [1], scientists in the US have shown that the stage is now set for the creation of the first artificial organisms – something that looks likely to be achieved within the next year.

The genome of the pathogenic bacterium Mycoplasma genitalium, made in the laboratory by Hamilton Smith and his colleagues at the J. Craig Venter Institute in Rockville, Maryland, represents a more than tenfold increase in the length of the longest stretch of genetic material ever created by chemical means.

The complete genome of M. genitalium contains 582,970 of the fundamental building blocks of DNA, called nucleotide bases. Each of these was stitched in place by commercial DNA-synthesis companies according to the Venter Institute’s specifications, to make 101 separate segments of the genome. The scientists then used biotechnological methods to combine these fragments into a single genome within cells of E. coli bacteria and yeast.

M. genitalium has the smallest genome of any organism that can grow and replicate independently. (Viruses have smaller genomes, some of which have been synthesized before, but they cannot replicate on their own.) Its DNA contains the instructions for making just 485 proteins, which orchestrate the cells’ functions.

This genetic concision makes M. genitalium a candidate for the basis of a ‘minimal organism’, which would be stripped down further to contain the bare minimum of genes needed to survive. The Venter Institute team, which includes the institute’s founder, genomics pioneer Craig Venter, believe that around 100 of the bacterium’s genes could be jettisoned – but they don’t know which 100 these are.

The way to test that would be to make versions of the M. genitalium genome that lack some genes, and see whether they still provide a viable ‘operating system’ for the organism. Such an approach would also require a method for replacing a cell’s existing genome with a new, redesigned one. But Venter and his colleagues achieved such a ‘genome transplant’ last year, between two bacteria closely related to M. genitalium [2].

Their current synthesis of the entire M. genitalium genome thus provides the other part of the puzzle. Chemical synthesis of DNA involves sequentially adding one of the four nucleotide bases to a growing chain in a specified sequence. The Venter Institute team farmed out this task to the companies Blue Heron Technology, DNA2.0 and GENEART.

But joining up all half a million or so bases in a single, continuous process is beyond current techniques. That was why the researchers ordered 101 fragments or ‘cassettes’, each of about 5,000-7,000 bases and with overlapping end sequences that enabled them to be stuck together by enzymes.
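
The principle of joining by overlaps is simple enough to show in a toy sketch (my own illustration, with invented sequences; the real joins were made by enzymes in vitro and, in the final steps, by recombination inside cells, not by software):

```python
def merge_overlapping(fragments, min_overlap=5):
    """Greedy toy assembly: repeatedly extend the sequence with any fragment
    whose start overlaps the current end. Assumes the first fragment is the
    left-hand end. Real cassette assembly uses enzymes and yeast's own
    recombination machinery -- not string matching."""
    assembled = fragments[0]
    remaining = list(fragments[1:])
    while remaining:
        for frag in remaining:
            # Longest suffix of the assembly that matches a prefix of frag.
            for k in range(min(len(assembled), len(frag)), min_overlap - 1, -1):
                if assembled.endswith(frag[:k]):
                    assembled += frag[k:]
                    remaining.remove(frag)
                    break
            else:
                continue   # no usable overlap; try the next fragment
            break          # merged one; rescan the remaining fragments
        else:
            raise ValueError("no remaining fragment overlaps the assembly")
    return assembled

# Three invented 'cassettes' with five-base overlaps at their joins.
cassettes = ["ATGCGTACGTT", "ACGTTGGCATC", "GCATCCGTTAA"]
print(merge_overlapping(cassettes))   # ATGCGTACGTTGGCATCCGTTAA
```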

To distinguish the synthetic DNA from the genomes of ‘wild’ M. genitalium, Smith and colleagues included ‘watermark’ sequences: stretches of DNA carrying a kind of barcode that marks them as artificial. These watermarks must be inserted at sites in the genome known to tolerate such additions without genetic function being impaired.

The researchers made one further change to the natural genome: they altered one gene in a way that was known to render M. genitalium unable to stick to mammalian cells. This ensured that cells carrying the artificial genome could not act as pathogens.

Using DNA-linking enzymes within E. coli cells, the cassettes were stitched together into strands that each contained a quarter of the total genome. But, for reasons that the researchers don’t yet understand, the final assembly of these quarter-genomes into a single circular strand didn’t run smoothly in the bacteria. So the team transferred them to cells of brewers’ yeast, in which the last steps of the assembly were carried out.

Smith and colleagues then extracted these synthetic genomes from the yeast cells, and used enzymes to chew up the yeast’s own DNA. They read out the sequences of the remaining DNA to check that these matched those of wild M. genitalium (apart from the deliberate modifications such as watermarks).

The ultimate evidence that the synthetic genomes are authentic copies, however, will be to show that cells can be ‘booted up’ when loaded with this genetic material. “This is the next step and we are working on it”, says Smith.

Advances in DNA synthesis might ultimately make this laborious stitching of fragments unnecessary, but Dorene Farnham, director of sales and marketing at Blue Heron in Bothell, Washington, stresses that that’s not a foregone conclusion. “The difficulty is not about length”, she says. “There are many other factors that go into getting these synthetic genes to survive in cells.”

Venter’s team hopes that a stripped-down version of the M. genitalium genome might serve as a general-purpose chassis to which all sorts of useful designer functions might be added – for example, genes that turn the bacteria into biological factories for making carbon-based ‘green’ fuels or hydrogen when fed with nutrients.

The next step towards that goal is to build potential minimal genomes from scratch, transplant them into Mycoplasma, and see if they will keep the cells alive. “We plan to start removing putative ‘non-essential’ genes and test whether we get viable transplants”, says Smith.

References

1. Gibson, D. G. et al. Science Express doi:10.1126/science.1151721 (2008).
2. Lartigue, C. et al. Science 317, 632 (2007).

Tuesday, January 22, 2008


Differences in the shower
[This is how my latest article for Nature’s Muse column started out. Check out also a couple of interesting papers in the latest issue of Phys. Rev. E: a study of how ‘spies’ affect the minority game, and a look at the value of diversity in promoting cooperation in the spatial Prisoner’s Dilemma.]


A company sets out to hire a 20-person team to solve a tricky problem, and has a thousand applicants to choose from. So they set them all a test related to the problem in question. Should they then pick the 20 people who do best? That sounds like a no-brainer, but there are situations in which it would be better to hire 20 of the applicants at random.

This scenario was presented four years ago by social scientists Lu Hong of Loyola University Chicago and Scott Page of the University of Michigan [1] as an illustration of the value of diversity in human groups. It shows that many different minds are sometimes more effective than many ‘expert’ minds. The drawback of having a team composed of the ‘best’ problem-solvers is that they are likely all to think in the same way, and so are less likely to come up with versatile, flexible solutions. “Diversity”, said Hong and Page, “trumps ability.”
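
The experiment is easy to re-create in miniature. The sketch below follows the spirit of Hong and Page's model, with parameters of my own choosing: the ‘problem’ is a ring of random values, an agent is an ordered triple of step sizes it tries when hill-climbing, and a relay team of the top-scoring individuals is pitted against a randomly drawn, more diverse team.

```python
import random
from itertools import permutations

random.seed(1)
N = 2000                                      # points on the ring
landscape = [random.uniform(0, 100) for _ in range(N)]
STARTS = random.sample(range(N), 200)         # starting points for evaluation

# An agent is an ordered triple of distinct step sizes drawn from 1..12.
agents = list(permutations(range(1, 13), 3))

def climb(agent, start):
    """Hill-climb from `start`, trying the agent's steps in order until stuck."""
    x = start
    improved = True
    while improved:
        improved = False
        for step in agent:
            if landscape[(x + step) % N] > landscape[x]:
                x = (x + step) % N
                improved = True
    return x

def solo_score(agent):
    """Average value an agent reaches, over the evaluation starts."""
    return sum(landscape[climb(agent, s)] for s in STARTS) / len(STARTS)

def team_score(team):
    """Agents work in relay from each start until none can improve further."""
    total = 0.0
    for s in STARTS:
        x, moved = s, True
        while moved:
            moved = False
            for agent in team:
                nx = climb(agent, x)
                if landscape[nx] > landscape[x]:
                    x, moved = nx, True
        total += landscape[x]
    return total / len(STARTS)

ranked = sorted(agents, key=solo_score, reverse=True)
print("team of 10 best individuals:", round(team_score(ranked[:10]), 2))
print("team of 10 random agents:   ", round(team_score(random.sample(agents, 10)), 2))
```

On most runs the random team wins: the top individual performers share similar step sizes, so they tend to get trapped on the same local peaks, while a motley team can dig one another out.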

Page believes that studies like this, which present mathematical models of decision-making, show that initiatives to encourage cultural diversity in social, academic and institutional settings are not just exercises in politically correct posturing. To Page, they are ways of making the most of the social capital that human difference offers.

There are evolutionary analogues to this. Genetic diversity in a population confers robustness in the face of a changing environment, whereas a group of almost identical ‘optimally adapted’ organisms can come to grief when the wind shifts. Similarly, sexual reproduction provides healthy variety in our own genomes, while in ecology monocultures are notoriously fragile in the face of new threats.

But it’s possible to overplay the diversity card. Expert opinion, literary and artistic canons, and indeed the whole concept of ‘excellence’ have become fashionable whipping boys to the extent that some, particularly in the humanities, worry about standards and judgement vanishing in a deluge of relativist mediocrity. Of course it is important to recognize that diversity does not have to mean ‘anything goes’ (a range of artistic styles does not preclude discrimination of good from bad within each of them) – but that’s often what sceptics of the value of ‘diversity’ fear.

This is why models like that of Hong and Page bring some valuable precision to the questions of what diversity is and why and when it matters. That issue now receives a further dose of enlightenment from a study that looks, at face value, to be absurdly whimsical.

Economist Christina Matzke and physicist Damien Challet have devised a mathematical model of (as they put it) “taking a shower in youth hostels” [2]. Among the hazards of budget travel, few are more vexing than this. If you try to have a shower at the same time as everyone else, it’s a devil of a job adjusting the taps to get the right water temperature.

The problem, say Matzke and Challet, is that in the primitive plumbing systems of typical hostels, one person changing their shower temperature settings alters the balance of hot and cold water for everyone else too. They in turn try to retune the settings to their own comfort, with the result that the shower temperatures fluctuate wildly between scalding and freezing. Under what conditions, they ask, can everyone find a mutually acceptable compromise, rather than all furiously altering their shower controls while cursing the other guests?

So far, so amusing. But is this really such a (excuse me) burning issue? Challet’s previous work provides some kind of answer to that. Several years ago, he and physicist Yi-Cheng Zhang devised the so-called minority game as a model for human decision-making [3]. They took their lead from economist Brian Arthur, who was in the habit of frequenting a bar called El Farol in the town of Santa Fe where he worked [4]. The bar hosted an Irish music night on Thursdays which was often so popular that the place would be too crowded for comfort.

Noting this, some El Farol clients began staying away on Irish nights. That was great for those who did turn up – but once word got round that things were more comfortable, overcrowding resumed. In other words, attendance would fluctuate wildly, and the aim was to go only on those nights when you figured others would stay away.

But how do you know which nights those are? You don’t, of course. Human nature, however, prompts us to think we can guess. Maybe low attendance one week means high attendance the next? Or if it’s been busy three weeks in a row, the next is sure to be quiet? The fact is that there’s no ‘best’ strategy – it depends on what strategies others use.

The point of the El Farol problem, which Challet and Zhang generalized, is to be in the minority: to stay away when most others go, and vice versa. The reason why this is not a trivial issue is that the minority game serves as a proxy for many social situations, from lane-changing in heavy traffic to choosing your holiday destination. It is especially relevant in economics: in a buyer’s market, for example, it pays to be a seller. It’s unlikely that anyone decided whether or not to go to El Farol by plotting graphs and statistics, but market traders certainly do so, hoping to tease out trends that will enable them to make the best decisions. Each has a preferred strategy.

The maths of the minority game looks at how such strategies affect one another, how they evolve and how the ‘agents’ playing the game learn from experience. I once played it in an interactive lecture in which push-button voting devices were distributed to the audience, who were asked to decide in each round whether to be in group A or group B. (The one person who succeeded in being in the minority in every one of several rounds said that his strategy was to switch his vote from one group to the other “one round later than it seemed common sense to do so.”)

So what about the role of diversity? Challet’s work showed that the more mixed the strategies of decision-making are, the more reliably the game settles down to the optimal average size of the majority and minority groups. In other words, attendance at El Farol doesn’t in that case fluctuate so much from one week to the next, and is usually close to capacity.
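
The mechanics are simple enough to state in code. Here is a minimal generic implementation of the standard game (my own sketch, not Challet and Zhang's code): each agent holds two fixed strategies – lookup tables mapping the recent history of outcomes to a choice – scores them by how often they would have put it in the minority, and plays the better one each round.

```python
import random

random.seed(0)
N_AGENTS, MEMORY, N_STRATEGIES, ROUNDS = 101, 3, 2, 500

def new_strategy():
    # A lookup table: for each of the 2^MEMORY possible recent histories,
    # a fixed choice of side A (+1) or side B (-1).
    return [random.choice((1, -1)) for _ in range(2 ** MEMORY)]

agents = [[new_strategy() for _ in range(N_STRATEGIES)] for _ in range(N_AGENTS)]
scores = [[0] * N_STRATEGIES for _ in range(N_AGENTS)]
history = 0          # the last MEMORY winning sides, packed into an integer

attendance = []
for t in range(ROUNDS):
    # Each agent consults its highest-scoring strategy for the current history.
    choices = []
    for agent, sc in zip(agents, scores):
        best = max(range(N_STRATEGIES), key=lambda i: sc[i])
        choices.append(agent[best][history])
    total = sum(choices)                 # > 0 means side A was the majority
    minority = -1 if total > 0 else 1
    attendance.append(total)
    # Credit every strategy that would have placed its owner in the minority.
    for agent, sc in zip(agents, scores):
        for i, strategy in enumerate(agent):
            if strategy[history] == minority:
                sc[i] += 1
    # Append the winning side to the rolling history of the last MEMORY rounds.
    history = ((history << 1) | (1 if minority == 1 else 0)) % (2 ** MEMORY)

# With 101 agents, a perfectly efficient crowd would keep |total| near 1.
print("mean imbalance:", sum(abs(a) for a in attendance) / ROUNDS)
```

The quantity to watch is the attendance imbalance: Challet's result, mentioned above, is that a richer mix of strategies keeps it closer to the efficient fifty-fifty split.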

The Shower Temperature Problem is very different, because in principle the ideal situation, where everyone gets closest to their preferred temperature, happens when they all set their taps in the same way – that is, they all use the same strategy. However, this solution is unstable – the slightest deviation, caused by one person trying to tweak the shower settings to get a bit closer to the ideal, sets off wild oscillations in temperature as others respond.

In contrast, when there is a diversity of strategies – agents use a range of tap settings in an attempt to hit the desired water temperature – then these oscillations are suppressed and the system converges more reliably to an acceptable temperature for all. But there’s a price paid for that stability. While overall the water temperature doesn’t fluctuate strongly, individuals may find they have to settle for a temperature further from the ideal value than they would in the case of identical shower settings.
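
Their model is richer than anything I can reproduce here, but the flavour of the coupling can be caught in a toy caricature (entirely my own construction, with made-up parameters; the real analysis is in the preprint [2]). Everyone draws on one hot-water line, so the more hot water the group takes, the lower the pressure each shower gets, and each guest nudges their tap toward a preferred temperature at their own speed.

```python
import random

random.seed(2)
N, T_HOT, T_COLD, T_PREF = 10, 60.0, 10.0, 38.0
COLD_FLOW = 1.0   # everyone's cold tap is fixed open; only hot taps are adjusted

def temperatures(taps):
    """Everyone's shower temperature, given everyone's hot-tap settings.

    The shared supply is the crux: the more hot water the group draws,
    the lower the line pressure, so each guest's hot flow -- and hence
    temperature -- depends on all the other taps too.
    """
    pressure = 1.0 / (1.0 + 0.05 * sum(taps))
    return [(T_HOT * u * pressure + T_COLD * COLD_FLOW) /
            (u * pressure + COLD_FLOW) for u in taps]

def run(gains, steps=400):
    """Each round, every guest nudges their tap toward T_PREF at their own speed.
    Returns the average discomfort |T - T_PREF| over the final 100 rounds."""
    taps = [1.0] * N
    discomfort = []
    for step in range(steps):
        temps = temperatures(taps)
        if step >= steps - 100:
            discomfort.append(sum(abs(T - T_PREF) for T in temps) / N)
        taps = [max(0.0, u + g * (T_PREF - T))
                for u, g, T in zip(taps, gains, temps)]
    return sum(discomfort) / len(discomfort)

print("identical, gentle (g=0.3):    ", round(run([0.3] * N), 3))
print("identical, aggressive (g=0.7):", round(run([0.7] * N), 3))
print("a spread of speeds:           ", round(run([random.uniform(0.1, 0.7) for _ in range(N)]), 3))
```

In this caricature, identical agents who all adjust gently settle down, while identical agents who all adjust aggressively lock into the scalding-freezing cycle; readers can turn the knobs themselves to see how a spread of speeds changes the picture, which is the effect Matzke and Challet analyse properly.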

This problem is representative of any in which many agents try to obtain equal amounts of some fixed quantity that is not necessarily abundant enough to satisfy them all completely – factories or homes competing for energy in a power grid, perhaps. But more generally, the model of Matzke and Challet shows how diversity in decision-making may fundamentally alter the collective outcome. That may sound obvious, but don’t count on it. Conventional economic models have for decades stubbornly insisted on making all their agents identical. They are ‘representative’ – one size fits all – and they follow a single ‘optimal’ strategy that maximizes their gains.

There’s a good reason for this assumption: the models are very hard to solve otherwise. But there’s little point in having a tractable model if it doesn’t come close to describing reality. The static view of a ‘representative’ agent leads to the prediction of an ‘equilibrium’ economy, rather like the equilibrium shower system of Matzke and Challet’s homogeneous agents. Anyone contemplating the current world economy knows all too well what a myth this equilibrium is – and how real-world behaviour is sure to depend on the complex mix of beliefs that economic agents hold about the future and how to deal with it.

More generally, the Shower Temperature Problem offers another example of how difference and diversity can improve the outcome of group decisions. Encouraging diversity is not then about being liberal or tolerant (although it tends to require both) but about being rational. Perhaps the deeper challenge for human societies, and the one that underpins current debates about multiculturalism, is how to cope with differences not in problem-solving strategies but in the question of what the problems are and what the desired solutions should be.


References

1. Hong, L. & Page, S. E. Proc. Natl Acad. Sci. USA 101, 16385 (2004).
2. Matzke, C. & Challet, D. preprint http://www.arxiv.org/abs/0801.1573 (2008).
3. Challet, D. & Zhang, Y.-C. Physica A 246, 407 (1997).
4. Arthur, W. B. Am. Econ. Assoc. Papers & Proc. 84, 406 (1994).

Wednesday, January 16, 2008

Groups, glaciation and the pox
[This is the pre-edited version of my Lab Report column for the February issue of Prospect.]

Blaming America for the woes of the world is an old European habit. Barely three decades after Columbus’s crew returned from the New World, a Spanish doctor claimed they brought back the new disease that was haunting Europe: syphilis, so named in the 1530s by the Italian Girolamo Fracastoro. All social strata were afflicted: kings, cardinals and popes suffered alongside soldiers, although sexual promiscuity was so common that the venereal nature of the disease took time to emerge. Treatments were fierce and of limited value: inhalations of mercury vapour had side-effects as bad as the symptoms, while only the rich could afford medicines made from guaiac wood imported from the West Indies.

But it became fashionable during the twentieth century to doubt the New World origin of syphilis: perhaps the disease was a dormant European one that acquired new virulence during the Renaissance? Certainly, the bacterial spirochete Treponema pallidum (subspecies pallidum) that causes syphilis is closely related to other ‘treponemal’ pathogens, such as that which causes yaws in hot, humid regions like the Congo and Indonesia. Most of these diseases leave marks on the skeleton and so can be identified in human remains. They are seen widely in New World populations dating back thousands of years, but reported cases of syphilis-like lesions in Old World remains before Columbus have been ambiguous.

Now a team of scientists in Atlanta, Georgia, has analysed the genetics of many different strains of treponemal bacteria to construct an evolutionary tree that not only identifies how venereal syphilis emerged but shows where in the world its nearest genetic relatives are found. This kind of ‘molecular phylogenetics’, which builds family trees not from a traditional comparison of morphologies but by comparing gene sequences, has revolutionized palaeontology, and it works as well for viruses and bacteria as it does for hominids and dinosaurs. The upshot is that T. pallidum subsp. pallidum is more closely related to a New World subspecies than it is to Old World strains. In other words, it looks as though the syphilis spirochete indeed mutated from an American progenitor. That doesn’t quite imply that Columbus’s sailors brought syphilis back with them, however – it’s also possible that they carried a non-venereal form that quickly mutated into the sexually transmitted disease on its arrival. Given that syphilis was reported within two years of Columbus’s return to Spain, that would have been a quick change.

****

Having helped to bury the notion of group selection in the 1970s, Harvard biologist E. O. Wilson is now attempting to resurrect it. He has a tough job on his hands; most evolutionary biologists have firmly rejected this explanation for altruism, and Richard Dawkins has called Wilson’s new support for group selection a ‘weird infatuation’ that is ‘unfortunate in a biologist who is so justly influential.’

The argument is all about why we are (occasionally) nice to one another, rather than battling, red in tooth and claw, for limited resources. The old view of group selection said simply that survival prospects may improve if organisms act collectively rather than individually. Human altruism, with its framework of moral and social imperatives, is murky territory for such questions, but cooperation is common enough in the wild, particularly in eusocial insects such as ants and bees. Since the mid-twentieth century such behaviour has been explained not by vague group selection but via kin selection: by helping those genetically related to us, we propagate our genes. It is summed up in the famous formulation of J. B. S. Haldane that he would lay down his life for two brothers or eight cousins – a statement of the average genetic overlaps that make the sacrifice worthwhile. Game theory now offers versions of altruism that don’t demand kinship – cooperation of non-relatives can also be to mutual benefit – but kin selection remains the dominant explanation for eusociality.

That was the position advocated by Wilson in his 1975 book Sociobiology. In a forthcoming book The Superorganism, and a recent paper, he now reverses this claim and says that kin selection may not be all that important. What matters, he says, is that a population possess genes that predispose the organisms to flexible behavioural choices, permitting a switch from competitive to cooperative action in ‘one single leap’ when the circumstances make it potentially beneficial.

Wilson cites a lack of direct, quantitative evidence for kin selection, although others have disputed that criticism. In the end the devil is in the details – specifically in the maths of how much genetic common ground a group needs to make self-sacrifice pay – and it’s not clear that either camp yet has the numbers to make an airtight case.

****

The discovery of ice sheets half the size of today’s Antarctic ice cap during the ‘super-greenhouse’ climate of the Turonian stage, 93.5-89.3 million years ago, seems to imply that we need not fret about polar melting today. With atmospheric greenhouse gas levels 3-10 times higher than now, ocean temperatures around 5 °C warmer, and crocodiles swimming in the Arctic, the Turonian sounds like the IPCC’s worst nightmare. But it’s not at all straightforward to extrapolate between then and now. More intense circulation of water in the atmosphere could have left thick glaciers on the high mountains and plateaus of Antarctica even in those torrid times. In any event, a rather particular set of climatic circumstances seems to have been at play – the glaciation did not persist throughout the warm Cretaceous period. And it is always important to remember that, with climate, where you end up tends to depend on where you started from.

Friday, January 11, 2008

In praise of wrong ideas
[This is my latest column for Chemistry World, and explains what I got up to on Monday night. I’m not sure when the series is being broadcast – this was the first to be recorded. It’s an odd format, and I’m not entirely sure it works, or at least, not yet. Along with Jonathan Miller, my fellow guest was mathematician Marcus du Sautoy. Jonathan chose to submit the Nottingham Alabasters (look it up – interesting stuff), and Marcus the odd symmetry group called the Monster.]


I can’t say that I’d expected to find myself defending phlogiston, let alone in front of a comedy audience. But I wasn’t angling for laughs. I was aiming to secure a place for phlogiston in the ‘Museum of Curiosities’, an institution that exists only in the ethereal realm of BBC’s Radio 4. In a forthcoming series of the same name, panellists submit an item of their choice to the museum, explaining why it deserves a place. The show will have laughs – the curator is the British comedian Bill Bailey – but actually it needn’t. The real aim is to spark discussion of the issues that each guest’s choice raises. For phlogiston, there are plenty of those.

What struck me most during the recording was how strongly the old historiographic image of phlogiston still seems to hold sway. In 1930 the chemical popularizer Bernard Jaffe wrote that phlogiston, which he attributed to the alchemist Johann Becher, ‘nearly destroyed the progress of chemistry’, while in 1957 the science historian John Read called it a ‘theory of unreason.’ Many of us doubtless encountered phlogiston in derisive terms during our education, which is perhaps why it is forgivable that the programme’s producers wanted to know of ‘other scientific theories from the past that look silly today’. But even the esteemed science communicator, the medical doctor Jonathan Miller (who was one of my co-panellists), spoke of the ‘drivel’ of the alchemists and suggested that natural philosophers of earlier times got things like this wrong because they ‘didn’t think smartly enough’.

I feel this isn’t the right way to think about phlogiston. Yes, it had serious problems even from the outset, but that was true of just about any fundamental chemical theory of the time, Lavoisier’s oxygen included. Phlogiston also had a lot going for it, not least because it unified a wealth of observations and phenomena. Arguably it was the first overarching chemical theory with a recognizably modern character, even if the debts to ancient and alchemical theories of the elements remained clear.

Phlogiston was in fact named in 1718 by Georg Stahl, professor of medicine at the University of Halle, who derived it from the Greek phlogistos, to set on fire. But Stahl took the notion from Becher’s terra pinguis or fatty earth, one of three types of ‘earth’ that Becher designated as ‘principles’ responsible for mineral formation. Becher’s ‘earths’ were themselves a restatement of the alchemical principles sulphur, mercury and salt proposed as the components of all things by Paracelsus. Terra pinguis was the principle of combustibility – it was abundant in oily or sulphurous substances.

The idea, then, was that phlogiston made things burn. When wood or coal was ignited, its phlogiston was lost to the air, which was why its mass decreased. Combustion ceased when the air was saturated with phlogiston. One key problem, noted but not explained by Stahl, was that metals gain rather than lose weight when combusted. This is often a source of modern scorn, for it led later scientists to contorted explanations such as that phlogiston buoyed up heavier substances, or (sometimes) had negative weight. Those claims prompted Lavoisier ultimately to denounce phlogiston as a ‘veritable Proteus’ that ‘adapts itself to all the explanations for which it may be required.’ But actually it was not always clear whether metals did gain weight when burnt, for the powerful lenses used for heating them could sublimate the oxides.

In any event, phlogiston explained not only combustion but also acidity, respiration, chemical reactivity, and the growth and properties of plants. As Oliver Morton points out in his new book Eating the Sun (Fourth Estate), the Scottish geologist James Hutton invoked a ‘phlogiston cycle’ analogous to the carbon and energy cycles of modern earth scientists, in which phlogiston was a kind of fixed sunlight taken up by plants, some of which is buried in the deep earth as coal and which creates a ‘constant fire in the mineral regions’ that powers volcanism.

So phlogiston was an astonishingly fertile idea. The problem was not that it was plain wrong, but that it was so nearly right – it was the mirror image of the oxygen theory – that it could not easily be discredited. And indeed, that didn’t happen as cleanly and abruptly as implied in conventional accounts of the Chemical Revolution – as Hasok Chang at University College London has explained, phlogistonists persisted well into the nineteenth century, and even eminent figures such as Humphry Davy were sceptical of Lavoisier.

This is one of the reasons I chose phlogiston for the museum – it reminds us of our ahistorical tendency to clean up science history in retrospect, and to divide people facilely into progressives and conservatives. It also shows that the opposite of a good idea can also be a good idea. And it reminds us that science is not about being right but being a little less wrong. I’m sure that one day the dark matter and dark energy of cosmologists will look like phlogiston does now: not silly ideas, but ones that we needed until something better came along.

Thursday, December 20, 2007

Wise words from the Vatican?
[I’m no fan of the pope. And what I don’t say below (because it would simply be cut out as irrelevant) is that his message for World Peace Day includes some typically hateful homophobic stuff in regard to families. AIDS-related contraception and stem-cell research are just two of the areas in which the papacy has put twisted dogma before human well-being. But I feel we should always be ready to give credit where it is due. And so here, in my latest Muse article for Nature News, I try to do so.]

When Cardinal Joseph Ratzinger became Pope Benedict XVI in 2005, many both inside and outside the Christian world feared that the Catholic church was set on a course of hardline conservatism. But in two recent addresses, Benedict XVI shows intriguing signs that he is keen to engage with the technological age, and that he has in some ways a surprisingly thoughtful position on the dialogue between faith and reason.

In his second Encyclical Letter, released on 30 November, the pope tackles the question of how Christian thought should respond to technological change. And in a message for World Peace Day on 1 January 2008, he considers the immense challenges posed by climate change.

Let’s take the latter first, since it is in some ways more straightforward. Benedict XVI’s comments on the environment have already been interpreted in some quarters as “a surprise attack on climate change prophets of doom” who are motivated by “dubious ideology.” According to the British newspaper the Daily Mail, the pope “suggested that fears over man-made emissions melting the ice caps and causing a wave of unprecedented disasters were nothing more than scare-mongering.”

Now, non-British readers may not be aware that the Daily Mail is itself a stalwart bastion of “dubious ideology”, but this claim plumbs new depths even by the newspaper’s impressive standards of distortion and fabrication. Here’s what the pope actually said: “Humanity today is rightly concerned about the ecological balance of tomorrow. It is important for assessments in this regard to be carried out prudently, in dialogue with experts and people of wisdom, uninhibited by ideological pressure to draw hasty conclusions, and above all with the aim of reaching agreement on a model of sustainable development capable of ensuring the well-being of all while respecting environmental balances.”

Hands up those who disagree with this proposition. I thought not. When you consider that the idea that human activities might affect climate has been around for over a century, and the possibility that this might now be occurring has received serious study for more than two decades – during which time the climate science community has resolutely resisted pressing any alarm buttons until they could draw as informed a conclusion as possible – you might just begin to doubt it is they, and their current consensus that human-induced climate change seems real, who are in the pope’s sights when he talks of “hasty conclusions”. Might the charge be levelled, on the contrary, at those who pounce on every new suggestion that there are other factors in climate, such as solar fluctuations, as evidence of a global scientific conspiracy to pin the blame on humanity? I leave you to judge.

The pope’s statement is simply the one that any reasonable person would make. He calls for investment in “sufficient resources in the search for alternative sources of energy and for greater energy efficiency”, for technologically advanced countries to “reassess the high levels of consumption due to the present model of development”, and for humankind not to “selfishly consider nature to be at the complete disposal of our own interests.” Doesn’t that just sound a little like the environmentalists whom the pope is said by some to be lambasting? Admittedly, one might ask whether the Judaeo-Christian notion of human stewardship of the earth has contributed to our current sense of entitlement over its resources; but that’s another debate.

So far, then, good on Benedict XVI. And there’s more: “One must acknowledge with regret the growing number of states engaged in the arms race: even some developing nations allot a significant proportion of their scant domestic product to the purchase of weapons. The responsibility for this baneful commerce is not limited: the countries of the industrially developed world profit immensely from the sale of arms… it is truly necessary for all persons of good will to come together to reach concrete agreements aimed at an effective demilitarization, especially in the area of nuclear arms.” Goodness me, it’s almost enough to make me consider going to Christmas Mass.

The Encyclical Letter, meanwhile (entitled “On Christian Hope”), bites into some more meaty and difficult pies. On one level, its message might sound rather prosaic, however valid: science cannot provide society with a moral compass. The pope is particularly critical of Francis Bacon’s vision of a technological utopia: he and his followers “were wrong to believe that man would be redeemed through science.” Even committed technophiles ought to find that unobjectionable.

Without doubt, Benedict XVI says, progress (for which we might here read science) “offers new possibilities for good, but it also opens up appalling possibilities for evil.” He cites social philosopher Theodor Adorno’s remark that one view of ‘progress’ leads us from the sling to the atom bomb.

More generally, the pope argues that there can be no ready-made prescription for utopia: “Anyone who promises the better world that is guaranteed to last for ever is making a false promise.” Of course, one can see what is coming next: “it is not science that redeems man: man is redeemed by love” – which the pope believes may come only through faith in God. Only with that last step, however, does he enter into his own closed system of reference, in which our own moral lack can be filled only from a divine source.

More interesting is the accompanying remark that “in the field of ethical awareness and moral decision-making… decisions can never simply be made for us in advance by others… in fundamental decisions, every person and every generation is a new beginning.” Now, like most spiritual statements this one is open to interpretation, but surely one way of reading it is to conclude that, when technologies such as stem cell science throw up new ethical questions, we won’t find the answers already written down in any book. The papacy has not been noted for its enlightened attitude to that particular issue, but we might draw a small bit of encouragement from the suggestion that such developments require fresh thinking rather than a knee-jerk response based on outmoded dogma.

Most surprising of all (though I don’t claim to have my finger on the pulse of theological fashion) is the pope’s apparent assertion that the ‘eternal life’ promised biblically is not to be taken literally. He seems concerned, and with good reason, that many people now regard this as a threat rather than a promise: “do we really want this – to live eternally?” he asks. In this regard, Benedict XVI seems to possess rather more wisdom than the rich people who look forward to resurrection of their frozen heads. ‘Eternal life’, he says, is merely a metaphor for an authentic and happy life lived on earth.

True, this then makes no acknowledgement of how badly generations of earlier churchmen have misled their flock. And it seems strange that a pope who believes this interpretation can at the same time feel so evidently fondly towards St Paul and St Augustine, who between them made earthly life a deservedly miserable existence endured by sinners, and towards the Cistercian leader Bernard of Clairvaux, who in consequence pronounced that “We are wounded as soon as we come into this world, while we live in it, and when we leave it; from the soles of our feet to the top of our heads, nothing is healthy in us.”

Perhaps this is one of the many subtle points of theology I don’t understand. All the same, the suggestion that we’d better look for our happiness on an earth managed responsibly, rather than deferring it to some heavenly eternity, gives me a little hope that faith and reason are not set on inevitably divergent paths.

Friday, December 14, 2007


Can Aladdin’s carpet fly?
[Here’s a seasonal news story I just wrote for Nature, which will appear (in edited form) in the last issue of the year. I gather, incidentally, that the original text of the ‘Arabian Nights’ doesn’t specify that the carpet flies as such, but only that anyone who sits on it is transported instantly to other lands.]


A team of scientists in the US and France has the perfect offering for the pantomime season: instructions for making a flying carpet.

The magical device may owe more to Walt Disney than to The Arabian Nights, but it is not pure fantasy, according to Lakshminarayanan Mahadevan of Harvard University, Mederic Argentina of the University of Nice, and Jan Skotheim of the Rockefeller University in New York. They have studied the aerodynamics of a flexible, rippling sheet moving through a fluid, and find that it should be possible to make one that will stay aloft in air, propelled by actively powered undulations much as a marine ray swims through water [1].

No such carpet is going to ferry humans around, though. The researchers say that, to stay aloft in air, a sheet would need to be typically about 10 cm long, 0.1 mm thick, and vibrate at about 10 Hz with an amplitude of about 0.25 mm. Making a heavier carpet ‘fly’ is not absolutely forbidden by physics, but it would require such a powerful engine to drive vibrations that the researchers say “our computations and scaling laws suggest it will remain in the magical, mystical and virtual realm.”
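
For a sense of scale, here is a hedged back-of-envelope check (the sheet density is my own assumption, not a figure from the paper): a hovering sheet must generate a lift pressure equal to its weight per unit area, which for these dimensions comes out at around one pascal.

```python
import math

# Back-of-envelope numbers for the proposed centimetre-scale 'carpet'.
rho = 1000.0     # sheet density, kg/m^3 (an assumed, plastic-like value)
t = 1e-4         # thickness: 0.1 mm
g = 9.81         # gravitational acceleration, m/s^2

# A hovering sheet must generate a lift pressure equal to its weight per area.
print(f"required lift pressure: {rho * t * g:.2f} Pa")   # about 1 Pa

# Peak speed of the rippling surface for the quoted frequency and amplitude.
f, a = 10.0, 2.5e-4      # 10 Hz, 0.25 mm
print(f"peak ripple speed: {2 * math.pi * f * a * 1000:.1f} mm/s")
```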

The key to a magic carpet is to create uplift as the ripples push against the viscous fluid. If the sheet is close to a horizontal surface, like a piece of foil settling down onto the floor, then such movements can create a high pressure in the gap between the sheet and the floor. “As waves propagate along the flexible foil, they generate a fluid flow that leads to a pressure that lifts the foil, roughly balancing its weight”, Mahadevan explains.

But as well as lifting it, ripples can drive the foil forward – as any respectable magic carpet would require. “If the waves propagate from one edge”, says Mahadevan, “this causes the foil to tilt ever so slightly and then move in one direction, towards the edge that is slightly higher. Fluid is then squeezed from this end to the other, causing the sheet to progress like a submarine ray.”

To generate a big thrust and thus a high speed, the carpet has to undulate in big ripples, comparable to the carpet's total size. This makes for a very bumpy ride. “If you want a smooth ride, you can generate a lot of small ripples”, says Mahadevan. “But you’ll be slower.” He points out that this is not so different from any other mode of transport, where speed tends to induce bumpiness while moving more smoothly means moving slower.

"It's cute, it's charming", says physicist Tom Witten at the University of Chicago. He adds that the result is not very surprising, but says "the main interest is that someone would think to pose this problem."

Could artificial flying mini-carpets really be made? Spontaneous undulating motions have already been demonstrated in ‘smart’ polymers suspended in fluids, which can be made to swell or shrink in response to external signals. In September, a team also at Harvard University described flexible sheets of plastic coated with cultured rat muscle cells that flex in response to electrical signals and could exhibit swimming movements [2]. “In air, it should be possible to make moving sheets – a kind of micro hovercraft – with very light materials, or with very powerful engines”, says Mahadevan.

Mahadevan has developed something of a speciality in looking for unusual effects from everyday physics – his previous papers have included a study of the ‘Cheerios effect’, where small floating rings (like the breakfast cereal) stick together through surface tension, and an analysis of the sawtooth shape made by ripping open envelopes with a finger.

“I think the most interesting questions are the ones that everyone has wondered about, usually idly”, he says. “I think that is what it means to be an applied mathematician – it is our responsibility to build mathematical tools and models to help explain and rationalize what we all see.”

References

1. Argentina, M. et al., Phys. Rev. Lett. 99, 224503 (2007).
2. Feinberg, A. W. et al., Science 317, 1366-1370 (2007).

Thursday, December 13, 2007

Surfers and stem cells
[This is the pre-edited version of my Lab Report column for the January issue of Prospect.]

Just when you thought that the Dancing Wu Li Masters and the Tao of Physics had finally been left in the 1970s, along comes a surfer living on the Hawaiian island of Maui who claims to have a simple theory of everything which shows that the universe is an ‘exceptionally beautiful shape’. Garrett Lisi has a physics PhD but no university affiliation, and lists his three most important things as physics, love and surfing – “and no, those aren’t in order.”

But Lisi is no semi-mystic drawing charming but ultimately unedifying analogies. He is being taken seriously by the theoretical physics community, and has been invited to the high-powered Perimeter Institute in Waterloo, Canada, where leading physicist Lee Smolin has called his work “fabulous.”

Rather fabulously, the theory is almost comprehensible, at least by the standards of modern fundamental physics. Lisi himself admits that, in comparison to string theory, the main contender for a theory of everything, he uses only “baby mathematics.” That’s not to say it’s easy, though.

A theory of everything must unify the theory of general relativity, which describes gravity and the structure of spacetime on large scales, with quantum theory, which describes how fundamental particles behave at the subatomic scale. To put it another way, gravity must be mixed into the so-called Standard Model of particle physics, which explains the interactions between all known fundamental particles – quarks, electrons, photons and so forth.

Physicists typically attempt unification by using symmetry. To put it crudely, suppose there are two particles that look the same except that they spin in opposite directions. These can be ‘unified’ into a single particle by appreciating that they can be interconverted by reflection in a mirror – a symmetry operation.

The idea is that the proliferation of particles and forces in today’s universe happened in a series of ‘symmetry-breaking’ steps, just as lowering a square’s symmetry to rectangular creates two distinct pairs of sides from four identical ones. This is already known to be true of some forces and particles, but not all of them.

Lisi claims that the primordial symmetry is a pattern called E8, known to mathematicians for over a century but fully understood only recently; it is rather like a multi-dimensional polyhedron with 248 ‘corners’. He has shown that all the known particles, plus descriptions of gravity, can be mapped onto the corners of E8. So a bit of it looks like the Standard Model, while a bit looks like gravity and spacetime. Twenty of the ‘corners’ remain empty, corresponding to hypothetical particles not yet known: the E8 model thus predicts their existence. It’s rather like the way nineteenth-century chemists found a pattern that brought coherence and order to the chemical elements – the periodic table – while noting that it had gaps, predicting elements that were later found.

Is E8 really the answer to everything? Physicists are reserving judgement, for Lisi’s paper, which is not yet peer-reviewed or published, is just a sketch – not a theory, and barely even a model. Mathematical physicist Peter Woit is unsure about the whole approach, saying that playing with symmetry just defers the question of what breaks it to make the world we know. But the trick worked before, in the early 1960s, when Murray Gell-Mann predicted a new particle by mapping a group of known ones onto a symmetry group called SU(3).

Lisi’s surfer-dude persona is fun, but so what, really? The real point is that his suggestion invigorates a field that, wandering in the thickets of string theory, sorely needs it.

*****

Stem-cell researchers in Shoukhrat Mitalipov’s team at the Oregon Health and Science University might be forgiven a little chagrin. No sooner had they reported the breakthrough that has eluded the field for years than they were trumped by two reports seeming to offer an even more attractive way of making human stem cells. Having sung the praises of Mitalipov’s achievement, Ian Wilmut, the University of Edinburgh cloning pioneer who created Dolly the sheep, announced that he was ditching that approach in favour of the new one.

Stem cells are the all-purpose cells present in the very early stages of embryo growth that can develop into just about any type of specialized tissue cells. The ‘traditional’ strategy for making them with DNA matched to the eventual recipient involves stripping the genetic material from an unfertilized egg and replacing it with donor DNA, and then prompting the egg to grow into a blastocyst, the initial stage of an embryo, from which stem cells can be extracted. This is called somatic cell nuclear transfer (SCNT), and is the method used in animal cloning. It works for sheep, dogs and mice, but there had previously been no success for humans or other primates.

On 14 November last year, Mitalipov and colleagues reported stem cells made by SCNT from rhesus macaques that could develop into other cell types. But a week later, teams based at the universities of Kyoto and Wisconsin-Madison independently reported the creation of human stem cells from ordinary skin cells, by treating them with proteins that reprogrammed them. In effect, the proteins switch the gene circuits from a ‘skin cell’ to a ‘stem cell’ setting. This reversal of normal developmental pathways is extraordinary.

The two teams used different cocktails of proteins to do the reprogramming – the Wisconsin team managed to avoid an agent that carries a cancer risk – showing that there is some scope for optimising the mix. Best of all, the method avoids the creation and destruction of embryos that has dogged the ethics of stem-cell research. But Mitalipov insists that starting with eggs is still best, and he has now started collaborating with a team in Newcastle licensed to work with human embryos. After years of frustrating effort, suddenly all options seem open.

Wednesday, December 12, 2007

Money for old rope

… except without the money. At no extra work to myself, I appear in a couple of recent books:
The Public Image of Chemistry, eds J. Schummer, B. Bensaude-Vincent & B. Van Tiggelen (World Scientific, 2007). This is a kind of proceedings volume of a conference of (almost) the same name in 2004, supplemented by contributions from a session at the 5th International Conference on the History of Chemistry in 2005. There’s lots of interesting stuff in it. It contains my paper ‘Chemistry and Power in Recent American Fiction’, which was published previously in the journal Hyle.
Futures from Nature, edited by my friend Henry Gee and published by Tor in January 2008. This is a collection of 100 of the short sci-fi stories published in Nature in recent years, and includes a contribution (I won’t say a short story, more of a pastiche) by one Theo von Hohenheim, who sounds vaguely familiar. Buy it here.

And while I’m at it, I recorded today a review of the year in science for the BBC World Service’s Science in Action. Don’t know when it is being broadcast… but before the year is out, clearly.

And while I'm at it at it, I have a piece in the latest issue of Seed on why RNA is the new DNA...

Sunday, December 09, 2007

We’re only after your money

There is a very sour little piece in this Saturday’s Guardian from Wendy Cope on copyright. First of all, I should acknowledge a few things:
1. Cope is right to say that a poem is much more likely to get copied (either digitally or on paper) and downloaded than an entire book – in that sense, poets are especially vulnerable to copyright violations.
2. It’s mostly damned hard making a living as a writer, and perhaps especially so as a poet, so some sensitivity to potential earnings lost seems reasonable.

But it seems rather sad to see a writer of any sort so bitterly possessive about their words. To read Cope’s piece, one might imagine that she sits scribbling away resentfully, thinking each time she finishes a poem, ‘Now, get out there and earn your keep, you little sod.’ Now, to be honest, my rather limited experience of Cope’s work tallies rather well with the notion that bitterness is one of her prime motivations, but this piece seemed so jealous of every last penny potentially denied her that one wonders why she doesn’t just throw in the towel and become a plumber. Indeed, it seems to me that she doesn’t even truly understand why people read or buy poetry. Why, if anyone genuinely loved her poems, would they be content to download a few from the web, and then – well, then what? File the printouts? Poetry lovers must be among the most bookish people in the world – they surely relish having the books on their shelves, rather than just scanning their eyes briefly over a piece of downloaded text and then binning it.

‘You want to read my poems? Then buy the book’, is Cope’s crabby refrain. Does she pull her volumes off the shelves of public libraries, I wonder? What is particularly dispiriting about this little rant is that it gives no sense of writing being about wanting to share with people ideas, images, thoughts and stories – and recognizing that this will never happen solely through the medium of books sold – but that it is instead about creating ‘word product’ that you buggers must pay for.

No source of income is so minor or incidental that its possible loss is not begrudged. Other people reading your poems at festivals is no good, because you might not get your little commission for it. (You get paid just for standing up and reading out old words? What the hell are you complaining about?) Another thing I find odd, although perhaps it just shows that things work differently in the poetry world, is that Cope is so covetous of every last book sale because of its financial rewards. In non-fiction at least, if you’re the kind of writer who gets a substantial part of your income from royalties, as opposed to pocketing a modest advance that might with great luck be paid off in ten years’ time, then you must be selling so many books that you shouldn’t need the supplement of £1.20 for a book sale that comes from someone’s refusal to copy one of your poems and give it to a friend.

But what caps it all – and indeed reveals the pathology of Cope’s obsession – is her anger and regret that all those possible royalties are going to be lost when you’re dead. “I sometimes feel a bit annoyed by the prospect of people making money out of my poems when I’m too dead to spend it”, she moans. Well personally, Wendy, if someone keeps my words alive when I’m not, I’ll be over the bloody moon, and I don’t give a damn what they make from doing so.

Thursday, December 06, 2007

Beyond recycling
[This is my Materials Witness column for the January 2008 issue of Nature Materials.]

It is surely ironic that global warming and environmental degradation now pose serious risks at a time when industry and technology are cleaner than at any previous stage of the industrial age. Admittedly, that may not be globally true, but in principle we can manufacture products and generate energy more efficiently and with less pollution than ever before. So why the problem?

Partly, the answer is obvious: cleaner technologies struggle to keep pace with increased industrial activity as populations and economies grow. And green methodologies are typically costly, so aren’t universally available. But the equation is still more complex than that. For example, cars can be more fuel-efficient, less polluting and cheaper. But consumers who save money on fuel tend to spend it elsewhere: they drive more, say, or they spend it on holiday air flights. And cheap cars mean more cars. There is an ‘environmental rebound effect’ to such savings, counteracting the gains.

This is just one way in which ‘green’ manufacturing – using fewer materials and environmentally friendly processing, recycling wastes, and making products themselves recyclable or biodegradable – may fall short of its goal of making the world cleaner. All of these things are surely valuable, indeed essential, in making economic growth sustainable. But the problem goes beyond how things are made, to the issue of how they are used. We need to look not just at production, but at consumption.

One of the initiatives here is the so-called Product-Service System (PSS): a combination of product design and manufacture with the supply of related consumer services that has the potential to give consumers greater utility while reducing the ecological footprint. That might sound like marketing jargon, but it’s a tangible concept of proven value, enacted for example in formalized car-sharing schemes, leasing of temporary furnished office space, biological pest management services, and polystyrene recycling. It’s not mere philanthropy either: there’s a profit incentive too.

One of the key benefits of a PSS approach is that it might offer a way of simply making less stuff. You don’t need to be an eco-warrior to be shocked at the senseless excesses of current manufacturing. A splendid example of an alternative model is offered by a team in Sweden, who have outlined plans for a baby-pram leasing and remanufacturing scheme (O. Mont et al., J. Cleaner Prod. 14, 1509; 2006). Since baby prams generally last for much longer than they are needed (per child), why not lease one instead of buying it? If the infrastructure exists for repairing minor wear and tear, every customer gets an ‘as new’ product, and no prams end up on the waste tip in a near-pristine state.

Developing countries are often adept at informal schemes like this already: little gets thrown away there. But if implemented all the way from the product design stage, it is much more than recycling. What remains is to break our current cult of ‘product ownership’. Prams seem as good a place to start as any.

Thursday, November 29, 2007

Why 'Never Let Me Go' isn't really a 'science novel'

I have just finished reading Kazuo Ishiguro’s Never Let Me Go. What a strange book. First, there’s the tone – purposely amateurish writing (there can’t be any doubt, given his earlier books, that this is intentional), which creates an odd sense of flatness. As the Telegraph’s reviewer put it, “There is no aesthetic thrill to be had from the sentences – except that of a writer getting the desired dreary effect exactly right.” It’s a testament to Ishiguro that his control of this voice never slips, and that the story remains compelling in spite of the deliberately clumsy prose. That’s probably a far harder trick to pull off than it seems. Second, there are the trademark bits of childlike quasi-surrealism, where he develops an idea that seems utterly implausible yet is presented so deadpan that you start to think “Is he serious about this?” – for instance, Tommy’s theory about the ‘art gallery’. This sort of dreamlike riffing was put to wonderful effect in The Unconsoled, which was a dream world from start to finish. It jarred a little at the end of When We Were Orphans, because it didn’t quite fit with the rest of the book – but was still strangely compelling. Here it seems to be an expression of the enforced naivety of the characters, but is disorientating when it becomes so utterly a part of the world that Kathy H depicts.

But my biggest concern is that the plot just doesn’t seem plausible enough to sustain a strong critique of cloning and related biotechnologies. Is that even the intention? I’m still unsure, and so were several reviewers. The situation of the donor children is so deeply at odds with any current ethical perspective on cloning and reproductive technologies that one can’t really imagine how a world could have got this way. After all, in other respects it seems to be a world just like ours. It is not even set in some dystopian future, but has a feeling of being more like the 1980s. The ‘normal’ humans aren’t cold-hearted dysfunctionals – they seem pretty much like ordinary people, except that they accept this donor business largely without question – whereas nothing like this would be tolerated or even contemplated for an instant today. It feels as though Ishiguro just hasn’t worked hard enough to make an alternative reality that can support the terrible scenario he portrays. As a result, whatever broader point he is making loses its force. What we are left with is a well-told tale of friendship and tragedy experienced by sympathetic characters put in a situation that couldn’t arise under the social conditions presented. I enjoyed the book, but I can’t see how it can add much to the cloning debate. Perhaps, as one reviewer suggested, this is all just an allegory about mortality – in which case it works rather well, but is somewhat perverse.

I’ve just taken a look at M John Harrison’s review in the Guardian, which puts these same points extremely well:
“Inevitably, it being set in an alternate Britain, in an alternate 1990s, this novel will be described as science fiction. But there's no science here. How are the clones kept alive once they've begun "donating"? Who can afford this kind of medicine, in a society the author depicts as no richer, indeed perhaps less rich, than ours?

Ishiguro's refusal to consider questions such as these forces his story into a pure rhetorical space. You read by pawing constantly at the text, turning it over in your hands, looking for some vital seam or row of rivets. Precisely how naturalistic is it supposed to be? Precisely how parabolic? Receiving no answer, you're thrown back on the obvious explanation: the novel is about its own moral position on cloning. But that position has been visited before (one thinks immediately of Michael Marshall Smith's savage 1996 offering, Spares). There's nothing new here; there's nothing all that startling; and there certainly isn't anything to argue with. Who on earth could be "for" the exploitation of human beings in this way?

Ishiguro's contribution to the cloning debate turns out to be sleight of hand, eye candy, cover for his pathological need to be subtle… This extraordinary and, in the end, rather frighteningly clever novel isn't about cloning, or being a clone, at all. It's about why we don't explode, why we don't just wake up one day and go sobbing and crying down the street, kicking everything to pieces out of the raw, infuriating, completely personal sense of our lives never having been what they could have been.”

Monday, November 26, 2007

Listen out

Let me now be rather less coy about media appearances. This Wednesday night at 9 pm I am presenting Frontiers on BBC Radio 4, looking at digital medicine. This meant that I got to strap a ‘digital plaster’ to my chest which relayed my heartbeat to a remote monitor through a wireless link. I am apparently alive and well.

Salt-free Paxo

No one can reasonably expect Jeremy Paxman to have a fluent knowledge of all the subjects on which he has to ask sometimes remarkably difficult questions on University Challenge. But if the topic is chemistry, you’d better get it word-perfect, because he allows no latitude for interpretation. Tonight’s round had a moment that went something like this:
Paxman: “Which hydrated ferrous salt was once known as green vitriol?”
Hapless student: “Iron sulphate.”
Paxman: “No, it’s just sulphate.”
I’ve seen precisely the same thing happen before. How come someone doesn’t pick Paxo up on it? The fact is, contestants are advised that they can press their button to challenge if they think their answer was unfairly dismissed. The offending portion of the filming then gets snipped out. But I suspect no one ever does this – it’s just too intimidating to say to Paxo “I think you’ve got that wrong.”

Friday, November 23, 2007

War is not an exact science
[This is my latest muse column for news@nature.com]

General theories of why we go to war are interesting. But they'll never tell the whole story.

Why are we always fighting wars? That’s the kind of question expected from naïve peaceniks, to which historians will wearily reply “Well, it’s complicated.”

But according to a new paper by an international, interdisciplinary team, it isn’t that complicated. Their answer is: climate change. David Zhang of the University of Hong Kong and his colleagues show that, in a variety of geographical regions – Europe, China and the arid zones of the Northern Hemisphere – the frequency of war has fluctuated in step with major shifts in climate, particularly the Little Ice Age from the mid-fifteenth until the mid-nineteenth century [1].

Cold spells like this, they say, significantly reduced agricultural production, and as a result food prices soared, food became scarce – and nations went to war, whether to seize more land or as a result of famine-induced mass migration.

On the one hand, this claim might seem unexceptional, even trivial: food shortages heighten social tensions. On the other hand, it is outrageous: wars, it says, have little to do with ideology, political ambition or sheer greed, but are driven primarily by the weather.

Take, for example, the seventeenth century, when Europe was torn apart by strife. The Thirty Years War alone, between 1618 and 1648, killed around a third of the population in the German states. Look at the history books and you’ll find this to be either a religious conflict resulting from the Reformation of Martin Luther and Jean Calvin, or a political power struggle between the Habsburg dynasty and their rivals. Well, forget all that, Zhang and his colleagues seem to be saying: it’s all because we were suffering the frigid depths of the Little Ice Age.

I expect historians to respond to this sort of thing with lofty disdain. You can see their point. The analysis stops at 1900, and so says nothing about the two most lethal wars in history – which, as the researchers imply, took place in an age when economic, technological and institutional changes had reduced the impact of agricultural production on world affairs. Can you really claim to have anything like a ‘theory of war’ if it neglects the global conflicts of the twentieth century?

And historians will rightly say that grand synoptic theories of history are of little use to them. Clearly, not all wars are about food. Similarly, not all food shortages lead to war. There is, in historical terms, an equally compelling case to be made that famine leads to social unrest and potential civil war, not to the conflict of nation states. But more generally, the point of history (say most historians) is to explain why particular events happened, not why generic social forces sometimes lead to generic consequences. There is a warranted scepticism of the kind of thinking that draws casual parallels between, say, Napoleon’s imperialism and current US foreign policy.

Yet some of this resistance to grand historical theorizing may be merely a backlash. In particular, it stands in opposition to the Marxist position popular among historians around the middle of the last century, which has now fallen out of fashion. And the Marxist vision of a ‘scientific’ socio-political theory was itself a product of nineteenth-century mechanistic positivism, as prevalent among conservatives like Leo Tolstoy and liberals like John Stuart Mill as it was in the revolutionary socialism of Marx and Engels. It was Tolstoy who, in War and Peace, invoked Newtonian imagery in asking “What is the force that moves nations?”

Much of this can be traced to the famous proposal of Thomas Robert Malthus, outlined in his Essay on the Principle of Population (1826), that population growth cannot continue for ever on an exponential rise because it eventually falls foul of the necessarily slower rise in the means of production – basically, the food runs out. That gloomy vision was also an inspiration to Charles Darwin, who saw that in the wild this competition for limited resources must lead to natural selection.
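
Malthus’s arithmetic is simple enough to make concrete. Here is a minimal sketch (in Python, with purely illustrative numbers of my own, not figures from Malthus or from Zhang et al.) of geometric population growth outrunning an arithmetic rise in food production:

```python
# Malthus in miniature: a population growing geometrically must eventually
# outstrip a food supply that grows only arithmetically. All numbers here
# are illustrative, chosen simply to show the crossover.

pop, food = 1.0, 2.0      # initial population and food supply (arbitrary units)
pop_rate = 0.03           # population grows by 3% per year (geometric)
food_gain = 0.04          # food supply rises by a fixed increment (arithmetic)

for year in range(1, 1000):
    pop *= 1 + pop_rate
    food += food_gain
    if food < pop:        # supply falls below demand: the 'Malthusian check'
        print(f"Shortfall after {year} years: "
              f"population {pop:.2f}, food {food:.2f}")
        break
```

However generous the starting surplus or the annual increment, changing either only delays the crossover; the exponential wins eventually.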

Zhang and colleagues state explicitly that their findings provide a partial vindication of Malthus. They point out that Malthus did not fully account for the economic pressures and sheer ingenuity that could boost agricultural production when population growth demanded it, but they say that such improvements have their limits, which were exceeded when climate cooling lowered crop yields in Europe and China.

For all their apparently impressive correlation indices, however, it is probably fair to say that responses to Zhang et al.’s thesis will be a matter of taste. In the end, an awful lot seems to hinge on the coincidence of minimal agricultural production (and a maximum in food prices), low average temperatures, and a peak in the number of wars (and fatalities) during the early to mid-seventeenth century in both Europe and China. The rest of the curves are suggestive, but don’t obviously create a compelling historical narrative. At best, they provoke a challenge: if one cannot now show a clear link between climate/agriculture and, say, the Napoleonic wars from the available historical records themselves, historians might be forgiven for questioning the value of this kind of statistical analysis.

Yet what if the study helps us to understand, even a little, what causes war? That is itself an age-old question – Zhang and colleagues identify it, for example, in Thucydides’ History of the Peloponnesian War in the fifth century BC. Nor are they by any means the first in modern times to look for an overarching theory of war. The issue motivated the physicist Lewis Fry Richardson, between about 1920 and 1950, to plot size against frequency for many recent wars (including the two world wars), and thereby to identify the kind of power-law scaling that has led to the notion that wars are like landslides, where small disturbances can trigger events of any scale [2-4]. Other studies have focused on the cyclic nature of war and peace, as for example in ecologist Peter Turchin’s so-called cliodynamics, which attempts to develop a theory of the expansion and collapse of empires [5,6].
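
Richardson’s scaling claim is easy to illustrate numerically. Below is a minimal sketch using synthetic data only – ‘war sizes’ drawn from an assumed power-law distribution, not Richardson’s casualty figures – showing how the scaling exponent of such heavy-tailed data can be recovered by the standard maximum-likelihood estimator:

```python
# Illustration of Richardson-style power-law scaling. Synthetic data:
# 'war sizes' (e.g. fatalities per conflict) are drawn from a Pareto
# (power-law) distribution with a known exponent, which is then
# recovered by maximum likelihood.
import numpy as np

rng = np.random.default_rng(0)
alpha_true, x_min, n = 2.0, 1000.0, 5000   # exponent, minimum size, sample count

# Inverse-transform sampling from P(X > x) = (x / x_min)^(1 - alpha)
u = rng.random(n)
sizes = x_min * u ** (-1.0 / (alpha_true - 1.0))

# Maximum-likelihood (Hill) estimate of the exponent
alpha_hat = 1.0 + n / np.sum(np.log(sizes / x_min))
print(f"estimated exponent: {alpha_hat:.2f} (true value {alpha_true})")
```

Applying the same estimator to real conflict data is, of course, where the arguments begin: a power law describes the statistics of war sizes without saying anything about the causes of any particular war.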

Perhaps most prominent in this arena is an international project called the Correlates of War, which since 1963 has been attempting to understand and quantify the factors that create (and mitigate) international conflict, and thus to further the “scientific knowledge about war”. Its data sets have been used, for example, in quantitative studies of how warring nations form alliances [7], and such studies argue rather forcefully against any notion of collapsing the causative factors onto a single axis such as climate.

What, finally, do Zhang and colleagues have to tell us about future conflict in an anthropogenically warmed world? At face value, the study might seem to say little about that, given that it correlates war with cooling events. There is some reason to think that strong warming could be as detrimental to agriculture as strong cooling, but it’s not clear exactly how that would play out, especially in the face of both a more vigorous hydrological cycle and the possibility of more regional droughts. We already know that water availability will become a serious issue for agricultural production, but also that there’s a lot that can still be done to ameliorate that, for instance by improvements in irrigation efficiency.

We’d be wise to greet the provocative conclusions of Zhang et al. with neither naïve acceptance nor cynical dismissal. They do not amount to a theory of history, or of war, and it seems most unlikely that any such things exist. But their paper is at least a warning against a kind of fatalistic solipsism which assumes that all human conflicts are purely the result of human failings.

References

1. Zhang, D. D. et al. Proc. Natl Acad. Sci. USA doi:10.1073/pnas.0703073104 (2007).
2. Richardson, L. F. Statistics of Deadly Quarrels, eds Q. Wright and C. C. Lienau (Boxwood Press, Pittsburgh, 1960).
3. Nicholson, M. Brit. J. Polit. Sci. 29, 541-563 (1999).
4. Buchanan, M. Ubiquity (Phoenix, London, 2001).
5. Turchin, P. Historical Dynamics (Princeton University Press, 2003).
6. Turchin, P. War and Peace and War (Pi Press, 2005).
7. Axelrod, R. & Bennett, D. S. Brit. J. Polit. Sci. 23, 211-233 (1993).

Thursday, November 22, 2007

Schrödinger’s cat is not dead yet

[This is an article I’ve written for news@nature. One of the things I found most interesting was that Schrödinger didn’t set up his ‘cat’ thought experiment with a gun, but with an elaborate poisoning scheme. Johannes Kofler says “He puts a cat into a steel chamber and calls it "hell machine" (German: Höllenmaschine). Then there is a radioactive substance in such a tiny dose that within one hour one atom might decay but with same likelihood nothing decays. If an atom decays, a Geiger counter reacts. In this case this then triggers a small hammer which breaks a tiny flask with hydrocyanic acid which poisons the cat. Schrödinger is really very detailed in describing the situation.” There’s a translation of Schrödinger’s original paper here, but as Johannes says, the wonderful “hell machine” is simply translated as “device”, which is a bit feeble.]

Theory shows how quantum weirdness may still be going on at the large scale.

Since the particles that make up the world obey the rules of quantum theory, allowing them to do counter-intuitive things such as being in several different places or states at once, why don’t we see this sort of bizarre behaviour in the world around us? The explanation commonly offered in physics textbooks is that quantum effects apply only at very small scales, and get smoothed away at the everyday scales we can perceive.

But that’s not so, say two physicists in Austria. They claim that we’d be experiencing quantum weirdness all the time – balls that don’t follow definite paths, say, or objects ‘tunnelling’ out of sealed containers – if only we had sharper powers of perception.

Johannes Kofler and Caslav Brukner of the University of Vienna and the Institute of Quantum Optics and Quantum Information, also in Vienna, say that the emergence from the quantum world of the ‘classical’ laws of physics, deduced by the likes of Galileo and Newton, is an issue not of size but of measurement [1]. If we could make every measurement with as much precision as we liked, there would be no classical world at all, they say.

Killing the cat

Austrian physicist Erwin Schrödinger famously illustrated the apparent conflict between the quantum and classical descriptions of the world. He imagined a situation where a cat was trapped in a box with a small flask of poison that would be broken if a quantum particle was in one state, and not broken if the particle was in another.

Quantum theory states that such a particle can exist in a superposition of both states until it is observed, at which point the quantum superposition ‘collapses’ into one state or the other. Schrödinger pointed out that this means that the cat is neither dead nor alive until someone opens the box to have a look – a seemingly absurd conclusion.

Physicists generally resolve this paradox through a process called decoherence, which happens when quantum particles interact with their environment. Decoherence destroys the delicately poised quantum state and leads to classical behaviour.

The more quantum particles there are in a system, the harder it is to prevent decoherence. So somewhere in the process of coupling a single quantum particle to a macroscopic object like a flask of poison, decoherence sets in and the superposition is destroyed. This means that Schrödinger’s cat is always unambiguously in a macroscopically ‘realistic’ state, either alive or dead, and not both at once.

But that’s not the whole story, say Kofler and Brukner. They think that although decoherence typically intervenes in practice, it need not do so in principle.

Bring back the cat

The fate of Schrödinger’s cat is an example of what in 1985 the physicists Anthony Leggett and Anupam Garg called macrorealism [2]. In a macrorealistic world, they said, objects are always in a single state and we can make measurements on them without altering that state. Our everyday world seems to obey these rules. According to the macrorealistic view, “there are no Schrödinger cats allowed”, says Kofler.

But Kofler and Brukner have proved that a quantum state can get as ‘large’ as you like, without conforming to macrorealism.

The two researchers consider a system akin to a magnetic compass needle placed in a magnetic field. In our classical world, the needle rotates around the direction of the field in a process called precession. That movement can be described by classical physics. But in the quantum world, there would be no smooth rotation – the needle could be in a superposition of different alignments, and would just jump instantaneously into a particular alignment once we tried to measure it.

So why don’t we see quantum jumps like this? The researchers show that it depends on the precision of measurement. If the measurements are a bit fuzzy, so that we can’t distinguish one quantum state from several other, similar ones, this smooths out the quantum oddities into a classical picture. Kofler and Brukner show that, once a degree of fuzziness is introduced into measured values, the quantum equations describing the observed objects turn into classical ones. This happens regardless of whether there is any decoherence caused by interaction with the environment.
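
To get a feel for how the fuzziness does the work here, consider a toy version of their compass-needle example – a sketch of my own, not Kofler and Brukner’s actual calculation. A large spin precesses about a field; measured sharply, its x-component scatters over several discrete quantum levels, but binned into slots much wider than the quantum spread, the outcomes pile into the single slot predicted by classical precession:

```python
# Toy illustration (my own, in the spirit of Kofler & Brukner): a large
# spin precessing about a magnetic field, measured sharply vs fuzzily.
import numpy as np

j = 20                                  # spin quantum number: 2j+1 = 41 levels
m = np.arange(-j, j + 1).astype(float)  # Jz eigenvalues (hbar = 1)

# Jx in the Jz eigenbasis: symmetric tridiagonal, with the standard
# ladder-operator matrix elements sqrt(j(j+1) - m(m+1)) / 2
c = np.sqrt(j * (j + 1) - m[:-1] * (m[:-1] + 1)) / 2
Jx = np.diag(c, 1) + np.diag(c, -1)
evals, evecs = np.linalg.eigh(Jx)       # the discrete Jx measurement outcomes

# Initial state: spin coherent state along +x (top eigenvector of Jx)
psi = evecs[:, np.argmax(evals)]

# Precession about z for time t: H = omega * Jz is diagonal in this basis
omega, t = 1.0, 0.6
psi_t = np.exp(-1j * omega * t * m) * psi

# Probability of each sharp Jx outcome at time t
probs = np.abs(evecs.conj().T @ psi_t) ** 2
mean = probs @ evals
print(f"classical prediction j*cos(wt): {j * np.cos(omega * t):+.2f}")
print(f"quantum mean <Jx>:              {mean:+.2f}")
print(f"spread over sharp outcomes:     {np.sqrt(probs @ evals**2 - mean**2):.2f}")

# Fuzzy measurement: bins much wider than the quantum spread (~ sqrt(j))
edges = np.arange(-j - 0.5, j + 10, 10.0)
coarse, _ = np.histogram(evals, bins=edges, weights=probs)
print("coarse-grained bin probabilities:", np.round(coarse, 3))
```

With bins wider than the quantum spread (which grows only as the square root of j), essentially all the weight lands in the one slot containing j cos(ωt); resolve the individual levels, and the scatter over discrete outcomes – the quantum signature – reappears.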

Having kittens

Kofler says that we should be able to see this transition between classical and quantum behaviour. The transition would be curious: classical behaviour would be punctuated by occasional quantum jumps, so that, say, the compass needle would mostly rotate smoothly, but sometimes jump instantaneously.

Seeing the transition for macroscopic objects like Schrödinger’s cat would require that we be able to distinguish an impractically large number of quantum states. For a ‘cat’ containing 10^20 quantum particles, say, we would need to be able to tell the difference between 10^10 states – just too many to be feasible.

But our experimental tools should already be good enough to look for this transition in much smaller ‘Schrödinger kittens’ consisting of many but not macroscopic numbers of particles, say Kofler and Brukner.

What, then, becomes of these kittens before the transition, while they are still in the quantum regime? Are they alive or dead? ‘We prefer to say that they are neither dead nor alive,’ say Kofler and Brukner, ‘but in a new state that has no counterpart in classical physics.’

References

1. Kofler, J. & Brukner, C. Phys. Rev. Lett. 99, 180403 (2007).
2. Leggett, A. & Garg, A. Phys. Rev. Lett. 54, 857 (1985).

Not natural?
[Here’s a book review I’ve written for Nature, which I put here because the discussion is not just about the book!]

The Artificial and the Natural: An Evolving Polarity
Ed. Bernadette Bensaude-Vincent and William R. Newman
MIT Press, Cambridge, MA, 2007

The topic of this book – how boundaries are drawn between natural and synthetic – has received too little serious attention, both in science and in society. Chemists are notoriously (and justifiably) touchy about descriptions of commercial products as ‘chemical-free’; but the usual response, which is to lament media or public ignorance, fails to recognize the complex history and sociology that lies behind preconceptions about chemical artifacts. Roald Hoffmann has written sensitively on this matter in The Same and Not the Same (Columbia University Press, 1995), and he contributes brief concluding remarks to this volume. But the issue is much broader, touching on areas ranging from stem-cell therapy and assisted conception to biomimetic engineering, synthetic biology, machine intelligence and ecosystem management.

It is not, in fact, an issue for the sciences alone. Arguably the distinction between nature and artifice is equally fraught in what we now call the fine arts – where again it tends to be swept under the carpet. While some modern artists, such as Richard Long and Andy Goldsworthy, address the matter head-on with their interventions in nature such as the production of artificial rainbows, much popular art criticism now imposes a contemporary view even on the Old Masters. Through this lens, Renaissance writer Giorgio Vasari’s astonishment that Leonardo’s painted dewdrops “looked more convincing than the real thing” appears a little childish, as though he has missed the point of art – for no one now believes that the artist’s job is to mimic nature as accurately as possible. Perhaps with good reason, but it is left to art historians to point out that there is nothing absolute about this view.

At the heart of the matter is the fact that ‘art’ has not always meant what it does today. Until the late Enlightenment, it simply referred to anything human-made, whether that be a sculpture or an engine. The panoply of mutated creatures described in Francis Bacon’s The New Atlantis (1627) was the product of ‘art’, and so were the metals generated in the alchemist’s laboratory. The equivalent word in ancient Greece was techne, the root of ‘technology’ of course, but in itself a term that embraced subtle shades of meaning, examined here in ancient medicine by Heinrich von Staden and in mechanics by Francis Wolff.

The critical issue was how this ‘art’ was related to ‘nature’, approximately identified with what Aristotle called physis. Can art produce things identical to those in nature, or only superficial imitations of them? (That latter belief left Plato rather dismissive of the visual arts.) Does art operate using the same principles as nature, or does it violate them? Alchemy was commonly deemed to operate simply by speeding up natural processes: metals ripened into gold sooner in the crucible than they did in the ground, while (al)chemical medicines accelerated natural healing. And while some considered ‘artificial’ things to be always inferior to their ‘natural’ equivalents, it was also widely held that art could exceed nature, bringing objects to a greater state of perfection, as Roger Bacon thought of alchemical gold.

The emphasis in The Artificial and the Natural is historical, ranging from Hippocrates to nylon. These motley essays are full of wonders and insights, but are ultimately frustrating too in their microcosmic way. There is no real synthesis on offer, no vision of how attitudes have evolved and fragmented. There are too many conspicuous absences for the book to represent an overview. One can hardly feel satisfied with such a survey in which Leonardo da Vinci is not even mentioned. It would have been nice to see some analysis of changing ideas about experimentation, the adoption of which was surely hindered by Aristotle’s doubts that ‘art’ (and thus laboratory manipulation) was capable of illuminating nature. Prejudices about experiments often went even further: even in the Renaissance one could feel free to disregard what they said if it conflicted with a priori ‘truths’ gleaned from nature, rather as Pythagoras advocated studying music “setting aside the judgement of the ears”. And it would have been fascinating to see how these issues were discussed in other cultures, particularly in technologically precocious China.

But most importantly, the discussion sorely lacks a contemporary perspective, except for Bernadette Bensaude-Vincent’s chapter on plastics and biomimetics. This debate is no historical curiosity, but urgently needs airing today. Legislation on trans-species embryology, reproductive technology, genome engineering and environmental protection is being drawn up based on what sometimes seems like little more than a handful of received wisdoms (some of them scriptural) moderated by conventional risk analysis. There is, with the possible exception of biodiversity discussions, almost no conceptual framework to act as a support and guide. All too often, what is considered ‘natural’ assumes an absurdly idealized view of nature that owes more to the delusions of Rousseau’s romanticism than to any historically informed perspective. By revealing how sophisticated, and yet how transitory, the distinctions have been in the past, this book is an appealingly erudite invitation to begin the conversation.

Sunday, November 18, 2007


Astronomy: the dim view

One Brian Robinson contributes to human understanding on the letters page of this Saturday’s Guardian with the following:
“Providing funding for astronomers does not in any way benefit the taxpayer. Astronomy may be interesting, but the only mouths that will get fed are the children of the astronomers. Astronomy is a hobby, and as such should not be subsidised by the Treasury any more than trainspotting.”
The invitation is to regard this as the sort of Thatcherite anti-intellectualism that is now ingrained in our political system. And indeed, the notion that anything state-funded must ‘benefit the taxpayer’ – specifically, by putting food in mouths – is depressing not only in its contempt for learning but also in its ignorance of how the much-vaunted ‘wealth creation’ in a technological society works.

But then you say, hang on a minute. Why astronomy, of all things? Why not theology, archaeology, philosophy, and all the arts other than the popular forms that are mass-marketable and exportable? And then you twig: ‘astronomy is a hobby’ – like trainspotting. This bloke thinks that professional astronomers are sitting round their telescopes saying ‘Look, I’ve just got a great view of Saturn’s rings!’ They are like the funny men in their sheds looking at Orion, only with much bigger telescopes (and sheds). In other words, Mr Robinson hasn’t the faintest notion of what astronomy is.

Now, I have some gripes with astronomers. It is not just my view, but seems to be objectively the case, that the field is sometimes narrowly incestuous and lacks the fecundity that comes from collaborating with people in other fields, with the result that its literature is often far more barren than it has any right to be, given what’s being studied here. And the astronomical definition of ‘metals’ is so scientifically illiterate that it should be banned without further ado, or else all other scientists should retaliate by calling anything in space that isn’t the Earth a ‘star’. But astronomy is not only one of the oldest and most profound of human intellectual endeavours; it also enriches our broader culture in countless ways.

The presence of Mr Robinson’s letter on the letters page, then, is not a piece of cheeky provocation, but an example of the nearly ubiquitous ignorance of science among letters-page editors. They simply didn’t see what he was driving at, and thus how laughable it is. It is truly amazing what idiocies can get into even the most august of places – the equivalent, often, of a reader writing in to say, oh I don’t know, that Winston Churchill was obviously a Kremlin spy or that Orwell wrote Cold Comfort Farm. Next we’ll be told that astronomers are obviously fakes because their horoscopes never come true.

Monday, November 12, 2007


Is this what writers’ studies really look like?

Here is another reason to love Russell Hoban (aside from his having written the totally wonderful Riddley Walker, and a lot of other great stuff too). It is a picture of his workplace, revealed in the Guardian’s series of ‘writers’ rooms’ this week. I love it. After endless shots of beautiful mahogany desks surrounded by elegant bookshelves and looking out onto greenery, like something from Home and Garden, here at last is a study that looks as though the writer works in it. It is the first one in the series that looks possibly even worse than mine.

The mystery is what all the other writers do. Sure, there may be little stacks of books being used for their latest project – but what about all the other ‘latest projects’? The papers printed out and unread for months? The bills unpaid (or paid and not filed)? The letters unanswered (or ditto)? The books that aren’t left out for any reason, other than that there is no other place to put them? The screwdrivers and sellotape and tissues and plastic bags and stuff I’d rather not even mention? What do these people do all day? These pictures seem to demand the image of a writer who, at the end of the day, stretches out his/her arms and says “Ah, now for a really good tidy-up”. That is where my powers of imagination fail me.

It all confirms that we simply do not deserve Russell Hoban.