Tuesday, August 17, 2010

Christmas is coming


I’m excited. Really. I have just discovered that my friend Mark Miodownik is going to deliver the Royal Institution Christmas Lectures this year. Mark is a materials scientist at King’s College (where, outrageously, the Materials Science Department is no more). He runs the wonderful Materials Library, where one can see and touch lots of very weird materials. I can think of no one better to fill Michael Faraday’s shoes for the Christmas Lectures, and I plan to be there.

Wednesday, August 04, 2010

More on the problem with economics


OK, my article on agent-based modelling of the economy is now out in the Economist – you might be able to get it here, but if firewalls prevent that then here, naturally, is the original thing. And I’m interested that the reader comments don’t seem by any means as averse to this sort of thing as I’d imagined regular economists would be. Encouraging. Some feel that the economy, or people, are too complex to be captured by any kind of modelling. I don’t believe there is any reason to think that (and some good evidence to the contrary), although it is surely right that we must keep all models in perspective. And we have to remember that social science is the hardest science of all.

*****************************

For economists, the most serious deficit of the credit crunch may be in credibility. Vocal critics such as Nassim Nicholas Taleb are demanding to know why, when they failed utterly to foresee the crisis – indeed, apparently endorsed the conditions that created it – we should have the slightest faith in their capacity to mend it. And the diametrically opposed views of professional economists on what the remedy should be scarcely command trust.

Yet there is little sign of discomfort or self-reflection in the citadel of orthodox economic theory. Much the same people, using much the same tools, are guiding economic policy after the crash as before it. Forecasting at the Federal Reserve, for example, is still being done using the so-called dynamic stochastic general equilibrium (DSGE) models that led one of its governors, Frederic Mishkin, to deliver an assessment of the downturn in the US housing market in summer 2007 that now looks grotesquely optimistic. The message seems to be ‘if you don’t fix it, it ain’t broke.’

Mainstream economics has always had its dissidents. But the seeds of change have never before found such fertile soil. Heavyweights such as Joseph Stiglitz and Paul Krugman are calling for radical rethinking. The Institute for New Economic Thinking (INET) in New York, which had its inaugural conference in April, boasts Stiglitz and Amartya Sen on its advisory board, and is bankrolled by George Soros. A hearing of the US House of Representatives Committee on Science and Technology in July called on distinguished witnesses such as Robert Solow to ‘build a science of economics for the real world’.

Critics tend to concur about what is wrong with the tools currently used for macroeconomic forecasting and policy – DSGE models were targeted in the House hearing, for example, while the INET has attacked many of the assumptions, including the efficient-market hypothesis and rational expectations, on which these models are predicated. But there is less agreement about what should replace the old techniques.

The hearing aimed to ‘question the wisdom of relying for national economic policy on a single, specific model when alternatives are available.’ One of the most promising and popular of these alternatives was on display at a workshop in Warrenton, Virginia at the end of June, funded by the US National Science Foundation and attended by a diverse bunch that included economists from the Fed and the Bank of England, social scientists, policy advisors and computer scientists. They explored the potential of so-called agent-based models (ABMs) of the economy to help us learn the lessons of the current financial crisis and perhaps to develop an early-warning system for anticipating the next one.  Better still, this non-traditional approach might offer prevention rather than cure: not the false promise of a crisis-free economy, but a way of identifying systemic vulnerabilities and mitigating their effects.

Agent-based modeling [1] does not assume that the economy can achieve a settled equilibrium. The modeler imposes no order or design on the economy from the top down, and unlike many traditional models, ABMs are not populated with ‘representative agents’: identical traders, firms or households whose individual behaviour mirrors the economy as a whole. Rather, an ABM uses a bottom-up approach which assigns particular behavioural rules to each agent. For example, some may believe that prices reflect fundamentals while others may rely on empirical observations of past price trends.

Crucially, agents’ behaviour may be determined (and altered) by direct interactions between them, whereas in conventional models interaction happens only indirectly through pricing. This feature of ABMs enables, for example, the copycat behaviour that leads to “herding” among investors. The agents may learn from experience or switch their strategies according to majority opinion. They can aggregate into institutional structures such as banks and firms. These things are very hard, sometimes impossible, to build into conventional models. But in an agent-based model one simply runs a computer simulation to see what emerges, free from any top-down assumptions. As economist Alan Kirman has put it, ABMs ‘provide an account of macro phenomena which are caused by interaction at the micro level but are no longer a blown-up version of that activity.’
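
To make the bottom-up logic concrete, here is a minimal, purely illustrative sketch in Python (not any of the models discussed in this article, and with invented parameter values) of a market populated by ‘fundamentalist’ and ‘chartist’ agents, with a dash of herding. The point is only that the price path emerges from the agents’ rules and interactions, with no equilibrium imposed from above.

```python
import random

# A minimal, purely illustrative agent-based market (not any of the models
# discussed in this article). Agents are either 'fundamentalists', who expect
# the price to revert to a fundamental value, or 'chartists', who extrapolate
# the most recent price move. A little herding -- agents copying the majority
# strategy -- is included. The price path emerges from these rules alone.

N_AGENTS = 200
FUNDAMENTAL = 100.0
STEPS = 500

random.seed(1)
strategies = ["fundamentalist" if random.random() < 0.5 else "chartist"
              for _ in range(N_AGENTS)]
prices = [FUNDAMENTAL, FUNDAMENTAL]

for _ in range(STEPS):
    last, prev = prices[-1], prices[-2]
    demand = 0.0
    for s in strategies:
        if s == "fundamentalist":
            demand += 0.05 * (FUNDAMENTAL - last)   # buy if undervalued
        else:
            demand += 0.5 * (last - prev)           # follow the trend
    prices.append(last + demand / N_AGENTS + random.gauss(0, 0.5))

    # Herding: a few randomly chosen agents adopt the current majority view.
    majority = max(set(strategies), key=strategies.count)
    for i in random.sample(range(N_AGENTS), 5):
        strategies[i] = majority

print("final price: %.2f (fundamental value: %.1f)" % (prices[-1], FUNDAMENTAL))
print("chartists remaining: %d of %d" % (strategies.count("chartist"), N_AGENTS))
```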

Agent-based models are not exactly an alternative to conventional approaches, but a generalization of them: just about any economic theory could be expressed as an ABM, including the DSGE models now used for forecasting by most central banks. While those models are also based on microeconomic foundations, they accept the traditional view that there exists some ideal equilibrium towards which all prices are drawn. That this is often approximately true is why DSGE models perform well enough in a business-as-usual economy.

But DSGE models are useless in a crisis, as even advocates such as Robert Lucas admit. Last year, Lucas responded in this magazine to the criticism that these theories had failed to foresee the credit crunch by saying that such events are inherently unpredictable. All that can be reasonably expected of economic models, Lucas implied, is that they work well in ‘normal’ times. Crashes must forever be anomalies where theory breaks down.

That’s true of DSGE models because their ‘dynamic stochastic’ element amounts to minor fluctuations around an equilibrium state. Yet there is no equilibrium during big market fluctuations such as crashes – one can say that DSGE models thus insist that such events never occur.

ABMs, in contrast, make no assumptions about the existence of efficient markets or general equilibrium. The markets that they generate are generally not in equilibrium at all but are more like a turbulent river or the weather system, subject to constant storms and seizures of all sizes. Big fluctuations and even crashes are an inherent feature.

That’s because ABMs contain feedback mechanisms that can potentially amplify small effects, such as the herding and panic that generates bubbles and crashes. In mathematical terms the models are nonlinear, meaning that effects need not be proportional to their causes. These nonlinearities are absent from DSGE models, but they were evidently central to the credit crunch.

For example, in Virginia Andrew Lo of MIT’s Laboratory for Financial Engineering presented a model of the US housing market, inspired by ABM approaches, which showed how a fateful conjunction of rising house prices, falling interest rates and easy access to refinancing created high systemic risk, amplifying the housing downturn into an awesome burden of debt [2]. And John Geanakoplos of Yale University explained how the leverage cycle in remortgaging – high leverage during booms, low during recessions – can bloom into instability like an out-of-control pendulum, unless carefully managed [3]. The web of interdependencies forged from the buck-passing of risk using complex derivatives may create the potential for propagating nonlinear instabilities analogous to those that crashed the power grid of the North American eastern seaboard in 2003, and these are precisely the kind of thing that ABMs are well suited to capturing. Sujit Kapadia of the Bank of England is attempting to uncover and model these network-based vulnerabilities in financial systems [4].

While all of these culprits have been fingered in the voluminous post-mortems of the current crisis, there has been barely any discussion of the way nonlinear feedbacks gave them such impact. As a result, the understanding on which any preventative regulation and ‘macroprudential’ strategies might be based is still thin.

Another of the key lessons of the crisis is the role of interactions between different sectors – housing and finance, say. While conventional macroeconomic models can incorporate these, ABMs might be better tailored to each specific sector – for example, including banks in financial markets, which DSGE models do not. In principle, ABMs can include as much of the economy as you like, with all the sector-specific structures and quirks. Indeed, the organizers of the Virginia workshop – physicist-turned-economist Doyne Farmer of the Santa Fe Institute in New Mexico and social scientist Robert Axtell of George Mason University in Virginia – wanted to explore the feasibility and utility of constructing an immense ABM of the entire global economy by ‘wiring’ many such modules together.

What might such an enterprise require in resources and expertise, and what might it hope to achieve? One vision is a real-time simulation, fed by masses of input data, that would operate rather like the traffic models now used for forecasting on the roads of Dallas and the North Rhine-Westphalia region. But it might be more realistic and useful to employ a suite of such models, in the manner of global climate simulations, which project various possible futures and thus give an aggregated forecast – and show how our actions, laws and institutions might influence it.

In either case, the models would need much more data on the activities of individuals, banks and companies than is currently available. Gathering such information will be one of the key tasks of the US Office of Financial Research instituted by the 2010 Dodd-Frank Act to reform Wall Street. While this plan has raised privacy fears, such data-gathering is no less essential for understanding the economy than are meteorological observations for understanding climate, or geological monitoring to anticipate earthquakes.

And although seismologists may never be able to make precise forecasts, it would be deplorable if they were to shrug and resign themselves to modelling just the regular, gradual movements of tectonic plates and faults. Instead they have developed methods for mapping the evolution of stress patterns, identifying areas at risk, and refining rough heuristics for hazard assessment. Why should the same not be done for the financial system? It won’t be cheap or easy. But to deny the very possibility merely to absolve the conventional models of their severe limitations is starting to look unforgivable.

References

1. B. LeBaron & L. Tesfatsion, Am. Econ. Rev. 98(2), 246-250 (2008).
2. A. E. Khandani, A. W. Lo & R. C. Merton, Working Paper, September 2009.
3. A. Fostel & J. Geanakoplos, Am. Econ. Rev. 98(4), 1211-1244 (2008).
4. P. Gai & S. Kapadia, Bank of England Working Paper 383 (2010).

Wednesday, July 28, 2010

A new kind of economics


This is the first of the pieces I've written on the back of a workshop that I attended at the end of June on agent-based modelling of the economy. It appears (in edited form) in the August issue of Prospect. I have also written on this for the Economist - will post that shortly. And I am writing on the more general issue of large-scale simulation of the economy and other social systems for New Scientist.

***************************************************

Critics of conventional economic theory have never had it so good. The credit crunch has left the theory embraced by most of the economic community a sitting duck. Using the equations of the most orthodox theoretical framework – so-called dynamic stochastic general equilibrium (DSGE) models – Federal Reserve governor Frederic Mishkin forecast in the summer of 2007 that the banking problems triggered by stagnation of the US housing market would be a minor blip. The story that unfolded subsequently, culminating in September 2008 in the near-collapse of the global financial market, seemed to represent the kind of falsification that would bury any theory in the natural sciences.

But it has not done so here, and probably will not. How come? The Nobel laureate Robert Lucas, who advocated the replacement of Keynesian economic models with DSGE models in the 1970s, has explained why: this theory is explicitly not designed to handle crashes, so of course it will not predict them. That’s not a shortcoming of the models, Lucas says, but a reflection of the stark reality that crashes are inherently unpredictable by this or any other theory. They are aberrations, lacunae in the laws of economics.

You can see his point. Retrospective claims to have foreseen the crisis amount to little more than valid but generalized concerns about the perils of prosperity propped up by easy credit, or of complex financial instruments whose risks are opaque even to those using them. No one forecast the timing, direction or severity of the crash – and how could they, given that the debts directly tied up in ‘toxic’ sub-prime mortgage defaults were relatively minor?

But this pessimistic position is under challenge. For Lucas is wrong; there are models of financial markets that do generate crashes. Fluctuations ranging from the quotidian to the catastrophic are an intrinsic feature of some models that dispense with the simplifying premises of DSGE and instead try to construct market behaviour from the bottom up. They create computer simulations of large numbers of ‘agents’ – individuals who trade with one another according to specified decision-making rules, while responding to each other’s decisions. These so-called agent-based models take advantage of the capacity of modern computers to simulate complex interactions between vast numbers of agents. The approach has already been used successfully to understand and predict traffic flow and pedestrian movements – here the agents (vehicles or people) are programmed to move to their destination at a preferred speed unless they must slow down or veer to avoid a collision – as well as to improve models of contagion in disease epidemics.
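
For a flavour of how simple such agent rules can be, here is a toy single-lane traffic model in the spirit of the classic Nagel-Schreckenberg cellular automaton (an illustrative stand-in, not the specific traffic or pedestrian models alluded to above): each car tries to reach a preferred speed, slows to avoid hitting the car ahead, and occasionally brakes at random, and traffic jams emerge from those local rules alone.

```python
import random

# Toy single-lane traffic in the spirit of the Nagel-Schreckenberg cellular
# automaton: cars accelerate towards a preferred speed, slow down to avoid
# the car ahead, and sometimes brake at random. Jams emerge by themselves.

ROAD_LENGTH = 100   # number of road cells (circular road)
N_CARS = 30
V_MAX = 5           # preferred speed, in cells per step
P_BRAKE = 0.3       # probability of random slowdown
STEPS = 50

random.seed(0)
positions = sorted(random.sample(range(ROAD_LENGTH), N_CARS))
speeds = [0] * N_CARS

for _ in range(STEPS):
    new_positions = []
    for i, (x, v) in enumerate(zip(positions, speeds)):
        # Gap to the car ahead (circular order is preserved: no overtaking).
        gap = (positions[(i + 1) % N_CARS] - x - 1) % ROAD_LENGTH
        v = min(v + 1, V_MAX)          # accelerate towards preferred speed
        v = min(v, gap)                # slow down to avoid a collision
        if v > 0 and random.random() < P_BRAKE:
            v -= 1                     # random braking
        speeds[i] = v
        new_positions.append((x + v) % ROAD_LENGTH)
    positions = new_positions

print("mean speed after %d steps: %.2f (preferred speed %d)"
      % (STEPS, sum(speeds) / N_CARS, V_MAX))
```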

A handful of economists, along with interlopers from the natural sciences, believe that agent-based models offer the best hope of understanding the economy in all its messy glory, rather than just the decorous aspects addressed by conventional theories. At a workshop in Virginia in June, I heard how ABMs might help us learn the lessons of the credit crunch, anticipate and guard against the next one, and perhaps even offer a working model of the entire economic system.

Some aspects of ABMs are so obviously an improvement on conventional economic theories that it seems bizarre to outsiders that they are still marginalized. Agents, like real traders, can behave in diverse ways. They can learn from experience. They are affected by each other’s actions, potentially leading to the herd behaviour that undoubtedly afflicts markets. ABMs, unlike DSGE models, can include institutions such as banks (a worrying omission, you might imagine, in models of financial markets). Some of these factors can be incorporated into orthodox theories, but not easily or transparently, and often they are not.

What upsets traditional economists most, however, is that ABMs are ‘non-equilibrium’ models, which means that they generally never settle into a steady state in which prices adjust to meet demand and markets ‘clear’, meaning that supply is perfectly matched to demand. Conventional economic thinking has, more or less since Adam Smith, assumed the reality of this platonic ideal, which is ruffled by external ‘shocks’ such as political events and policies. In its most simplistic form, this perfect market demands laissez-faire free trade, and is only hindered by regulation and intervention.

Even though orthodox theorists acknowledge that ‘market imperfections’ cause deviations from this ideal (market failures), that very terminology gives the game away. In ABMs, ‘imperfections’ and ‘failures’ are generally a natural, emergent feature of the more realistic ingredients of the models. This posits a totally different view of how the economy operates. For example, feedbacks such as herd-like trading behaviour can create bubbles in which commodity prices soar on a wave of optimism, and crashes when panic sweeps across the trading floor. It seems clear that such amplifying processes turned a downturn of the US housing market into a freezing of credit throughout the entire banking system.

What made the Virginia meeting, sponsored by the US National Science Foundation, unusual is that it was relatively heedless of these battle lines between conventional and alternative thinkers. Committed agent-based modellers mixed with researchers from the Federal Reserve, the Bank of England and the Rand Corporation, specialists in housing markets and policy advisers. The goal was both to unravel the lessons of the credit crunch and to discuss the feasibility of making immense ABMs with genuine predictive capability. That would be a formidable enterprise, requiring the collaboration of many different experts and probably costing tens of millions of dollars. Even with the resources, it would probably take at least five years to have a model up and running.

Once that would have seemed a lot to gamble. Now, with a bill from the crisis running to trillions (and the threat of more to come), to refuse this investment would border on the irresponsible. Could such a model predict the next crisis, though? That’s the wrong question. The aim – and there is surely no president, chancellor, or lending or investment bank CEO who does not now crave this – would be to identify where the systemic vulnerabilities lie, what regulations might mitigate them (and which would do the opposite), and whether early-warning systems could spot danger signs. We’ve done it for climate change. Does anyone now doubt that economic meltdown poses comparable risks and costs?

Monday, July 26, 2010

Darwin vs D'Arcy: a false dichotomy?


I’ve just been directed towards P. Z. Myers’ Pharyngula blog in which, during the course of a dissection of Fodor and Piattelli-Palmarini’s book What Darwin Got Wrong, Myers has the following to say about D’Arcy Thompson and On Growth and Form:

D’Arcy Wentworth Thompson was wrong.

Elegantly wrong, but still wrong. He just never grasped how much of genetics explained the mathematical beauty of biology, and it's a real shame — if he were alive today, I'm sure he'd be busily applying network theory to genetic interactions.

[Sorry, must stop you there. Not even Fodor and Piattelli-Palmarini called their book Darwin Was Wrong. I suspect they wanted to, but could not justify it even to themselves. D’Arcy Thompson’s book is over 1000 pages long. Is it all wrong? Simple answer: of course it is not. Take a look at this, for example. I know; this is simply rhetoric. It’s just that I still believe it matters to find the right words, rather than sound bites.]

Let's consider that Fibonacci sequence much beloved by poseurs. It's beautiful, it is so simple, it appears over and over again in nature, surely it must reflect some intrinsic, fundamentally mathematical ideal inherent in the universe, some wonderful cosmic law — it appears in the spiral of a nautilus shell as well as the distribution of seeds in the head of a sunflower, so it must be magic. Nope. In biology, it’s all genes and cellular interactions, explained perfectly well by the reductionism [Mary] Midgley deplores [in her review of F&P-P].

The Fibonacci sequence (1, 1, 2, 3, 5, 8…each term generated by summing the previous two terms) has long had this kind of semi-mystical aura about it. It's related to the Golden Ratio, phi, of 1.6180339887… because, as you divide each term by the previous term, the ratio tends towards the Golden Ratio as you carry the sequence out farther and farther. It also provides a neat way to generate logarithmic spirals, as we see in sunflowers and nautiluses. And that's where the genes sneak in.

Start with a single square on a piece of graph paper. Working counterclockwise in this example, draw a second square with sides of the same length next to it. Then a third square with the same dimensions on one side as the previous two squares. Then a fourth next to the previous squares…you get the idea. You can do this until you fill up the whole sheet of paper. Now look at the lengths of each side of the squares in the series — it's the Fibonacci sequence, no surprise at all there.

You can also connect the corners with a smooth curve, and what emerges is a very pretty spiral — like a nautilus shell.

It's magic! Or, it's mathematics, which sometimes seems like magic! But it's also simple biology. I look at the whirling squares with the eyes of a developmental biologist, and what do I see? A simple sequential pattern of induction. A patch of cells uses molecules to signal an adjacent patch of cells to differentiate into a structure, and then together they induce a larger adjacent patch, and together they induce an even larger patch…the pattern is a consequence of a mathematical property of a series expressed on a 2-dimensional sheet, but the actual explanation for why it recurs in nature is because it's what happens when patches of cells recruit adjacent cells in a temporal sequence. Abstract math won't tell you the details of how it happens; for that, you need to ask what are the signaling molecules and what are the responding genes in the sunflower or the mollusc. That's where Thompson and these new wankers of the pluralist wedge fail — they stop at the cool pictures and the mathematical formulae and regard the mechanics of implementation as non-essential details, when it's precisely those molecular details that generate the emergent property that dazzles them…

There is nothing in this concept that vitiates our modern understanding of evolutionary theory, the whole program of studying changes in genes and their propagation through populations. That's the mechanism of evolutionary change. What evo-devo does is add another dimension to the issue: how does a mutation in one gene generate a ripple of alterations in the pattern of expression of other genes? How does a change in a sequence of DNA get translated into a change in form and physiology?

Those are interesting and important questions, and of course they have consequences on evolutionary outcomes…but they don't argue against genetics, population genetics, speciation theory, mutation, selection, drift, or the whole danged edifice of modern evolutionary biology. To argue otherwise is like claiming the prettiness of a flower is evidence against the existence of a root.

OK (hello, me again), I think I’d go along with just about all of this, apart from a suspicion that there is probably a better term for ‘wankers of the pluralist wedge’. Indeed, it is precisely how this self-organization is initiated at the biomolecular/cellular level that I have explored, both in phyllotaxis and in developmental biology generally, in my book Shapes (OUP, 2009) (alright, but I’m just saying). Yet there seems to be a big oversight here. Myers seems to be implying that, because genetic signals are involved, phyllotactic patterns are adaptive. I’m not aware that there is any evidence for that. In fact, quite the contrary: it seems that spiral Fibonacci phyllotaxis is the generic pattern for any meristem budding process that operates by some reaction-diffusion scheme, or indeed by any more general process in which the pattern elements experience an effective mutual repulsion in this cylindrical geometry (see here). So apparently, in phyllotaxis at least, the patterns and shapes are not a product of natural selection. Possessing leaves is surely adaptive, but there seems to be little choice in where they go if they are to be initiated by diffusing hormones. In his review of F&P-P, Michael Ruse puts it this way: ‘The order of a plant’s leaves may be fixed, but how those leaves stand up or lie down is selection-driven all of the way.’

So sure, there is absolutely nothing in this picture that challenges Neodarwinism. And sure, we should say so. But it does imply that, in the case of plants, an important aspect of shape determination may lie largely beyond the reach of natural selection. And this surely suggests that, since the same processes of morphogen diffusion operate in animal development, there might equally be aspects of that process too that have little to do with natural selection. Myers alludes to the case of spiralling mollusc shells: well yes, here too it appears that the basic logarithmic-spiral shape is going to be enforced by the simple maths of self-similar growth, and all evolution can do is fine-tune the contours of that spiral. That, indeed, is what Myers has said, though he appears to think he has not: the pattern is an inevitable consequence of the maths of the growth process. So no, it’s not magic. But it’s not in itself adaptive either. And correct me if I’m wrong, but I believe that was basically D’Arcy Thompson’s point (which is not to deny that he was unreasonably suspicious of adaptive explanations).

One of the points that F&P-P make is that insufficient effort has been devoted to asking how far these constraints operate. I agree with that much, and the blasé way in which Myers implies self-organization is just enslaved by natural selection perhaps explains why this is so. Let me say this clearly (because, my God, you have to do that with all these fellows): of course canny Neodarwinists accept that not every aspect of growth and form is adaptive (and by the way, I’m this kind of Neodarwinist too). But it seems quite possible that even rather significant morphological features such as phyllotactic patterns may be included in these non-adaptive traits – and that is less commonly recognized. Ian Stewart argued the same point in Life’s Other Secret. Anyone wishing to argue that such constraints undermine the case for natural selection happening at all is of course talking utter nonsense (and not even Fodor and Piattelli-Palmarini go that far). But it’s an interesting issue, and I’m made uncomfortable when people, through understandable fear of creationism’s malign distortions, want to insist that it’s not.

Thursday, July 22, 2010

The Disappearing Spoon


I have a review of Sam Kean’s book The Disappearing Spoon in the latest issue of Nature. I am posting the pre-edited version here mostly because a change made to the text after I’d seen the proofs has inverted my meaning in the published version in an important way, rendering it most confusing. Such things happen. But this is what it was meant to say.

I really didn’t want to be too hard on this book, and I hope I wasn’t – it does have genuine merits, and I feel sure Kean will write some more good stuff. But it did sometimes make me grind my teeth.

********************************************************************* 

The Disappearing Spoon

Sam Kean
Little, Brown & Co, New York.
400 pages
$24.99

Can there be a more pointless enterprise in scientific taxonomy than redesigning the Periodic Table? What is it that inspires these spirals, pretzels, pyramids and hyper-cubes? They hint at a suspicion that we have not yet fully cracked the geometry of the elements, that there is some hidden understanding to be teased out from these baroque juxtapositions of nature’s ‘building blocks’. It is probably the same impulse that motivated grand unified theories and supersymmetry – a determination to find cryptic order and simplicity, albeit here inappropriately directed towards contingency.

To call the Periodic Table contingent might elicit howls of protest, for the allowed configurations of electrons around nuclei are surely a deterministic consequence of quantum mechanics. But the logic of these arrangements is in the end tortuous, with the electron-shell occupancy (2, 8, 18…) subdivided and interleaved. The delicate balance of electron-electron interactions creates untidy anomalies such as non-sequential sub-shell filling and the postponed incursions of the d and f subshells, making the Periodic Table especially unwieldy in two dimensions. And relativistic effects – the distortion of electron energies by their tremendous speeds in heavy atoms – create oddities such as mercury’s low melting point and gold’s yellow lustre. All can be explained, but not elegantly.
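
(For the curious: the interleaving comes from the Madelung n + l rule of thumb, which gives the usual subshell filling order – 4s before 3d, 6s before 4f – and is itself only approximate. The snippet below is merely an illustration of that bookkeeping, not anything drawn from Kean’s book.)

```python
# Shell capacities are 2n^2, but subshells actually fill roughly in order of
# n + l (ties broken by smaller n) -- the Madelung rule -- which interleaves
# the shells and gives the Periodic Table its awkward shape.

SUBSHELL_LETTERS = "spdf"

subshells = [(n, l) for n in range(1, 8) for l in range(min(n, 4))]
subshells.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))   # Madelung ordering

for n, l in subshells[:12]:
    capacity = 2 * (2 * l + 1)
    print("%d%s holds %2d electrons" % (n, SUBSHELL_LETTERS[l], capacity))

print("shell capacities 2n^2:", [2 * n * n for n in range(1, 5)])
```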

There is thus little to venerate aesthetically in the Periodic Table, a messy family tree whose charm stems more from its quirks than its orderliness. No one doubts its mnemonic utility, but new-fangled configurations of the elements will not improve that function more than infinitesimally. It seems perverse that we continue to regard the Table as an object of beauty, rather than as just the piecemeal way things turned out at this level in the hierarchy of matter.

More pertinently, it seems odd still to regard it as the intellectual framework of chemistry. Sam Kean’s The Disappearing Spoon implicitly accepts that notion, although he is more interested in presenting it as a cast of characters, a way of telling stories about ‘all of the wonderful and artful and ugly aspects of human beings and how we interact with the physical world.’ Those stories are here unashamedly as much about physics as chemistry, for exploring the nether reaches of the Periodic Table has depended on nuclear physics and particle accelerators. With molecules featuring only occasionally as receptacles into which atoms of specific elements are fitted like stones in jewellery, The Disappearing Spoon is not the survey of chemistry it might at first seem.

So what, you might say – except that by making the Periodic Table the organizational emblem of his book, Kean ends up with a similarly piecemeal construction, an arrangement of facts about the behaviours and histories of the elements rather than a thesis about our conception of the material world. It is an attractive collection of tales, but lacks a moral: resolutely from the ‘there’s a thing’ school of science writing, it is better taken in small, energizing bites than digested in one sitting. This makes for enjoyable snacking, and I defy anyone not to learn something – in my case, for example, the story (treated with appropriate caution) of Scott of the Antarctic’s misadventure with tin solder, allegedly converted by the extreme cold into a brittle allotrope. The more familiar tale of the disintegrating buttons of Napoleon’s troops in the fateful Russian campaign, alluded to here, furnished the title of Penny Le Couteur and Jay Burreson’s portmanteau of ‘molecules that changed history’, Napoleon’s Buttons (Tarcher/Putnam, 2003), another example of this genre – and indeed most of Kean’s stories have been told before.

It should be said, moreover, that when the reader learns something, it is at what we might call a particular cognitive level – namely, one at which Kelvin considers Rutherford to be ‘full of crap’ and William Crookes’ dalliance with spiritualism enabled ‘135 years of New Age-y BS’. There’s a fine line between accessible informality and ahistorical sloppiness, between the wryness of hindsight and smirks at the conventions (and sartorial norms) of the past. And although Kean’s writing has the virtues of energy and pace, one hopes that his cultural horizons might come to extend beyond the United States: rarely have I felt so constantly reminded of an author’s nationality, whether by Cold War partisanship or references to Mentos and Life Savers.

More serious is the Whiggish strain that turns retrospective errors into irredeemable gaffes rather than the normal business of science. Emilio Segrè certainly slipped up when he failed to spot the first transuranic element, neptunium, and Linus Pauling’s inside-out model of DNA was worse than a poor guess, ignoring the implausibility of the closely packed anionic phosphate groups. But scientists routinely perpetrate such mistakes, and it is more illuminating to put them in context than to present them as pratfalls.

The Disappearing Spoon is a first book, and its flaws detract only slightly from the promise its author exhibits. His next will doubtless give a more telling indication of what he can do.

Wednesday, July 21, 2010

Why music is good for you


Here’s my latest Muse article for Nature News. I hope it does not sound in any way critical of the peg paper (reference 3), which is a very nice read.

*********************************************************************

A survey of the cognitive benefits of music makes a valid case for its educational importance. But that's not the best reason to teach all children music.

Remember the Mozart effect? Thanks to a suggestion in 1993 that listening to Mozart makes you cleverer, there has been a flood of compilation CDs filled with classical tunes that will allegedly boost your baby’s brain power.

Yet there’s no evidence for this claim, and indeed the original ‘Mozart effect’ paper [1] did not make it. It reported a slight, short-term performance enhancement in some spatial tasks when preceded by listening to Mozart as opposed to sitting in silence. Some follow-up studies replicated the effect, others did not. None found it specific to Mozart; one study showed that pop music could have the same effect on schoolchildren [2]. It seems this curious but marginal effect stems from the cognitive benefits of any enjoyable auditory stimulus, which need not even be musical.

The original claim doubtless had such inordinate impact because it plays to a long-standing suspicion that music makes you smarter. And as neuroscientists Nina Kraus and Bharath Chandrasekaran of Northwestern University in Illinois point out in a review in Nature Reviews Neuroscience [3], there is good evidence that music training reshapes the brain in ways that convey broader cognitive benefits. It can, they say, lead to ‘changes throughout the auditory system that prime musicians for listening challenges beyond music processing’ – such as interpreting language.

This is no surprise. Many sorts of mental training and learning alter the brain, just as physical training alters the body, and learning-related structural differences between the brains of musicians and non-musicians are well established [4]. Moreover, both neurological and psychological tests show that music processing draws on cognitive resources that are not music-specific, such as pitch processing, memory and pattern recognition [5] – so cultivating these mental functions through music would naturally be expected to have a wider payoff. The interactions are two-way: the pitch sensitivity imbued by tonal languages such as Mandarin Chinese, for example, enhances the ability to name a musical note just from hearing it (called absolute pitch) [6].

We can hardly be surprised, meanwhile, that music lessons improve children’s IQ [7], given that these will nourish general faculties such as memory, coordination and attentiveness. Kraus and Chandrasekaran now point out that, thanks to the brain’s plasticity (ability to ‘rewire’ itself), musical training sharpens our sensitivity to pitch, timing and timbre, and as a result our capacity to discern emotional intonation in speech, to learn our native and foreign languages, and to identify statistical regularities in abstract sound stimuli.

Yet all these benefits of music education have done rather little to alter a common perception that music is an optional extra to be offered (beyond tokenistic exposure) only if children have the time and inclination. Ethnomusicologist John Blacking put it more damningly: we insist that musicality is a rare gift, so that music is to be created by a tiny minority for the passive consumption of the majority [8]. Having spent years among African cultures that recognized no such distinctions, Blacking was appalled at the way this elitism labelled most people ‘unmusical’.

Kraus and Chandrasekaran rightly argue that the marginalization of music training in schools ‘should be reassessed’ in light of the benefits it may offer by ‘improving learning skills and listening ability’. But it will be a sad day when the only way to persuade educationalists to embrace music is via its side-effects on cognition and intelligence. We should be especially wary of that argument in this age of cost-benefit analyses, targets and utilitarian impact assessments. Music should indeed be celebrated (and studied) as a gymnasium for the mind; but ultimately its value lies with the way it enriches, socializes and humanizes us qua music.

And while in no way detracting from the validity of calling for music to be essential in education, it’s significant that musical training, like any other pleasure, has its hazards when taken to excess. I was recently privileged to discuss with the pianist Leon Fleisher his traumatic but fascinating struggle with focal dystonia, a condition that results in localized loss of muscle control. Fleisher’s dazzling career as a concert pianist was almost ended in the early 1960s when he found that two fingers of his right hand insisted on curling up. After several decades of teaching and one-handed playing, Fleisher regained the use of both hands through a regime of deep massage and injections of botox to relax the muscles. But he says his condition is still present, and he must constantly battle against it.

Focal dystonia is not a muscular problem (like cramp) but a neural one: over-training disrupts the feedback between muscles and brain, expanding the representation of the hand in the sensory cortex until the neural correlates of the fingers blur. It is the dark side of neural plasticity, and not so uncommon – an estimated one in a hundred professional musicians suffer from it, though some do so in secrecy, fearful of admitting to the debilitating problem.

We would be hugely impoverished without virtuosi such as Fleisher. But his plight serves as a reminder that hot-housing has its dangers, not only for the performers but (as Blacking suggests) for the rest of us. Give us fine music, but rough music too.

References

1. Rauscher, F. H., Shaw, G. L. & Ky, K. N. Nature 365, 611 (1993).
2. Schellenberg, E. G. & Hallam, S. Ann. N. Y. Acad. Sci. 1060, 202-209 (2005).
3. Kraus, N. & Chandrasekaran, B. Nat. Rev. Neurosci. 11, 599-605 (2010).
4. Gaser, C. & Schlaug, G. J. Neurosci. 23, 9240-9245 (2003).
5. Patel, A. D. Music, Language, and the Brain (Oxford University Press, New York, 2008).
6. Deutsch, D., Henthorn, T., Marvin, E. & Xu, H.-S. J. Acoust. Soc. Am. 119, 719-722 (2006).
7. Schellenberg, E. G. J. Educ. Psychol. 98, 457-468 (2006).
8. Blacking, J. How Musical Is Man? (Faber & Faber, London, 1976).

Monday, July 19, 2010

Organic nightmares


How do you make and use a Grignard reagent? This isn’t a question that has generally kept me awake at night. But last night it gave me nightmares. As a rule my ‘exam anxiety’ dreams, three decades after the event, feature maths: I find, days before the exam, that I have done none of the coursework or required reading, and am clueless about all of the mathematical methods on which I’m about to be grilled. Now it seems I may be about to transfer my disturbance to organic synthesis. Last night I was even at the stage of sitting in the exam hall waiting to be told to open the test paper, when I realised that I could recall not one of the countless details of reagents, methods and strategies involved in the Grignard reaction in particular (yes, alkylmagnesium, I know that much) or the aldol reaction, Claisen rearrangement, Friedel-Crafts acylation and all the rest. Now, I know this is nothing shameful even for someone who writes about chemistry for a living – as I say, it was three decades ago that I learnt this stuff, and if I want to know the details now then I can look them up, right? And my memory of the Diels-Alder reaction, about which I’ve written relatively recently, remains sufficiently fleshed-out to reassure me that the decay constant of my mind is at least still measured in years rather than days. All the same, it is sobering, bordering on scary, to realise that (i) I did once have to memorize all this stuff, and (ii) I did so. Organic synthesis can achieve tremendous elegance, and is not devoid of general principles; but this dream reminds me that it is nonetheless perhaps the closest chemistry comes to becoming a list of bald, unforgiving facts.

Oh God, and the truly scary thing is that I just looked up Grignard reagents on Wikipedia, and… and there was a carbonyl group lurking in my dream too. I think I’d have been more reassured to know that I was just improvising than that a fragment of that grim scheme has stayed lodged in my cortex.

Wednesday, July 14, 2010

Who should pay for the police?


I have a Muse piece on Nature News about a forthcoming paper in Nature on cooperation and punishment in game theory, by Karl Sigmund and colleagues. It’s quite closely related to recent work by Dirk Helbing, also discussed briefly below. There are many interesting aspects to Dirk’s papers, which I can’t touch on here – not least, the fact that the outcomes of these games can be dependent on the spatial configuration of the players. Here is the pre-edited article.

***********************************************************************

The punishment of anti-social behaviour seems necessary for a stable society. But how should it be policed, and how severe should it be? Game theory offers some answers.

The fundamental axis of political thought in democratic nations could be said to refer to the ‘size’ of government. How much or how little should the state interfere in our lives? At one end of the axis sits political philosopher Thomas Hobbes, whose state is so authoritarian – an absolute monarchy – that it barely qualifies as a democracy at all once the ruler is elected. At the other extreme we have Peter Kropotkin, the Russian revolutionary anarchist who argued in Mutual Aid (1902) that people can organize themselves harmoniously without any government at all.

At least, that’s one view. What’s curious is that both extremes of this spectrum can be viewed as either politically right- or left-wing. Hobbes’ domineering state could equally be Stalin’s, while the armed, vigilante world of extreme US libertarianism (and Glenn Beck) looks more like the brutal ‘State of Nature’ that Hobbes feared – everyone for themselves – than Kropotkin’s cosy commune.

But which works best? I’m prepared to guess that most Nature readers, being benign moderates, will cluster around the middle ground defined by John Stuart Mill, who argued that government is needed to maintain social stability, but should intrude only to the extent of preventing individuals from harming others. Laws and police forces, in this view, exist to ensure that you don’t pillage and murder, not to ensure that you have moral thoughts.

If only it were that simple. The trouble is that ‘harming others’ is a slippery concept, illustrated most profoundly by the problem of the ‘commons’. If you drop litter, if you don’t pay your taxes, if you tip your sewage into the river, it’s hard to pinpoint how or whom your actions ‘harm’, if anyone – but if we all do it, society suffers. So laws and penal codes must not only prevent or punish obvious crimes like murder, but also discourage free-riders who cheat on the mechanisms that promote social order.

How much to punish, though, and how to implement it? If you steal, should you temporarily lose your liberty, or permanently lose your hand? And what works best in promoting cooperative behaviour: the peer pressure of social ostracism, or the state pressure of police arrest?

Experiments in behavioural economics, in particular ‘public goods games’ where participants seek to maximize their rewards through competition or cooperation, have shown that people care about punishment to an ‘irrational’ degree [1]. Say, for example, players are asked to put some of their money into a collective pot, which will then be multiplied and divided among the players. The more you all put in, the better the payoff. But if one person doesn’t contribute, they still get the reward – so there’s a temptation to free-ride.

If players are allowed to fine free-riders, but at a cost to themselves, they will generally do it even if they make a loss: they care more about fairness than profit. Now, however, the problem is that there’s a second-order temptation to free-ride: you contribute to the pot but leave others to shoulder the cost of sanctioning the cheaters who don’t. There’s an infinite regress of opportunities to free-ride, which can eventually undermine cooperation.

But what if the players can share the cost of punishment by contributing to a pool in advance – equivalent, say, to paying for a police force and penal service? This decreases the overall profits – it costs society – because the ‘punishment pool’ is wasted if no one actually cheats. Yet in a new paper in Nature [2], game theorist Karl Sigmund of the University of Vienna and his colleagues show in a computer model that pool-punishment can nevertheless evolve as the preferred option over peer-punishment as a way of policing the game and promoting cooperation: a preference, you might say, for a state police force as opposed to vigilante justice. This arrangement is, however, self-organized à la Kropotkin, not imposed from the top down à la Hobbes: pool-punishment simply emerges as the most successful (that is, the most stable) strategy.
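
To make the payoff structure concrete, here is a toy single-round calculation in the spirit of these public-goods games. The parameter values are invented and the rules are heavily simplified; this is not the actual model of Sigmund and colleagues, nor of Helbing’s group.

```python
# A toy single round of a public-goods game with punishment. All parameter
# values are invented for illustration; this is not the model of Sigmund
# et al. or Helbing et al.

CONTRIBUTION = 1.0      # what each cooperator puts into the pot
MULTIPLIER = 3.0        # the pot is multiplied before being shared out
FINE = 1.0              # fine imposed on each defector
PEER_COST = 0.3         # cost to a peer-punisher per defector fined
POOL_FEE = 0.2          # up-front fee paid by each pool-punisher

def payoffs(n_cooperators, n_defectors, n_punishers, pool=False):
    """Return (cooperator, defector, punisher) payoffs for one round.

    Punishers also contribute; with pool=True they pay a fixed fee up front,
    otherwise they pay per defector they fine (peer punishment).
    """
    n = n_cooperators + n_defectors + n_punishers
    pot = (n_cooperators + n_punishers) * CONTRIBUTION
    share = pot * MULTIPLIER / n                    # everyone gets a share

    cooperator = share - CONTRIBUTION               # second-order free-rider
    defector = share - (FINE if n_punishers > 0 else 0.0)
    if pool:
        punisher = share - CONTRIBUTION - POOL_FEE
    else:
        punisher = share - CONTRIBUTION - PEER_COST * n_defectors
    return cooperator, defector, punisher

# With defectors around, punishers pay more under peer punishment...
print("peer, some defectors:", payoffs(5, 3, 2, pool=False))
# ...but the pool fee is 'wasted' expenditure when nobody cheats.
print("pool, no defectors:  ", payoffs(8, 0, 2, pool=True))
print("peer, no defectors:  ", payoffs(8, 0, 2, pool=False))
```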

Of course, we know that what often distinguishes these things in real life is that state-sponsored policing is more moderate and less arbitrary or emotion-led than vigilante retribution. That highlights another axis of political opinion: are extreme punishments more effective at suppressing defection than less severe ones? A related modelling study of public-goods games by Dirk Helbing of ETH in Zürich and his coworkers, soon to be published in the New Journal of Physics [3] and elaborated in another recent paper [4], suggests that the level of cooperation may depend on the strength of punishment in subtle, non-intuitive ways. For example, above a critical punishment (fine) threshold, cooperators who punish can gain strength by sticking together, eventually crowding out both defectors and non-punishing cooperators (second-order free riders). But if punishment is carried out not by cooperators but by other defectors, too high a fine is counterproductive and reduces cooperation. Cooperation can also be created by an ‘unholy alliance’ of cooperators and defectors who both punish.

Why would defectors punish other defectors? This behaviour sounds bizarre, but is well documented experimentally [5], and familiar in real life: there are both hypocritical ‘punishing defectors’ (think of TV evangelists whose condemnation of sexual misdemeanours ignores their own) and ‘sincere’ ones, who deplore certain types of cheating while practising others.

One of the most important lessons of these game-theory models in recent years is that the outcomes are not necessarily permanent or absolute. What most people (perhaps even Glenn Beck) want is a society in which people cooperate. But different strategies for promoting this have different vulnerabilities to an invasion of defectors. And strategies evolve: prolonged cooperation might erode a belief in the need for (costly) policing, opening the way for a defector take-over. Which is perhaps to say that public policy should be informed but not determined by computer models. As Stephen Jay Gould has said, ‘There are no shortcuts to moral insight’ [6].

References 
[1] Fehr, E. & Gächter, S. Am. Econ. Rev. 90, 980-994 (2000).
[2] Sigmund, K., De Silva, H., Traulsen, A. & Hauert, C. Nature doi:10.1038/nature09203.
[3] Helbing, D., Szolnoki, A., Perc, M. & Szabó, G. New J. Phys. (in press); see http://arxiv.org/abs/1007.0431 (2010).
[4] Helbing, D., Szolnoki, A., Perc, M. & Szabó, G. PLoS Comput. Biol. 6(4), e1000758 (2010).
[5] Shinada, M., Yamagishi, T. & Omura, Y. Evol. Hum. Behav. 25, 379-393 (2004).
[6] Gould, S. J. Natural History 106 (6), 12-21 (1997).

The music of chemistry


My latest Crucible column for Chemistry World  (July) is mostly non-technical enough to put up here. This is the pre-edited version.

************************************************************************

The English composer Edward Elgar is said to have boasted that the first of his Pomp and Circumstance Marches had a tune that would ‘knock ‘em flat’. But on another occasion he inadvertently found a way to achieve that effect rather too literally. Elgar was an enthusiastic amateur chemist, and fitted up his home in Hereford with a laboratory which he called The Ark. His friend, the conductor and composer William Henry Reed, tells how Elgar delighted in making a ‘phosphoric concoction’ which would explode spontaneously when dry – possibly Armstrong’s mixture, red phosphorus and potassium chlorate, used in toy cap guns. One day, Reed says, Elgar made a batch of the stuff but then musical inspiration struck. He put the mixture into a metal basin and dumped it in the water butt before returning to the house.

‘Just as he was getting on famously,’ wrote Reed, ‘writing in horn and trumpet parts, and mapping out wood-wind, a sudden and unexpected crash, as of all the percussion in all the orchestras on earth, shook the room… The water-butt had blown up: the hoops were rent: the staves flew in all directions; and the liberated water went down the drive in a solid wall. Silence reigned for a few seconds. Then all the dogs in Herefordshire gave tongue.’

Schoolboy pranks were not, however, the limit of Elgar’s contribution to chemistry. He took his hobby seriously enough to invent a device for synthesizing hydrogen sulphide, which was patented and briefly manufactured as the Elgar Sulphuretted Hydrogen Apparatus. Elgar’s godson claimed that the device was ‘in regular use in Herefordshire, Worcestershire and elsewhere for many years’.

Elgar is one of a small, select band of individuals who made recognized contributions to both chemistry and music [1,2] (although chemists who are also musicians are legion). Georges Urbain, best known as the discoverer of the element lutetium, was also a noted pianist and composer. Eighteenth-century musician and composer George Berg conducted extensive experiments in the chemistry of glass-making. But the most famous representative of the genre is Aleksandr Borodin, whose name is still familiar to chemists and musicians alike. As one of the Five, the group of Russian composers that included Mussorgsky and Rimsky-Korsakov, Borodin created a musical idiom every bit as characteristically Russian as Elgar’s was English.

As historian of chemistry Michael Gordin says of Borodin, ‘it is the fascination of this hybrid figure that has drawn a great deal of attention to the man, mostly focusing on whether there was some sort of “conflict” between his music and his science’ [3]. There is good reason to suspect that the conflict was felt by Borodin himself, who seems to have stood accused by both chemists and musicians of spending too long on ‘the other side’. ‘You waste too much time thinking about music’, his professor Nikolai Zinin told him. ‘A man cannot serve two masters.’ Meanwhile, Borodin complained in a letter that ‘Our musicians never stop abusing me. They say I never do anything, and won’t drop my idiotic activities, that is to say, my work at the laboratory.’

Rimsky-Korsakov portrayed his friend as literally rushing between his two passions, trying to keep both balls in the air. ‘When I went to see him’, he wrote, ‘I would often find him at work in the laboratory next door to his flat. When he had finished what he was doing, he would come back with me to his flat, and we would play together or talk. Right in the middle he would jump up and rush back into the laboratory to make sure nothing had burnt or boiled over, all the while making the corridor echo with incredible sequences of successive ninths or sevenths’ [4].

Such anecdotes titillate our curiosity not just about whether a person can ‘serve two [intellectual] masters’ but whether each might fertilize or inhibit the other. Yet there is little evidence that scientific knowledge does much more for artists (or vice versa) than supply novel sources of metaphor and plot: the science literacy evident in, say, the novels of Vladimir Nabokov (a lepidopterist) or Thomas Pynchon (an engineer) is a joy to the scientist, but one imagines they would have been great writers in any event. And the ‘science’ in Goethe’s works might have been better omitted.

The enduring appeal of these questions has in Gordin’s view unduly elevated Borodin’s chemical reputation. He has been credited with discovering the so-called Hunsdiecker reaction, the decarboxylation of silver salts of carboxylic acids with bromine (sometimes called the Borodin reaction) [5], and most importantly with the aldol reaction, the conversion of an aldehyde into a β-hydroxy aldehyde, which forms a new carbon-carbon bond. The latter is often presented as Borodin’s discovery which was ‘stolen’ by the French chemist Charles-Adolphe Wurtz, whereas Gordin shows that in fact Wurtz got there first and that Borodin conceded as much.

Borodin’s priority claim was inflated, Gordin says, because of the desire to cast him as polymathic musician-chemist. ‘His chemistry is at best historically interesting’, says Gordin, ‘but not outstandingly so.’ Perhaps this determination to make Borodin ‘special’ in the end does more harm than good to the notion that ordinary scientists can be interested in more than just science.

1. L. May, Bull. Hist. Chem. 33, 35-43 (2008).
2. S. Alvarez, New J. Chem. 32, 571-580 (2008).
3. M. D. Gordin, J. Chem. Educ. 83, 561-565 (2006).
4. Quoted in A. Bradbury, Aldeburgh Festival programme booklet 2010, p.30-33.
5. See E. J. Behrman, J. Chem. Educ. 83, 1138 (2006).

Monday, June 21, 2010

The hand of a master


Last week I had the immense pleasure of going to the Aldeburgh Festival in Suffolk to interview the pianist Leon Fleisher in front of an audience. I was standing in for Antonio Damasio, who had been unable to fly trans-Atlantic because of a recent accident. Leon was already deemed one of the most significant pianists of his time as a young man in the 1960s, when he found himself afflicted by ‘musician’s cramp’, also known as focal dystonia, which made two fingers of his right hand curl up and refuse to accede to his demands. This condition left him unable to play two-handed for the best part of three decades, during which he taught, conducted, and performed the left-handed repertoire (mostly written for Paul Wittgenstein, who lost his right arm in the First World War). Leon finally regained use of his right hand, and now performs two-handed – he had already played at the festival with his wife Katherine Jacobson Fleisher, and will shortly perform Bach and Brahms with the Signum Quartet. I said a little about Leon’s condition in my piece for the FT; Oliver Sacks says more in his book Musicophilia. It was a tremendous privilege to be able to talk with Leon before and during the event; I felt myself to be in the presence of someone who genuinely lived inside the music. Radio 3 are broadcasting several of the Aldeburgh events, including Leon’s performance with Signum. They also recorded an interview with Leon and me before the event, but I don’t know if it will ever see the light of day.

Here, however, is a piece about one of the other events, which I did not have a chance to mention in the FT. It also involves Signum, and appears in the July issue of Prospect.

*****************************************************************

‘There are two golden rules for an orchestra’, the conductor Sir Thomas Beecham is alleged to have said. ‘Start together and finish together. The public doesn’t give a damn what goes on in between.’ Beecham was prone to witty overstatement, but his remark fits an intuition that we are acutely sensitive to failures of synchronization in musical performance. ‘They were all over the place’ is the typical put-down of ensembles with sloppy timing.

But playing together in time is far from trivial. Even orchestral musicians watching a conductor have to be anticipating the beat if they’re not going to miss it, and the smaller ensembles for chamber music have no human metronome to follow. Besides, most music requires a variable metronome: a string quartet, just like a soloist, will slow down and speed up for expressive purposes. Who decides the rhythm and how to vary it, when there is no one obviously leading?

That’s a question being studied by psychologist Alan Wing and Satoshi Endo at Birmingham University, together with cellist Adrian Bradbury. At the Aldeburgh Festival in June, Wing will describe his experiments with the Signum Quartet, a German ensemble who are also performing at the festival. Signum have gamely agreed to be the guinea pigs for Wing’s studies of musical synchronization, the results of which he was still analysing as the festival’s opening loomed.

Wing’s interest in human timing and synchronization led only by degrees to music. Some years ago he investigated how rowers in the Cambridge Blues all pulled together, and he hints with tongue in cheek that those studies might have helped stem the long run of Oxford victories.

But rowing, with everyone striving to synchronize an identical and highly regular action, is easy compared with music. In a string quartet, each musician plays a different part, and yet they must all intermesh to create a single rhythmic pulse, albeit one that satisfies the elastic demands of musical expression. Who is following whom?

To find out, Wing tracked the movements of the Signum musicians using motion-capture video, which bounces infrared light off reflectors attached to their bows. Electronic pick-ups on the instruments then allowed him to follow the relationships between movement and sound for each player in turn, and to look for correlations between the players. Previous work, particularly on keyboardists (whose finger movements are easy to detect electronically), has shown that performers don’t keep to a strict tempo but vary the gaps between beats by perhaps a few milliseconds. Some of these variations are random, but some are intentional and repeatable from one performance to another. Such variations not only convey emotion but also, paradoxically, help the listener to discern the music’s pulse in the first place by slightly exaggerating its rhythmic patterns.

In string quartets, the performer who carries the melody – often the first violin – is usually deemed to be the leader. That, at least, is what the musicians will profess. But do their microscopic variations in timing bear that out – do the other players, for example, fall in step slightly behind the first violin? That’s what Wing’s results suggest. By using mathematical techniques analogous to those used to study periodic change in climate and animal populations, he analysed the timings of each player during a test passage from Haydn to figure out whether any one player’s rhythm depended on that of any other. It seems that the cello and viola form a tight-knit ‘rhythm section’, like the bass and drums of a rock band, which responds to but does not in turn affect what the first violin is doing.
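For a flavour of the sort of analysis this involves, here is my own toy sketch (simulated data, not Wing’s actual method or measurements): if one player’s beat-to-beat timing fluctuations show up, a beat later, in another player’s, then the lagged correlation is strong in one direction and negligible in the other – one crude way of asking who is following whom.

import numpy as np

def lagged_dependence(leader_ioi, follower_ioi, lag=1):
    """Correlate one player's inter-onset intervals (IOIs) at beat n
    with another's at beat n+lag. A clearly positive value is
    consistent with the second player adjusting to the first."""
    x = np.asarray(leader_ioi[:-lag], dtype=float)
    y = np.asarray(follower_ioi[lag:], dtype=float)
    x, y = x - x.mean(), y - y.mean()
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

# Simulated data: the 'violin' wanders expressively around a 0.5-second
# beat; the 'cello' copies part of each deviation one beat later.
rng = np.random.default_rng(0)
violin = 0.5 + 0.02 * rng.standard_normal(200)   # inter-beat gaps, seconds
cello = 0.5 + 0.01 * rng.standard_normal(200)
cello[1:] += 0.6 * (violin[:-1] - 0.5)           # follows one beat behind

print(lagged_dependence(violin, cello))          # clearly positive
print(lagged_dependence(cello, violin))          # close to zero

The real analysis is of course far subtler – shared rubato, two-way influences and missing notes all muddy the picture – but the underlying question is the same.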

Mindful of Beecham’s dictum, Wing also wondered just how ensembles begin a piece. How, and how well, do they all come in together? The lead player in a quartet will usually make an exaggerated movement of the bow or head to signal the first beat, but what exactly is it that the other players respond to? From video recordings, Wing created a virtual avatar of the first violinist, from which he could lop off the head or an arm to see how much it influenced the other players when they were guided by the on-screen image. As intuition suggests, the head and the right (bowing) arm were crucial, while the left (fingering) arm didn’t really matter. And the tempo adopted by the ensemble as the music proceeds from its outset seemed to be set by the energy of these initial gestures – how much the bow arm accelerates in preparing for the first stroke, say.

In a sense, this is an extreme example of the kind of ‘unspoken leadership’ that has been studied in animal communities, for example in the question of how just a few honeybees with ‘privileged information’ about the location of a good nest site can induce the rest of the swarm to follow them. It’s possible, then, that the ramifications extend beyond the togetherness of musicians: to that of dancers and acrobats, even to the socially cohesive group activities involved in agriculture and industry – in which some think music has its origins.

Wet dreams


This morning I found myself sitting outside a café in upper Regent Street watching passers-by sample three types of water and offer their opinions on them. ‘Three types of water’ of course begs the question, and I suspect there was nothing but one type of water involved, with trivial variations in the usual trace solutes. This was a vox-pop test for the Radio 4 consumer and lifestyle programme ‘You And Yours’, which in this item was investigating the claims being made for so-called ‘ionized water’, equipment for the production of which is being installed in health-food cafés at vast expense. When the BBC folks contacted me last Friday to ask my opinion on ionized water, I think they were a little surprised when I responded ‘what’s that?’ They’d got it confused with the deionized water available in all good labs, not to mention garages that sell it for your car battery. But as I said to them, ‘ionized’ water made no sense to me. I’m pleased to say that it was quite proper that it did not. A quick search reveals that ionized water is just the latest of the ‘altered water treatments’ being advocated for turning ordinary water into a wondrous health-giving reagent. Like all the others, it is a sham. Basically it seems to involve an electrolytic process that allegedly produces alkaline water at one electrode – not entirely implausible in itself, if there is an electrolyte present, but the claims made for the health benefits of drinking ‘alkaline water’ are nonsense, and the waffle about reactive oxygen species and cancer just the usual junk. Fortunately, Stephen Lower of Simon Fraser University has prepared an excellent web site debunking this stuff, which saves me the effort.

Those who want the full nonsense can get it here. Yes, complete with special ‘water clusters’. If you want to buy a water ionizer, feel free to do so here. And I’m amused to see that Ray Kurzweil, who wants to live long enough to reach the age of immortality that is just around the corner, has bought into this stuff. Ray swears that ionized water is alkaline, because like a ‘responsible scientist’ he measured the pH. His scientific curiosity did not, however, extend to investigating what, in that case, the counterions to the hydroxide ions are – in other words, which salts had been added to the water to make alkalinity possible. We are apparently supposed to believe that it is the water itself that is alkaline, which of course is chemically impossible. Keep drinking, Ray.
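To put a rough number on the point (my own back-of-envelope calculation, with an illustrative pH rather than anything measured on the programme): charge neutrality means that any excess hydroxide in the glass has to be balanced by dissolved positive ions, and it takes only a line or two of arithmetic to see how much dissolved salt that implies.

# Back-of-envelope: how much dissolved salt would 'alkaline water' need?
# The pH of 9.5 is an illustrative figure, not a measurement.
Kw = 1.0e-14                # ion product of water at 25 C
pH = 9.5
H = 10 ** (-pH)             # hydronium concentration, mol/L
OH = Kw / H                 # hydroxide concentration, mol/L
excess_cations = OH - H     # cations needed for charge neutrality
print(f"[OH-] = {OH:.1e} M, balanced by ~{excess_cations:.1e} M of cations")
# Pure water cannot supply those cations: the alkalinity belongs to the
# dissolved salts, not to the water itself.

That is a tiny amount of solute, which is rather the point: whatever mild alkalinity such machines produce comes from whatever happens to be dissolved in the tap water, not from any transformation of the water.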

In any case, I was required on You And Yours to offer scientific comment on this affair. You can judge the results for yourselves here.

Thursday, June 10, 2010

Bursting out


I have a review in Nature of Albert-László Barabási’s new book Bursts. The book is nice, but the review was necessarily truncated, and here is how I really wanted to put it.

Bursts: The Hidden Pattern Behind Everything We Do

Albert-László Barabási

Dutton, New York, 2010
310 pages
$26.95

Is human behaviour deterministic or random? Psychoanalysts, economists and behavioural geneticists, however unlikely as bedfellows, all tend to assume cause and effect: we do what we do for a discernible reason, whether obeying the dictates of the unconscious, rational self-interest or our genetic predisposition. But those assumptions have not produced anything like a predictive model of human actions, and we are daily presented with reason to suspect that our actions owe more to sheer caprice than to any formula. Given the disparity of individual decisions, perhaps our behaviour shows no more pattern than coin-tossing: maybe collectively it is dominated by the randomness encoded by the gaussian distribution, the familiar bell-curve statistics of a series of independent events whose outcomes are a matter of chance.

Albert-László Barabási’s Bursts explains how this notion of randomness has been undermined by recent research, much of it conducted by him and his collaborators, that has revealed a hitherto unexpected pattern in human activities ranging from the sending of emails (and before that, postal letters) to our movements through the world. We conduct our affairs in bursts, for example sending out several emails in a short space of time and then none for hours. Even our everyday wrist movements, when monitored with accelerometers, show this bursty behaviour, with spells of motion interspersed with periods of repose. Because the distribution of bursts differs for people who are clinically depressed, these seemingly irrelevant statistics might offer a simple diagnostic tool.
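For anyone who wants a more concrete handle on ‘bursty’, here is a quick illustration of my own (simulated data, not anything from the book): inter-event gaps from a memoryless random process have a spread about equal to their mean, while heavy-tailed gaps have a much larger spread, and a simple coefficient of the form (sigma - mu)/(sigma + mu) – one measure used in this literature – picks up the difference.

import math
import random

def burstiness(gaps):
    """(sigma - mu) / (sigma + mu) for a list of inter-event gaps:
    roughly 0 for Poisson-like (memoryless) timing, approaching 1 for
    strongly bursty timing, negative for metronomic regularity."""
    mu = sum(gaps) / len(gaps)
    sigma = math.sqrt(sum((g - mu) ** 2 for g in gaps) / len(gaps))
    return (sigma - mu) / (sigma + mu)

rng = random.Random(42)
poisson_like = [rng.expovariate(1.0) for _ in range(10_000)]    # random gaps
heavy_tailed = [rng.paretovariate(1.5) for _ in range(10_000)]  # bursty gaps

print(round(burstiness(poisson_like), 2))   # close to 0
print(round(burstiness(heavy_tailed), 2))   # well above 0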

Burstiness could seem so intuitively obvious as to be trivial. That we find a moment to catch up with email responses, rather than attending to them one by one at random intervals, is scarcely puzzling or surprising. But such rationalizing narratives don’t account for everything: why, then, does it take us a few minutes to respond to some messages but weeks to get to others? Barabási and his coworkers explained this on the assumption that we prioritize: new items join our ‘must-do’ lists as others are cleared, and while the most urgent tasks are dispatched almost at once, low-priority ones can languish indefinitely.
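The mechanism is easy to caricature in a few lines of code – this is my own toy sketch of the general priority-queue idea, with made-up parameters, not the authors’ published model: keep a to-do list of fixed length, usually deal with the highest-priority item (occasionally a random one), replace whatever you finish with a fresh task, and record how long each task sat on the list. Most tasks are dispatched almost immediately; a few wait for an extraordinarily long time, which is the heavy-tailed pattern seen in real correspondence.

import random
import collections

def priority_queue_model(n_steps=100_000, list_len=100, p_highest=0.9, seed=1):
    """Toy priority-list model of task execution. Each task is a
    (priority, arrival_time) pair; at every step we execute the
    highest-priority task with probability p_highest (otherwise a
    random one) and replace it with a fresh task. Returns the waiting
    times, in steps, of all executed tasks."""
    rng = random.Random(seed)
    tasks = [(rng.random(), 0) for _ in range(list_len)]
    waits = []
    for t in range(1, n_steps + 1):
        if rng.random() < p_highest:
            idx = max(range(list_len), key=lambda i: tasks[i][0])
        else:
            idx = rng.randrange(list_len)
        waits.append(t - tasks[idx][1])
        tasks[idx] = (rng.random(), t)    # a new task takes its place
    return waits

waits = priority_queue_model()
counts = collections.Counter(waits)
print("fraction dealt with at the very next step:", counts[1] / len(waits))
print("longest wait (steps):", max(waits))

In this toy version, pushing p_highest towards zero recovers roughly memoryless, exponential-looking waits, while values near one give the fat tail.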

Barabási renders observations like this, which could seem dry or frivolous, both engaging and illuminating through human stories: Einstein unwittingly stalling the career of Theodor Kaluza by taking two years to reply to a letter, or the hapless artist Hasan Elahi being taken for questioning by US Homeland Security because of his ‘suspicious movement’. Barabási shows that Elahi’s globetrotting really was anomalous – whereas the algorithm he developed in his lab to predict people’s whereabouts based on their personal bursty signature forecast everyone else’s movements with more than 80 percent accuracy, Elahi foiled the program with his genuine randomness.

Burstiness is not confined to human activity, and so is not somehow a by-product of cognition. It is seen in the foraging patterns of several animals (though not, as once claimed in this journal, albatrosses). It even fits the transcriptional activity of genes and evolutionary speciation. But Barabási cannot yet say whether this ubiquity stems from the same basic cause, or whether burstiness happens to be a statistical signature that many mechanisms can generate. The same question has been raised of the power-law statistics found for many natural and social phenomena (in fact bursts also produce power laws), and of fractal structures. To put it another way, is burstiness a discriminating and informative character, or just a common epiphenomenon of several distinct processes? We don’t yet know.

Moreover, the burstiness of human behaviour doesn’t obviously warrant the air of determinism that hangs over the book. ‘Prediction at the individual level is growing increasingly feasible’, Barabási asserts. But bursts per se don’t obviously help with the sort of detailed, moment-by-moment prediction he is discussing here – like the avalanches of self-organized criticality, they remain unpredictable as individual events, differing from gaussian randomness only because they are correlated. They simply help us get the overall statistics right.

While popular science books written by researchers presenting new ideas typically have an ex cathedra quality, Bursts shows the influence of the journalistic approach of professional writers, exemplified by James Gleick and Malcolm Gladwell, narrative-driven and replete with personality sketches. Barabási is rather good at these story-telling tricks, and his opening paragraph is a masterful example of the genre, drawing us in with a puzzle we know will be resolved only much later.

It is less clear whether his daring device – punctuating the exposition with the tale of how his Transylvanian compatriot György Székely led a peasant revolt in Hungary in 1514 – really works. Barabási implies that this tale illustrates some of the conclusions about burstiness and unpredictability, but that’s far from obvious. Because I am apparently Barabási’s personal Hasan Elahi, a vanishingly rare outlier who happens to have an interest both in Székely Transylvania and the peasant uprisings of the early sixteenth century, I was happy to indulge him. I suspect not everyone will do so. But they should try, because Bursts reveals Barabási to be not just an inventive and profoundly interdisciplinary scientist but an unusually talented communicator.


Tuesday, June 08, 2010

Still got music on the brain


I have a piece in the FT about the forthcoming events on ‘music and the brain’ at the Aldeburgh Festival. The piece is so unadulterated that I won’t even bother pasting the ‘pre-edited’ version here (apart, that is, from the conversion of Eckart Altenmüller from a neuroscientist to a ‘euro-scientist’, a typo that has the distinction of both being mildly amusing and remaining true). More on this to follow.

Saturday, June 05, 2010

Mind over matter?


There’s a piece in today’s Guardian Review by the American novelist Marilynne Robinson, who bravely challenges the materialistic interpretations of the brain offered by the likes of Steven Pinker and E. O. Wilson. It is an extract from her book Absence of Mind. I say ‘brave’ rather than ‘persuasive’. I’ve got some sympathy for her criticisms of the way the pop neuro- and cognitive scientists try to explain the brain by ruling out of bounds those things that seem too intangible or difficult. And although Pinker makes a valid point by confessing that we have no reason to suppose the human brain is capable of understanding the resolution to some of the hard philosophical questions, Robinson is right to suggest that this, even if it is true, is no reason to stop asking them. (The likes of Pinker will probably be pulling their elegantly coiffeured hair out at the way Robinson casually makes Freud a part of mainstream science, but let’s put that aside.)

My main complaint is that the article is encrusted with what seems to be the characteristically clotted style of American academics of letters, which strives always to be artful at the expense of plain speaking. For example, in response to E. O. Wilson’s comment that ‘The brain and its satellite glands have now been probed to the point where no particular site remains that can reasonably be supposed to harbour a nonphysical mind’, Robinson replies: ‘To prove a negative, or to treat it as having been proved, is, oddly enough, an old and essential strategy of positivism. So I do feel obliged to point out that if such a site could be found in the brain, then the mind would be physical in the same sense that anything else with a locus in the brain is physical. To define the mind as nonphysical in the first place clearly prejudices his conclusion.’ The same point might have been made with less fuss had she simply said ‘But how can a nonphysical mind have a physical location?’

Here at least, however, her meaning is clear. But how about this: ‘What grounds can there be for doubting that a sufficient biological account of the brain would yield the complex phenomenon we know and experience as the mind? It is only the pertinacity of the mind/body dichotomy that sustains the notion that a sufficient biological account of the brain would be reductionist in the negative sense. Such thinking is starkly at odds with our awareness of the utter brilliance of the physical body.’ I have read this several times, and still doubt that I really understand any of it. Would a statement like this be permitted by an editor in a commissioned piece? I’d like to think not.

And isn’t it odd, after stating ‘What Descartes actually intended by the words "soul" and "mind" seems to me an open question for Descartes himself’, to simply sign the question off with ‘No doubt there are volumes to be consulted on this subject.’ Indeed there are – why not consult them? Better still, why not tell us what Descartes actually said? (For what it is worth, I think she is trying to complicate the matter too much. The soul, for Descartes, seems to me to be simply what motivates the body-machine: what puts its hydraulics and cogs and levers into particular motions. No big deal; except that it enabled Descartes to defend himself against charges of atheism.)

The standfirst of the piece (obviously not by the author) asks ‘What is meant by the idea of a soul?’ Robinson suggests that Pinker identifies the soul with the mind, which seems fair enough on the strength of the passage she quotes. Aristotle did likewise, at least as far as humans are concerned, for he said we are distinguished from other beings by possessing a rational soul. But then, Aristotle’s soul was always a thoroughly secular, quasi-scientific notion. All I can find as Robinson’s alternative is that the soul is ‘an aspect of deep experience’. I can see that this may be developed into some kind of meaning. She might also have usefully pointed out that this apparently deviates from the traditional Catholic notion of a soul as a non-physical badge of humanness that is slotted into the organism at conception.

But the least convincing aspect of the piece is the classic ‘just-as’ reasoning of the scientific dilettante. Robinson knows about quantum entanglement (sort of). And her point there seems to be ‘if we don’t really understand that, how can we think we can understand the brain/mind?’ But the hard thing about entanglement is not ‘understanding’ it (though we can’t claim to yet do so completely), but that it defies intuition. And please, no more allusions to the ‘quantum brain’.

Similarly, just as we don’t see a bird as a modified dinosaur (ah, do we not?), she argues that ‘there is no reason to assume our species resembles in any essential way the ancient primates whose genes we carry.’ Hmm… you might want to have another attempt at that sentence. Even if we allow that Robinson perhaps means it to apply only to aspects of brain, this is more a desperate plea to liberate us from our evolutionary past than a claim with any kind of reasoned support. ‘Might not the human brain have undergone a qualitative change’ [when the first artifact appeared], she asks? Well yes, it might, and some have called that change ‘hominization’. But this does not mean we lost all our former instincts and drives. It would doubtless have been catastrophic if we had. Even I, a sceptic of evolutionary-psychological Just So stories, can see this as an attempt to resurrect the specialness of humankind that some religious people still struggle to relinquish.

Pinker et al. will have little difficulty with this rather otiose assault.

Thursday, June 03, 2010

What's the big idea?


I’m still not sure whether I did right to join the panel for the online debate being launched by Icon Books on ‘The World’s Greatest Idea’. Well, the title says it all, no? I’m dubious about any view of history as a succession of ‘great ideas’, and the notion of ranking them – abolition of slavery vs the aerofoil vs arable farming – could seem worse than meaningless. Besides, does one rate them according to how intellectually dramatic an ‘idea’ is, or how important it has been to world civilization, or how well it has served humankind, or…? But I acceded in the end because I figured it does not do to be too po-faced about an exercise that after all is just a springboard for a potential discussion about how society produces and is changed by innovation. And there is something grandly absurd about pitching sewerage against romance against simplified Chinese characters. I’m also reassured to see that someone as discerning as Patricia Fara has also taken part. Go on, place a vote – there’s no harm in it.

Friday, May 28, 2010

Not all contemporary art is rubbish


I’m thrilled to see my friend, photographic and video artist Lindsay Seers, being given some respect in Ben Lewis’s excellent piece for Prospect on why modern art is in a decadent phase. Like Ben, I think Lindsay is doing serious and interesting stuff, and I say that not just because (perhaps even despite the fact that?) I’ve been involved in some of it. I wrote a piece for Lindsay’s book Human Camera (Article Press, 2007), which I’m now inspired to put up on my web site.

Monday, May 24, 2010

Creation myths


Artificial life? Don’t ask me guv, I was too busy last week building sandcastles in Lyme Regis. However, now making up for lost time… I have a Muse on Nature’s news site (the pre-edited text of which is below – they always remove the historical quotes), and a piece on the Prospect blog. The Venter work may, if it survives the editor’s shears, also be briefly discussed on an episode of Radio 4’s Moments of Genius that I’ve just recorded with Patricia Fara, due to be broadcast this Sunday (30th May).

*********************************************************************
Claims of ‘synthetic life’ have been made throughout history. And each time, they are best regarded as mirroring what we think life is.

The recent ‘chemical synthesis of a living organism’ by Craig Venter and his colleagues at the J. Craig Venter Institute [1] sits within a very long tradition. Claims of this sort have been made throughout history. That’s not to cast aspersions on the new results: while one can challenge the notion that this new bacterium, whose genome is closely modelled on that of Mycoplasma mycoides, stands apart from Darwinian evolution, the work is nonetheless an unprecedented triumph of biotechnological ingenuity. But when set in historical context, the work reflects our changing conception of what life is and how it might be made. What has been done here is arguably not so much a ‘synthesis of life’ as a (semi-)synthetic recreation of what we currently deem life to be. And as with previous efforts, it should leave us questioning the adequacy of that view.

To see that the new results reiterate a perennial theme, consider the headline of the Boston Herald in 1899: ‘Creation of Life. Lower Animals Produced by Chemical Means.’ The article described how the German biologist Jacques Loeb had caused an unfertilized sea-urchin egg to divide by treating it with salts. It was a kind of artificial parthenogenesis, and needless to say, very far from a chemical synthesis of life from scratch.

But Loeb himself was then talking in earnest about ‘the artificial production of living matter’, and he was not alone in blending his discovery with speculations about the de novo creation of life. In 1912 the physiologist Edward Albert Schäfer alluded to Loeb’s results in his presidential address to the British Association, under the rubric ‘the possibility of the synthesis of living matter’ [2]. Schäfer was optimistic: ‘The [cell] nucleus – which may be said indeed to represent the quintessence of cell-life – possesses a chemical constitution of no very great complexity; so that we may even hope some day to see the material which composes it prepared synthetically.’

Such claims are commonly seen to imply that artificial human life is next on the agenda. It was a sign of the times that the New York Times credulously reported in 1910 that ‘Prof. Herrera, a Mexican scientist, has succeeded in forming a human embryo by chemical combination.’ It is surely no coincidence that many media reports have compared Venter to Frankenstein, or that the British Observer newspaper mistakenly suggested he has ‘succeeded in ‘creating’ human life for the first time’.
  
What is life?

Beliefs about the feasibility of making artificial organisms have been governed by the prevailing view of what life is. While the universe was seen as an intrinsically fecund matrix, permitting bees and vermin to emerge from rotten flesh by spontaneous generation, it seemed natural to imagine that sentient beings might body forth from insensate matter. The mechanical models of biology developed in the seventeenth century by René Descartes and others fostered the notion that a ‘spark of life’ – after the discovery of electricity, literally that – might animate a suitably arranged assembly of organic parts. The blossoming of chemistry and evolutionary theory spurred a conviction that it was all about getting the recipe right, so that nature’s diverse grandeur sprang from primordial colloidal jelly, called protoplasm, which Thomas Henry Huxley regarded as the ‘physical basis of life’.

Yet each apparent leap forward in this endeavour more or less coincided with a realization that the problem is not so simple. Protoplasm appeared as organic chemists were beginning on the one hand to erode the concept of vitalism and on the other to appreciate the full and baffling complexity of organic matter. The claims of Loeb and Schäfer came just before tools for visualizing the sub-cellular world, such as X-ray crystallography and the electron microscope, began to show life’s microstructure in all its complication. As H. G. Wells, his son George, and Julian Huxley explained in The Science of Life (1929-30), ‘To be impatient with the biochemists because they are not producing artificial microbes is to reveal no small ignorance of the problems involved.’

The next big splash in ‘making life’ came in 1953 when Harold Urey and Stanley Miller announced their celebrated ‘prebiotic soup’ experiment, conjuring amino acids from simple inorganic raw materials [3]. This too was obviously a very far cry from a synthesis of life, but some press reports were little troubled by the distinction: the result was regarded as a new genesis in principle if not in practice. ‘If their apparatus had been as big as the ocean, and if it had worked for a million years, instead of one week’, said Time, ‘it might have created something like the first living molecule.’ Yet that same year saw the discovery of life’s informational basis – the source of much of the ‘organization’ of organic matter that had so puzzled earlier generations – in the work of Crick and Watson. Now life was not so much about molecules at all, but about cracking, and perhaps then rewriting, the code.

Burning the book

Which brings us to Venter et al. Now that the field of genomics has fostered the belief that in sequencing genomes we are reading a ‘book of life’, whose algorithmic instructions need only be rejigged to produce new organisms, it’s easy to see why the creation of a wholly synthetic genome and its ‘booting up’ in a unicellular host should be popularly deemed a synthesis of life itself. Here the membranes, the cytoplasm, everything in fact except the genes, are mere peripherals to the hard drive of life. (The shift to a new realm of metaphor tells its own story.)

But what this latest work really implies is that it is time to lay aside the very concepts of an ‘artificial organism’ and a ‘synthesis of life’. Life is not a thing one makes, nor is it even a process that arises or is set in motion. It is a property we may choose to bestow, more or less colloquially, on certain organizations of matter. ‘Life’ in biology, rather like ‘force’ in physics, is a term carried over from a time when scientists thought quite differently, where it served as a makeshift bridge over the inexplicable.

More important than such semantics, the achievement by Venter et al. is a timely reminder that anything laying claim to the function we might call life resides not in a string of genes but in the interactions between them. Efforts to make de novo organisms of any complexity – for example, ones that can manufacture new pharmaceuticals and biofuels under demanding environmental constraints – seem likely to highlight how sketchily we understand how those interactions operate and, most importantly, what their generic principles are. The euphoria engendered by rapid whole-genome sequencing techniques is already giving way to humility (even humiliation) about the difficulty of squaring genotype with phenotype. Yet again, our ideas of where the real business of life resides are shifting: away from a linear ‘code’ and towards something altogether more abstract, emergent and entangled. In this regard at least, the latest ‘synthesis of life’ does indeed seem likely to repeat the historical template.

References
1. D. G. Gibson et al., Science doi:10.1126/science.1190719 (2010).
2. E. A. Schäfer, Nature 90, 7-19 (1912).
3. S. Miller, Science 117, 528 (1953).