Thursday, January 31, 2013

Love potions

I have just read in Richard Kieckhefer’s Magic in the Middle Ages several recipes for medieval aphrodisiacs. I think it is a great shame that these have fallen out of use (I assume), and so want to recommend that we revive them. You can try them all at home.
1. To arouse a woman’s lust, soak wool in the blood of a bat and put it under her pillow as she sleeps.
2. The testicles of a stag or bull will arouse a woman to sexual desire. (The recipe doesn’t specify how you get them.)
3. Putting ants’ eggs into her bath will arouse her violently (to desire, that is).
4. Write “pax + pix + abyra + syth + samasic” on a hazel stick and hit a woman with it three times on the head, then immediately kiss her, and you will be assured of her love. I’ve got a feeling that this one might really work.
5. I know, it isn’t fair that the traffic is all one way. So to arouse a man to passion, mix a herb with earthworms and put it in his food. OK, so it didn’t work for Roald Dahl’s Mr and Mrs Twit, but you never know.

Saturday, January 26, 2013

Will we ever understand quantum theory?

And finally for now… a piece for BBC Future's "Will We Ever...?" column on the conundrums of quantum theory (more to come on this, I think).

___________________________________________________________

Quantum mechanics must be one of the most successful theories in science. Developed at the start of the twentieth century, it has been used to calculate with incredible precision how light and matter behave – how electrical currents pass through silicon transistors in computer circuits, say, or the shapes of molecules and how they absorb light. Much of today’s information technology relies on quantum theory, as do some aspects of chemical processing, molecular biology, the discovery of new materials, and much more.

Yet the weird thing is that no one actually understands quantum theory. The quote popularly attributed to physicist Richard Feynman is probably apocryphal, but still true: if you think you understand quantum mechanics, then you don’t. That point was proved by a poll among 33 leading thinkers at a conference on the foundations of quantum theory in Austria in 2011. This group of physicists, mathematicians and philosophers was given 16 multiple-choice questions about the meaning of the theory, and their answers displayed little consensus. For example, about half believed that all the properties of quantum objects are (at least sometimes) fixed before we try to measure them, whereas the other half felt that these properties are crystallized by the measurement itself.

That’s just the sort of strange question that quantum theory poses. We’re used to thinking that the world already exists in a definite state, and that we can discover what that state is by making measurements and observations. But quantum theory (‘quantum mechanics’ is often regarded as a synonym, although strictly that refers to the mathematical methods developed to study quantum objects) suggests that, at least for tiny objects such as atoms and electrons, there may be no unique state before an observation is made: the object exists simultaneously in several states, called a superposition. Only during the measurement is a ‘choice’ made about which of these possible states the object will possess: in quantum-speak, the superposition is ‘collapsed by measurement’. Before measurement, all we can say is that there is a certain probability that the object is in state A, or B, and so on. It’s not that, before measuring, we don’t know which of these options is true – the fact is that the choice has not yet been made.
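
For readers who like to see the arithmetic, this probabilistic statement has a standard quantitative form, the Born rule – textbook material, not anything specific to the poll discussed here. For a superposition of two states A and B:

    |\psi\rangle = \alpha|A\rangle + \beta|B\rangle, \qquad P(A) = |\alpha|^2, \quad P(B) = |\beta|^2, \quad |\alpha|^2 + |\beta|^2 = 1

The complex amplitudes α and β are all the theory gives us before the measurement; the outcome itself, A or B, is decided only when the measurement is made.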

This is probably the most unsettling of all the conundrums posed by quantum theory. It disturbed Albert Einstein so much that he refused to accept it all his life. Einstein was one of the first scientists to embrace the quantum world: in 1905 he proposed that light is not a continuous wave but comes in ‘packets’, or quanta, of energy, called photons, which are in effect ‘particles of light’. Yet as his contemporaries, such as Niels Bohr, Werner Heisenberg and Erwin Schrödinger, devised a mathematical description of the quantum world in which certainties were replaced by probabilities, Einstein protested that the world could not really be so fuzzy. As he famously put it, “God does not play dice.” (Bohr’s response is less famous, but deserves to be better known: “Einstein, stop telling God what to do.”)

Schrödinger figured out an equation that, he said, expressed all we can know about a quantum system. This knowledge is encapsulated in a so-called wavefunction, a mathematical expression from which we can deduce, for example, the chances of a quantum particle being here or there, or being in this or that state. Measurement ‘collapses’ the wavefunction so as to give a definite result. But Heisenberg showed that we can’t answer every question about a quantum system exactly. There are some pairs of properties for which an increasingly precise measurement of one of them renders the other ever fuzzier. This is Heisenberg’s uncertainty principle. What’s more, no one really knows what a wavefunction is. It was long considered to be just a mathematical convenience, but now some researchers believe it is a real, physical thing. Some think that collapse of the wavefunction during measurement is also a real process, like the bursting of a bubble; others see it as just a mathematical device put into the theory “by hand” – a kind of trick. The Austrian poll showed that these questions about whether or not the act of measurement introduces some fundamental change to a quantum system still cause deep divisions among quantum thinkers, with opinions split quite evenly in several ways.
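
To pin down the uncertainty principle mentioned above: in its standard position–momentum form (a textbook statement, independent of any interpretation) it reads

    \Delta x \, \Delta p \ge \frac{\hbar}{2}

where Δx and Δp are the spreads in position and momentum and ħ is the reduced Planck constant – the more sharply one is pinned down, the fuzzier the other must become.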

Bohr, Heisenberg and their collaborators put together an interpretation of quantum mechanics in the 1920s that is now named after their workplace: the Copenhagen interpretation. This argued that all we can know about quantum systems is what we can measure, and this is all the theory prescribes – that it is meaningless to look for any ‘deeper’ level of reality. Einstein rejected that, but nearly two-thirds of those polled in Austria were prepared to say that Einstein was definitely wrong. However, only 21 percent felt that Bohr was right, with 30 percent saying we’ll have to wait and see.

Nonetheless, their responses revealed the Copenhagen interpretation as still the favourite (42%). But there are other contenders, one of the strongest being the Many Worlds interpretation formulated by Hugh Everett in the 1950s. This proposes that every possibility expressed in a quantum wavefunction corresponds to a physical reality: a particular universe. So with every quantum event – two particles interacting, say – the universe splits into alternative realities, in each of which a different possible outcome is observed. That’s certainly one way to interpret the maths, although it strikes some researchers as obscenely profligate.

One important point to note is that these debates over the meaning of quantum theory aren’t quite the same as popular ideas about why it is weird. Many outsiders figure that they don’t understand quantum theory because they can’t see how an object can be in two places at once, or how a particle can also be a wave. But these things are hardly disputed among quantum theorists. It’s been rightly said that, as a physicist, you don’t ever come to understand them in any intuitive sense; you just get used to accepting them. After all, there’s no reason at all to expect the quantum world to obey our everyday expectations. Once you accept this alleged weirdness, quantum theory becomes a fantastically useful tool, and many scientists just use it as such, like a computer whose inner workings we take for granted. That’s why most scientists who use quantum theory never fret about its meaning – in the words of physicist David Mermin, they “shut up and calculate” [Physics Today, April 1989, p9], which is what he felt the Copenhagen interpretation was recommending.

So will we ever get to the bottom of these questions? Some researchers feel that at least some of them are not really scientific questions that can be decided by experiment, but philosophical ones that may come down to personal preference. One of the most telling questions in the Austrian poll was whether there will still be conferences about the meaning of quantum theory in 50 years’ time. Forty-eight percent said “probably yes”, and only 15 percent “probably no”. Twelve percent said “I’ll organize one no matter what”, but that’s academics for you.

Stormy weather ahead

Next up, a kind of book review for Prospect. In my experience as a footballer playing on an outdoor pitch through the winter, the three-day forecasts are actually not that bad at all.

________________________________________________________________

Isn't it strange how we like to regard weather forecasting as a uniquely incompetent science – as though this subject of vital economic and social importance can attract only the most inept researchers, armed with bungling, bogus theories?

That joke, however, is wearing thin. With Britain’s, and probably the world’s, weather becoming more variable and prone to extremes, an inaccurate forecast risks more than a soggy garden party, potentially leaving us unprepared for life-threatening floods or ruined harvests.

Perhaps this new need to take forecasting seriously will eventually win it the respect it deserves. Part of the reason we love to harp on about Michael Fish’s disastrously misplaced reassurance over the Great Storm of 1987 is that there has been no comparable failure since. As meteorologists and applied mathematicians Ian Roulstone and John Norbury point out in their account of the maths of weather prediction, Invisible in the Storm (Princeton University Press, 2013), the five-day forecast is, at least in Western Europe, now more reliable than the three-day forecast was when the Great Storm raged. There has been a steady improvement in accuracy over this period and, popular wisdom to the contrary, prediction has long been far superior to simply assuming that tomorrow’s weather will be the same as today’s.

Weather forecasting is hard not in the way that fundamental physics is hard. It’s not that the ideas are so abstruse, but that the basic equations are extremely tough to solve, and that lurking within them is a barrier to prediction that must defeat even the most profound mind. Weather is intrinsically unknowable more than two weeks ahead, because it is an example of a chaotic system, in which imperceptible differences in two initial states can blossom into grossly different eventual outcomes. Indeed, it was the work of the American meteorologist Edward Lorenz in the 1960s, using a set of highly simplified equations to determine patterns of atmospheric convection, that first alerted the scientific community to the notion of chaos: the inevitable divergence of all but identical initial states as they evolve over time.
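
For the curious, here is a minimal Python sketch of that divergence – a toy illustration using Lorenz’s own 1963 convection equations, not the forecasting models discussed in the book – in which two initial states differing by one part in a hundred million soon part company:

    import numpy as np

    def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
        # One forward-Euler step of the Lorenz '63 equations (standard parameters).
        x, y, z = state
        return state + dt * np.array([sigma * (y - x),
                                      x * (rho - z) - y,
                                      x * y - beta * z])

    a = np.array([1.0, 1.0, 1.0])        # one initial state
    b = a + np.array([1e-8, 0.0, 0.0])   # an imperceptibly different one

    for step in range(1, 3001):
        a, b = lorenz_step(a), lorenz_step(b)
        if step % 500 == 0:
            print(step, np.linalg.norm(a - b))   # the separation keeps growing

After a few thousand steps the two trajectories bear no resemblance to each other, which is the predictability barrier in miniature.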

It’s not obvious that weather should be susceptible to mathematical analysis in the first place. Winds and rains and blazing heat seem prone to caprice, and it’s no wonder they were long considered a matter of divine providence. Only in the nineteenth century, flushed with confidence that the world is a Newtonian mechanism, did anyone dare imagine weather prediction could be a science. In the 1850s Louis-Napoléon demanded to know why, if his celebrated astronomer Urbain Le Verrier could mathematically predict the existence of the planet Neptune, he and his peers couldn’t anticipate the storms destroying his ships. Le Verrier, as well as the Beagle’s captain Robert FitzRoy, understood that charts of barometric air pressure offered a rough and ready way of predicting storms and temperatures, but those methods were qualitative, subjective and deeply untrustworthy.

And so weather prediction lapsed back into disrepute until the Norwegian physicist Vilhelm Bjerknes (‘Bee-yerk-ness’) insisted that it is “a problem in mechanics and physics”. Bjerknes asserted that it requires ‘only’ an accurate picture of the state of the atmosphere now, coupled to knowledge of the laws by which one state evolves into another. Although this is almost a tautology, it made the problem rational, and Bjerknes’s ‘Bergen school’ of meteorology pioneered the development of weather forecasting in the face of considerable scepticism.

The problem was, however, identified by French mathematician Henri Poincaré in 1903: “it may happen that small differences in the initial conditions produce very great ones in the final phenomena.” Then, he wrote, “prediction becomes impossible.” This was an intimation of the phenomenon now called chaos, and it unravelled the clockwork Newtonian universe of perfect predictability. Lorenz supplied the famous intuitive image: the butterfly effect, the flap of a butterfly’s wings in Brazil that unleashes a tornado in Texas.

Nonetheless, it is Newton’s laws of motion that underpin meteorology. Leonhard Euler applied them to moving fluids by imagining the mutual interactions of little fluid ‘parcels’, a kind of deformable particle that avoids having to start from the imponderable motions of the individual atoms and molecules. Euler thus showed that fluid flow could be described by just four equations.
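
For reference, a standard modern statement of those four equations – given here in the incompressible, constant-density form, which is my simplifying assumption rather than a claim about Euler’s own notation – is

    \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u} = -\frac{1}{\rho}\nabla p + \mathbf{g}, \qquad \nabla \cdot \mathbf{u} = 0

The first, vector equation is Newton’s second law applied to a fluid parcel with velocity u, pressure p and density ρ under gravity g, and supplies three component equations; the second expresses conservation of mass, making four in all.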

Yet solving these equations for the entire atmosphere was utterly impossible. So Bjerknes formulated the approach now central to weather modelling: to divide the atmosphere into pixels and compute the relevant quantities – air temperature, pressure, humidity and flow speed – in each pixel. That vision was pursued by the ingenious British mathematician Lewis Fry Richardson in the 1920s, who proposed solving the equations pixel by pixel using computers – not electronic devices but, as the word was then understood, human calculators. Sixty-four thousand individuals, he estimated (optimistically), should suffice to produce a global weather forecast. The importance of forecasting for military operations, not least the D-Day crossing, was highlighted in the Second World War, and it was no surprise that this was one of the first applications envisaged for electronic computers such as the University of Pennsylvania’s ENIAC, whose development the war stimulated.
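
To make that cell-by-cell bookkeeping concrete, here is a deliberately toy Python sketch of my own – far simpler than anything Richardson attempted – in which a single quantity, temperature, is carried around a one-dimensional ring of grid cells by a steady wind, using a simple upwind update rule:

    import numpy as np

    n_cells, dx, dt, wind = 100, 1.0, 0.5, 1.0   # illustrative values only
    temperature = np.zeros(n_cells)
    temperature[40:60] = 10.0                    # a warm 'blob' of air

    for _ in range(80):
        upwind_value = np.roll(temperature, 1)   # value in the neighbouring upwind cell
        temperature -= wind * dt / dx * (temperature - upwind_value)

    print(temperature.argmax())                  # the blob has drifted roughly 40 cells downwind

Real models do this in three dimensions for pressure, humidity and winds as well, with far subtler numerical schemes, but the structure – update every cell from its neighbours, then repeat – is the same in spirit.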

But number-crunching alone would not get you far on those primitive devices, and the reality of weather forecasting long depended on the armoury of heuristic concepts familiar from television weather maps today, devised largely by the Bergen school: isobars of pressure, highs and lows, warm and cold fronts, cyclones and atmospheric waves, a menagerie of concepts for diagnosing weather much as a doctor diagnoses from medical symptoms.

In attempting to translate the highly specialized and abstract terminology of contemporary meteorology – potential vorticity, potential temperature and so on – into prose, Roulstone and Norbury have set themselves an insurmountable challenge. These mathematical concepts can’t be expressed precisely without the equations, with the result that this book is far and away too specialized for general readers, even with the hardest maths cordoned into ‘tech boxes’. It is a testament to the ferocity of the problem that some of the most inventive mathematicians, including Richardson, Lorenz, John von Neumann and Jule Charney (an unsung giant of meteorological science) have been drawn to it.

But one of the great strengths of the book is the way it picks apart the challenge of making predictions about a chaotic system, showing what improvements we might yet hope for and what factors confound them. For example, forecasting is not always equally hard: the atmosphere is sometimes ‘better behaved’ than at other times. This is evident from the way prediction is now done: by running a whole suite (ensemble) of models that allow for uncertainties in initial conditions, and serving up the results as probabilities. Sometimes the various simulations might give similar results over the next several days, but at other times they might diverge hopelessly after just a day or so, because the atmosphere is in a particularly volatile state.
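
Here is a hedged sketch of that ensemble idea in Python, using the toy Lorenz system again as a stand-in for the atmosphere: run many copies of the model from slightly perturbed initial states and report the outcome as a probability.

    import numpy as np

    rng = np.random.default_rng(0)

    def run_member(state, steps=2000, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
        # Integrate one ensemble member of the Lorenz '63 system; return its final x value.
        x, y, z = state
        for _ in range(steps):
            x, y, z = (x + dt * sigma * (y - x),
                       y + dt * (x * (rho - z) - y),
                       z + dt * (x * y - beta * z))
        return x

    base = np.array([1.0, 1.0, 1.0])
    members = [run_member(base + rng.normal(scale=1e-3, size=3)) for _ in range(50)]

    # The 'forecast': the fraction of ensemble members in which x ends up positive.
    print(sum(m > 0 for m in members) / len(members))

When the members agree, the probability sits near 0 or 1 and the forecast is confident; when the model is in a volatile state they scatter, and the probability drifts towards evens.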

Roulstone and Norbury point out that the very idea of a forecast is ambiguous. If it rightly predicts rain two days hence, but gets the exact location, time or intensity a little wrong, how good is that? It depends, of course, on what you need to know – on whether you are, say, a farmer, a sports day organizer, or an insurer. Some floods and thunderstorms, let alone tornados, are highly localized: below the pixel size of most weather simulations, yet potentially catastrophic.

The inexorable improvement in forecasting skill is partly a consequence of greater computing power, which allows more details of atmospheric circulation and atmosphere-land-sea interactions to be included and pixels to become smaller. But the gains also depend on having enough data about the current state of the atmosphere to feed into the model. It’s all very well having a very fine-grained grid for your computer models, but at present we have less than 1 percent of the data needed fully to set the initial state of all those pixels. The rest has to come from ‘data assimilation’, which basically means filling in the gaps with numbers calculated by earlier computer simulations. Within the window of predictability – perhaps out to ten days or so – we can still anticipate that forecasts will get better, but this will require more sensors and satellites as well as more bits and bytes.
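
For a sense of what that gap-filling involves, here is a bare-bones sketch of the principle – not how operational 3D-Var or ensemble Kalman schemes actually work: where an observation exists, blend it with the model’s previous ‘background’ guess according to their assumed error variances; where none exists, keep the model value.

    import numpy as np

    background = np.full(10, 15.0)        # the model's prior guess for ten 'pixels'
    observations = {2: 17.0, 7: 13.5}     # measurements at only two locations
    var_bg, var_obs = 4.0, 1.0            # assumed error variances (illustrative)

    gain = var_bg / (var_bg + var_obs)    # how much weight an observation gets
    analysis = background.copy()
    for i, value in observations.items():
        analysis[i] = background[i] + gain * (value - background[i])

    print(analysis)                       # the 'analysis' from which the next forecast starts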

If we can’t predict weather beyond a fortnight, how can we hope to forecast future climate change, not least because the longer timescales also necessitate a drastic reduction in spatial resolution? But the climate sceptic’s sneer that the fallibility of weather forecasting renders climate modelling otiose is deeply misconceived. Climate is ‘average weather’, and as such it has different determinants, such as the balance of heat entering and leaving the atmosphere, the large-scale patterns of ocean flow, the extent of ice sheets and vegetation cover. Nonetheless, short-term weather can impact longer-term climate, not least in the matter of cloud formation, which remains one of the greatest challenges for climate prediction. Conversely, climate change will surely alter the weather; there’s a strong possibility that it already has. Forecasters are therefore now shooting at a moving target. They might yet have to brave more ‘Michael Fish’ moments in the future, but if we use those failures to discredit them, we do so at our peril.

eBay chemistry

Ah, quite a bit of stuff to post tonight. Here is the first: my latest Crucible column for Chemistry World. I have a fantasy of kitting out my cellar this way one day, although the Guy Fawkes aspect of that idea would be sure to get me banished to the garden shed instead.

__________________________________________________________________

Benzyl alcohol? Dimethyl formamide? No problem. Quickfit glassware? Choose your fitting. GC mass spectrometer? Perhaps you’d like the HP 5972 5890 model for a mere £11,500. With eBay, you could possibly kit out your lab for a fraction of the cost of buying everything new.

There are risks, of course. The 90-day warranty on that mass spectrometer is scant comfort for such an investment, and you might wonder at the admission that it is “seller refurbished”. But you can probably get your hands on almost any bit of equipment for a knockdown price, if you’re willing to take the gamble.

You don’t have to rely on the hustle of eBay. Several companies now do a brisk online trade in used lab equipment. International Equipment Trading has been selling used and refurbished instruments for “independent laboratories, small and large industries, research institutions and universities around the globe” since 1979, offering anything from electron microscopes to NMR spectrometers. The cumbersomely named GoIndustry DoveBid (“Go-Dove”) is an international “surplus asset management” company used by many British technology companies for auctioning off equipment after site closures. And LabX, founded in 1995, is establishing itself as a major clearing house for used labware, serving markets ranging from semiconductor microelectronic manufacturing to analytical chemistry and medical diagnostic labs. Companies like this don’t become successful without scrupulous attention to quality, reliability and customer satisfaction – they aren’t cowboys.

Yet these transactions probably represent the tip of the iceberg as far as used and redundant lab kit goes. Can there be a chemistry department in the developed world that doesn’t have analytical instruments standing abandoned in corners and basements, perfectly functional but a little outdated? Or jars of ageing reagents cluttering up storerooms? Or drawers full of forgotten glassware, spatulas, heatproof mats? This stuff doubtless bit painfully into grants when it was first bought, but now that investment is ignored. The gear is likely to end up one day in a skip.

With universities struggling to accommodate cuts and a keen awareness of the need to recycle, this wastage seems criminal. But there seems to be little concerted effort to do much about it. Acquiring second-hand equipment is actively discouraged in some universities – partly because of understandable concerns about its quality, but often because the bureaucracy involved in setting up an ‘approved’ purchase is so slow and complicated that no one bothers, especially for little items. (One researcher used to buy sealant for bargain-basement prices until his department forbade it.) Inter-departmental recycling could be especially valuable for chemical reagents, since you might typically have to order far more than you really need. Auctioning them is another matter, however – selling chemicals requires a license, and one insider calls this a “legal minefield”.

But universities rarely have any organized system for sharing and redistributing equipment internally, and so “lots of kit sits there doing nothing”, says one chemist I spoke to, admitting that this applies to his own lab. He also points out that the EPSRC’s scheme for funding upgrades of small-scale equipment for early-career researchers seems to include no plans for reusing the old kit. This, he says, “could be worth well over £1m, and there are many universities overseas who would love to get hold of it, and wouldn’t be concerned about fixing it themselves.”

It’s a measure of the slightly disreputable taint of the second-hand equipment market that several of the researchers I spoke to requested anonymity. Early in his career, said one, he saved a lot of money buying in this way. “For a young academic it makes sense”, he says – you can get instrumentation for perhaps a twentieth of what it would cost new, such as high-pressure liquid chromatography pumps for a few hundred pounds instead of several thousand. In equipment auctions “the prices can start at nearly nothing”, according to a chemist who helped auctioneers sell off equipment from his previous employer, the pharmaceuticals company Exelgen, when it closed its site in North Cornwall in 2009. He says some second-hand equipment is bought up by the original manufacturer simply to maintain a good market for their new products. Not everything is a bargain, however: some used gear can sell for “nearly the price of new equipment as people get into bidding”, he says. On top of that you have the auctioneer’s fees, VAT, and perhaps carriage costs for equipment needing specialized transportation, not to mention the inconvenience of having to check out the goods first. So you need to know what you’re doing. “We have bought a fair bit of equipment this way”, says another researcher, “but most items require repairs, a service or at the very least some DIY to get them going. But if you happen to have a student who enjoys playing around with kit or computers, you can save quite a lot of money.” Happy hunting!

Tuesday, January 22, 2013

The thermodynamics of images

This was a somewhat challenging topic I took on for my latest column for BBC Future.

On an unrelated matter, my talk on Curiosity at the Perimeter Institute in December is now online. The Q&A is here.

And while I am doing non sequiturs, I am deeply troubled by the news that the Royal Institution has put its Albemarle St building up for sale to cover the debts incurred in the excessively lavish refurbishment (don’t get me started). Amol Rajan in the Independent is dead right: it would be monstrous if this place were lost to science, and Faraday’s lecture theatre became a corporate office. It must be saved!

______________________________________________________________

One of the unforeseen boons of research on artificial intelligence is that it has revealed much about our own intelligence. Some aspects of human perception and thought can be mimicked easily, indeed vastly surpassed, by machines, while others are extremely hard to reproduce. Take visual processing. We can give a satellite an artificial eye that can photograph your backyard from space, but making machines that can interpret what they ‘see’ is still very challenging. That realization should make us appreciate our own virtuosity in making sense of a visual field crammed with objects, some overlapping, occluded, moving, or viewed at odd angles or in poor light.

This ability to deconstruct immense visual complexity is usually regarded as an exquisite refinement of the neural circuitry of the human brain: in other words, it’s all in the head. It’s seldom asked what rules govern the visual stimulus in the first place: we tend to regard this as simply composed of objects whose identity and discreteness we must decode. But a paper published in the journal Physical Review Letters stands the problem of image analysis on its head by asking what the typical statistical features of natural images are. In other words, what sort of problem is it, really, that we’re solving when we look at the world?

Answering that question involves a remarkable confluence of scientific concepts. There is today a growing awareness that the science of information – how data is encoded, inter-converted and transported, whether in computers, genes or the quantum states of atoms – is closely linked to the field of thermodynamics, which was originally devised to understand how heat flows in engines and other machinery. For example, any processing of information – changing a bit in a computer’s binary memory from a 1 to a 0, say – generates heat.
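
That last statement has a precise quantitative form, Landauer’s principle – a standard result, though the column doesn’t name it: erasing or resetting a single bit at temperature T must dissipate at least

    E \ge k_B T \ln 2

of heat, where k_B is Boltzmann’s constant – about 3 × 10^-21 joules per bit at room temperature, negligible for one bit but a genuine constraint on very dense computation.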

A team at Princeton University led by William Bialek now integrates these ideas with concepts from image processing and neuroscience. The consequences are striking. Bialek and his colleagues Greg Stephens, Thierry Mora and Gasper Tkacik find that in a pixellated monochrome image of a typical natural scene, some groups of black and white pixels are more common than other, seemingly similar ones. And they argue that such images can be assigned a kind of ‘temperature’ which reflects the way the black and white pixels are distributed across the visual field. Some types of image are ‘hotter’ than others – and in particular, natural images seem to correspond to a ‘special’ temperature.

One way to describe a (black and white) image is to break it down into ‘waves’ of alternating light and dark patches. The longest wavelength would correspond to an all-white or all-black image, the shortest to black and white alternating for every adjacent pixel. The finer the pixels, the more detail you capture. It is equivalent to breaking down a complex sound into its component frequencies, and a graph of the intensity of each component plotted against its wavelength is called a power spectrum. One of the characteristics of typical natural images, such as photos of people or scenery, is that they all tend to have the same kind of power spectrum – that’s a way of saying that, while the images might show quite different things, the ‘patchiness’ of light and dark is typically the same. It’s not always so, of course – if we look at the night sky, or a blank wall, there’s very little variation in brightness. But the power spectra reveal a surprising statistical regularity in most images we encounter.
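
Anyone who wants to try this on their own photographs can do so with a few lines of Python; this sketch (mine, not the authors’ code) computes the radially averaged power spectrum of a greyscale image with a fast Fourier transform. Natural scenes typically give power falling off roughly as the inverse square of spatial frequency.

    import numpy as np

    def radial_power_spectrum(image):
        # Return (frequency bin, mean power) for a 2D array of pixel brightnesses.
        f = np.fft.fftshift(np.fft.fft2(image - image.mean()))
        power = np.abs(f) ** 2
        ny, nx = image.shape
        y, x = np.indices((ny, nx))
        r = np.hypot(x - nx // 2, y - ny // 2).astype(int)   # radial spatial frequency
        sums = np.bincount(r.ravel(), weights=power.ravel())
        counts = np.bincount(r.ravel())
        return np.arange(len(sums)), sums / np.maximum(counts, 1)

    # Random noise is a 'hot', featureless image with a flat spectrum; a natural
    # photograph loaded as a 2D array would show a much steeper fall-off.
    freq, spectrum = radial_power_spectrum(np.random.rand(256, 256))
    print(spectrum[1:6])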

What’s more, these power spectra have another common characteristic, called scale invariance. This means that pretty much any small part of an image is likely to have much the same kind of variation of light and dark pixels as the whole image. Bialek and colleagues point out that this kind of scale-invariant patchiness is analogous to what is found in physical systems at a so-called critical temperature, where two different states of the system merge into one. A fluid (such as water) has a critical temperature at which its liquid and gas states become indistinguishable. And a magnet such as iron has a critical temperature at which it loses its north and south magnetic poles: the magnetic poles of its constituent atoms are no longer aligned but become randomized and scrambled by the heat.

So natural images seem to possess something like a critical temperature: they are poised between ‘cold’ images that are predominantly light or dark, and ‘hot’ images that are featureless and random. This is more than a vague metaphor – for a selection of woodland images, the researchers show that the distributions of light and dark patches have just the same kinds of statistical behaviours as a theoretical model of a two-dimensional magnet near its critical temperature.

Another feature of a system in such a critical state is that it has access to a much wider range of possible configurations than it does at either lower or higher temperatures. For images, this means that each image is essentially one of a kind – different images share few specific features, even if statistically they are similar. Bialek and colleagues suspect this might be why data files encoding natural images are hard to compress: the fine details matter in distinguishing one image from another.

What are the fundamental patterns from which these images are composed? When the researchers looked for the most common types of pixel patches – for example, 4x4 groups of pixels – they found something surprising. Fully black or white patches are very common, but as the patches become divided into increasingly complex divisions of white and black pixels, not all are equally likely: there are certain forms that are significantly more likely than others. In other words, natural images seem to have some special ‘building blocks’ from which they are constituted.
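
The patch statistics are easy to explore for yourself. This sketch – mine, and much cruder than the analysis in the paper, which used 4x4 patches – binarizes an image around its median brightness and counts how often each distinct black-and-white pattern of a small patch occurs:

    import numpy as np
    from collections import Counter

    def patch_counts(image, size=3):
        # Binarize the image and tally every non-overlapping size-by-size patch pattern.
        binary = (image > np.median(image)).astype(int)
        counts = Counter()
        ny, nx = binary.shape
        for i in range(0, ny - size + 1, size):
            for j in range(0, nx - size + 1, size):
                counts[tuple(binary[i:i + size, j:j + size].ravel())] += 1
        return counts

    counts = patch_counts(np.random.rand(120, 120))
    for patch, n in counts.most_common(3):   # the commonest patterns and their tallies
        print(n, patch)

On random noise all patterns are roughly equally common; on a natural photograph, all-black and all-white patches dominate, and certain structured patterns recur far more often than others – the ‘building blocks’ described above.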

If that’s so, Bialek and colleagues think the brain might exploit this fact to aid visual perception by filtering out ‘noise’ that occurs naturally on the retina. If the brain were to attune groups of neurons to these privileged ‘patches’, then it would be easier to distinguish two genuinely different images (made up of the ‘special’ patches) from two versions of the same image corrupted by random noise (which would include ‘non-special’ patches). In other words, natural images may offer a ready-made error-correction scheme that helps us interpret what we see.

Reference: G. J. Stephens, T. Mora, G. Tkacik & W. Bialek, Physical Review Letters 110, 018701 (2013).

Thursday, January 17, 2013

History and myth

This is my Crucible column for the January issue of Chemistry World: more on why scientists rarely make good historians.

_____________________________________________________________

The history of chemistry is a discipline founded by chemists. After all, in the days before the history of science was a recognized field of academic toil, who else would have been interested in the origins of chemistry except those who now help it bear fruit? Marcellin Berthelot was one of the first to take alchemy seriously, translating ancient manuscripts and arguing that its apparent fool’s quest for gold led to useful discoveries. J. R. Partington, often considered a founding father of the modern study of chemical history, was a research chemist at Manchester and then at Queen Mary College, where another chemist, Frank Sherwood Taylor, founded the history-of-chemistry journal Ambix in 1937.

Things are different today – and not everyone is happy about it. The history of science has become as professionalized as any branch of science itself, and is therefore likewise answerable to standards of specialized expertise that leave scant room for the amateur. As a result, some chemists who enjoy exploring the lives and works of their predecessors can feel excluded from their own past, undermined and over-ruled by historians who have their own methods, norms and agendas and yet who have perhaps never held a test-tube. Conversely, those historians may end up despairing at the over-simplified narratives that practising chemists want to tell, at their naïve attachment to the founding myths of their discipline and their determination to filter the past through the lens of the present. In short, chemists and historians of chemistry don’t always see eye to eye.

That much is clear from the comments of Peter Morris in the latest issue of Ambix [1], from the editorship of which he has just stepped down after a decade in the position. (His successor is Jennifer Rampling of the University of Cambridge.) Morris is measured and diplomatic in his remarks, but his role has evidently not been an easy one. “It is unfortunate that the last three or four decades have witnessed a separation (but not yet a divorce) between historians and chemist-historians”, he says, defining the latter as practising chemists who write history. This separation is evident from the way that, while articles in Ambix come mostly from historians, several chemistry journals, such as Angewandte Chemie and the Journal of Chemical Education, sometimes publish (or at least once did) historical pieces from chemist-historians. The editors of such journals, says Morris, rarely ask historians to write such pieces, perhaps because they don’t know any, or perhaps because they “are fearful that the professionals will transgress against the standard foundation history accepted by the scientists.”

That’s a killer punch. In other words, Morris is saying that chemists fiercely defend their myths against those who dare weigh them against the evidence. For example, Morris says that an article in Ambix [2] challenging the stock account of how Wöhler’s synthesis of urea vanquished vitalism and began organic chemistry has probably done little to dislodge this widely held belief. Some chemists doubtless still prefer the fairy tale offered in Bernard Jaffe’s Crucibles: The Story of Chemistry: “About one hundred and fifty years ago an epoch-making event took place in the laboratory of a young German still in his twenties…”

Chemists aren’t unique among scientists in displaying a certain antipathy to ‘outsiders’ deigning to dismantle their cherished fables. But at face value, it seems odd that a group who recognize a culture of expertise and value facts should resist the authority of those who actually go back to the sources and examine the past. Why should this be? In part, it merely reflects the strong Whiggish streak that infuses science, according to which the purpose of history is not to understand the past so much as to explain how we got to the present. This attitude, says Morris, is evident in the way that many chemist-historians will accept only the chemical literature as the authoritative text on history – not the secondary literature that contextualizes such (highly stylized) accounts, not the social, political or economic setting. And while for historians it is often highly revealing to examine what past scientists got wrong, for scientists those are just discredited ideas and therefore so much rubbish to be swept aside.

But, as Morris stresses, not all chemist-historians think this way, and what sometimes hinders them is simply a lack of historical training: of how to assemble a sound historical argument. The trouble is, they may not be interested in acquiring it. “Many chemists, although by no means all, are loathe to take instruction from historians, whom they perceive as being non-chemists”, he says. They might write jargon-strewn, ploddingly chronological papers with no thesis or argument, and refuse to alter a word on the advice of historians. That kind of intellectual arrogance will only widen the divide.

Morris expresses optimism that “with good will and mutual understanding” the breach can be healed. Let’s hope so, because every chemistry student can benefit from some understanding of their subject’s evolution, and they deserve more than comforting myths.

1. P. J. T. Morris, Ambix 59, 189-196 (2012).
2. P. J. Ramberg, Ambix 47, 170-195 (2000).

Tuesday, January 15, 2013

What's it all about, Albert?

Here’s the pre-edited version of a story for Nature News about a fun poll of specialists on what quantum theory means. It seems quite possible that this material will spawn some further pieces about current work on quantum foundations, not least the ‘reconstruction’ projects that attempt to rebuild the theory from scratch using a few simple axioms.

_____________________________________________________________________

New poll reveals diverse views about foundational questions in physics

Quantum theory was first devised over a hundred years ago, but even experts still have little idea what it means, according to a poll at a recent meeting reported in a preprint on the physics arXiv server [1].

The poll of 33 key thinkers on the fundamentals of quantum theory shows that opinions on some of the most profound questions are fairly evenly split over several quite different answers.

For example, votes were roughly evenly split between those who believe “physical objects have their properties well defined prior to and independent of measurement” in some cases, and those who believe they never do. And despite the famous idea that observation of quantum systems plays a key role in determining their behaviour, 21 percent felt that “the observer should play no fundamental role whatsoever.”

Nonetheless, “I was actually surprised that there was so much agreement on some questions”, says Anton Zeilinger of the University of Vienna, who organized the meeting in Austria in July 2011 at which the poll was taken.

The meeting, supported by the Templeton Foundation, brought together physicists, mathematicians and philosophers interested in the meanings of quantum theory. Zeilinger, together with Maximilian Schlosshauer of the University of Portland in Oregon and Johannes Kofler of the Max Planck Institute of Quantum Optics in Garching, Germany, devised the poll, in which attendees were given 16 multiple-choice questions on key foundational issues in quantum theory.

Disagreements over the theory’s interpretation have existed ever since it was first developed, but Zeilinger and colleagues believe this may be the first poll of the full range of views held by experts. A previous poll at a 1997 meeting in Baltimore asked attendees the single question of which interpretation of quantum theory they favoured most [2].

Probably the most famous dispute about what quantum theory means was that between Einstein and his peers, especially the Danish physicist Niels Bohr, on the question of whether the world was fundamentally probabilistic rather than deterministic, as quantum theory seemed to imply. One of the few issues in the new poll on which there was something like a consensus was that Einstein was wrong – and one of the few answers that polled zero votes was “There is a hidden determinism [in nature]”.

Bohr, along with Werner Heisenberg, offered the first comprehensive interpretation of quantum theory in the 1920s: the so-called Copenhagen interpretation. This proposed that the physical world is unknowable and in some sense indeterminate, and the only meaningful reality is what we can access experimentally. As at the earlier Baltimore meeting, the Austrian poll found the Copenhagen interpretation to be favoured over others, but only by 42 percent of the voters. However, 42 percent also admitted that they had switched interpretation at least once. And whereas a few decades ago the options were very few, says Schlosshauer, “today there are more ‘sub-views’.”

Perhaps the most striking implication of the poll is that, while quantum theory is one of the most successful and quantitatively accurate theories in science, interpreting it is as fraught now as it was at the outset. “Nothing has really changed, even though we have seen some pretty radical new developments happening in quantum physics, from quantum information theory to experiments that demonstrate quantum phenomena for ever-larger objects”, says Schlosshauer. “Some thought such developments would push people one way or the other in their interpretations, but I don't think there’s much evidence of that happening.”

However, he says there was pretty good agreement on some questions. “More than two-thirds believed that there is no fundamental limit to quantum theory – that it should be possible for objects, no matter how big, to be prepared in quantum superpositions like Schrödinger’s cat. So the era where quantum theory was associated only with the atomic realm appears finally over.”

Other notable views were that 42 percent felt it would take 10-25 years to develop a useful quantum computer, while 30 percent placed the estimate at 25-50 years. And the much debated role of measurement in quantum theory – how and why measurements affect outcomes – split the votes many ways, with 24 percent regarding it as a severe difficulty and 27 percent as a “pseudoproblem”.

Zeilinger and colleagues don’t claim that their poll is rigorous or necessarily representative of all quantum researchers. John Preskill, a specialist in quantum information theory at the California Institute of Technology, suspects that “a broader poll of physicists might have given rather different results.” [There is an extended comment from Preskill on the poll here]

Are such polls useful? “I don’t know”, says Preskill, “but they’re fun.” “Perhaps the fact that quantum theory does its job so well and yet stubbornly refuses to answer our deeper questions contains a lesson in itself”, says Schlosshauer. Maybe the most revealing answer was that 48 percent believed there will still be conferences on quantum foundations in 50 years’ time.

References
1. Schlosshauer, M., Kofler, J. & Zeilinger, A. preprint http://www.arxiv.org/abs/1301.1069 (2013).
2. Tegmark, M. Fortschr. Phys. 46, 855 (1998).

Tuesday, January 01, 2013

Give graphene a bit of space

Here’s a piece published in last Saturday’s Guardian. I see little has changed in Comment is Free, e.g. “A worthless uninformed negative article. You don't know what you are talking about. Why do you get paid for writing rubbish like this?” One even figured that the article is “anti-science.” Another decided that, because he feels steel-making is still not really a science (tell that to those now doing first-principles calculations on metal alloys), the whole article is invalidated. But this is par for the course. Back in the real world, Laurence Eaves rightly points out that his recent articles in Science and Nature Nanotech with Geim and Novoselov show that there’s hope of a solution to the zero-band-gap problem. Whether it will be economical to make microprocessors this way is a question still far off, but I agree that there’s reason for some optimism. If carbon nanotubes are any guide, however, it’s going to be a long and difficult road. Some apparently regard it as treasonous to say so, but I'm pretty sure that Andre Geim, for one, would prefer to get on with the hard work without the burden of unreasonable expectation on his shoulders. And I know that the folks at IBM are keeping those expectations very modest and cautious when it comes to graphene.

______________________________________________________________

Wonder materials are a peculiarly modern dream. Until the nineteenth century we had to rely almost entirely on nature for the fabrics from which we built our world. Not until the 1850s was steel-making a science, and the advent of the first synthetic polymers – celluloid and vulcanised rubber – around the same time, followed later by bakelite, ushered in the era of synthetic materials. As The Man in the White Suit (1951) showed, there were mixed feelings about this mastery of manmade materials: the ads might promise strength and durability, but the economy relies on replacement. When, four years later, synthetic diamond was announced by General Electric, some felt that nature had been usurped.

Yet the ‘miracle material’ can still grab headlines and conjure up utopian visions, as graphene reveals. This ultra-tough, ultra-thin form of carbon, just one atom thick and made of sheets of carbon atoms linked chicken-wire fashion into arrays of hexagons, has been sold as the next big thing: the future of electronics and touch-screens, a flexible fabric for smart clothing and the electrodes of energy-storage devices. It’s a British discovery (well, sort of), and this time we’re not going to display our habitual dilatoriness when it comes to turning bright ideas into lucrative industries. George Osborne has announced £22m funding for commercialising graphene, the isolation of which won the 2010 Nobel prize in physics for two physicists at the University of Manchester.

It would be madness to carp about that. But let’s keep it in perspective: this investment will be a drop in the ocean if a pan-European graphene project currently bidding for a €1 bn pot from the European Union, to be decided early in 2013, is successful. All the same, it’s serious money, and those backing graphene have got a lot to live up to.

It’s not obvious that they will. With an illustrious history of materials innovation, Britain is well placed to put this carbon gossamer to work – not least, Cambridge boasts world-leading specialists in the technology of flexible, polymer-based electronics and display screens, one of the areas in which graphene looks most likely to make a mark. But overseas giants such as Samsung and Nokia are already staking out that territory, and China is making inroads too.

Perhaps more to the point, graphene might not be all it is talked up to be. No matter how hard the Manchester duo Andre Geim and Konstantin Novoselov stress that the main attraction so far is the remarkable physics of the substance and not its potential uses, accusations of hype have been flung at those touting this wonder material. The idea that all our microchips will soon be based on carbon rather than silicon circuits looks particularly dodgy, since it remains all but impossible to switch a graphene transistor (the central component of integrated circuits) fully off. They leak, leading one expert to call graphene “an extremely bad material that an electronics designer would not touch with a ten-foot pole”. Even optimists don’t forecast the graphene computer any time soon.

But here graphene is perhaps a victim of its own success: it’s such strange, interesting stuff that there’s almost a collective cultural wish to believe it can do anything. That’s the curse of the ‘miracle material’, and we have plastics to blame for it.

For plastics were the first of these protean substances. Before that, materials tended to have specific, specialized uses, their flaws all too evident. Steel was strong but heavy, stone hard but brittle. Leather and wood rotted. But plastics? Stronger than steel, hard, soft, eternal, biodegradable, insulating, conductive, sticky, non-stick, they tethered oil rigs and carried shopping. They got us used to the idea that a single fabric can be all things to all people. As a result, a new material is expected to multi-task. High-temperature superconductors, which nabbed a Nobel in 1987, would give us maglev trains and loss-free power lines. Carbon nanotubes (a sort of tubular graphene discovered in 1991) would anchor a Space Elevator and transform microelectronics. These things haven’t materialized, partly because it is really, really hard to secure a mass market overnight for high-tech, expensive new materials, especially when that means displacing older, established materials. They are instead finding their own more limited niches. Graphene will do so too. But miracle materials? They don’t really exist.

Sunday, December 23, 2012

The prospects for economics? Don't bank on them

Perhaps I am simply trying to discharge all my rants before Christmas. But I did get a bit peeved at the rather mindless way in which the Queen’s ‘briefing’ on the financial crisis during her visit to the Bank of England was reported by all and sundry – which triggered this blog piece for Prospect. The comment from Peter Whipp is priceless. I’m not sure even Milton Friedman would have gone quite that far.

Thursday, December 20, 2012

Maths of the pop-up tent

Here’s my latest news story for Nature. This image shows a wood sculpture made by the paper’s authors to illustrate their principles. It’s worth seeing the video of the folding process on the Nature site too – this is one of those problems that is much more easily expressed in images than in words.

____________________________________________________

Ever wrestled with a pop-up tent, trying to fold it back up to fit in the bag? Help is at hand, in the form of a mathematical theory to describe the shapes adopted by the kinds of stressed flexible rings from which these tents are made [1]. As a result, says team leader Alain Jonas of the Catholic University of Louvain in Belgium, “we have found the best way to fold rings”.

Rings buckled into three-dimensional shapes crop up in many everyday contexts. They are used not just for pop-up tents but to make laundry baskets, small soccer goals, and some wood and creased-origami sculptures, as well as appearing inadvertently in bicycle wheels with too-tight spokes.

Jonas and his collaborators also report microscopic versions made from rings less than a millimetre across carved by electron beams out of thin double-layer films of aluminium and silicon nitride. Because the packing of atoms in the two materials doesn’t match, the films become strained when stuck back to back, inducing buckling.

In all these cases, the shapes adopted by the rings look rather similar. Typically, two opposite edges might buckle upwards, producing a kind of saddle shape. In pop-up tents, these buckles can be bent further by hand to fold the single large ring into a coil of smaller rings.

The researchers show that all these shapes can be predicted accurately with a theory that invokes a single key mathematical concept: what they call ‘overcurvature’, which is the amount by which a circular ring is made more curved than a perfect circle. For a folded, coiled pop-up tent, for example, the final coils have more total curvature than the unfolded single ring would have.
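
One plausible way to put a number on that – my own shorthand, which may not match the authors’ exact convention – is to define the overcurvature of a closed ring of contour length L whose material is bent everywhere to curvature κ as

    O = \frac{\kappa L}{2\pi}

A plain circle of radius r has κ = 1/r and L = 2πr, giving O = 1; a ring made by joining n turns cut from a circular coil of that radius has O = n, which is the sense in which a pop-up tent coiled into three loops carries more total curvature than the single unfolded ring.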

Equivalently, one can introduce overcurvature by adding segments of arc to a circle. The researchers do this experimentally by cutting out coils of a Slinky spring and joining them together in a single ring. This allows them to study the shapes that overcurvature produces, and compare them with their mathematical theory describing the stresses that appear in such a ring and make it buckle. They can figure out, say, how many arcs need to be joined together to guarantee buckling into a specific shape – just the thing that a bent-wood sculptor might like to know.

“They find a universal family of shapes that can be produced in frustrated rings”, explains Basile Audoly of the Institut Jean le Rond d’Alembert in Paris. “This is why the folded tent looks like the Slinky and the creased origami.”

The results can be used to work out the easiest folding pathways to collapse a single overcurved ring into a small coil – the problem of folding a pop-up tent. “It’s not trivial to find this pathway empirically”, says Jonas. “You naturally start by deforming the ring in two lobes, since this is easiest. But then you have to deform the ring further into shapes that require a lot of energy.”

In contrast, he says, “if you take the pathway we propose, you have to use more energy at the start, but then have to cross lower energy barriers to reach the energy valley of the ring coiled in three” – you don’t get trapped by following the path of initial least resistance. The researchers provide a detailed route for how best to reach the three-ring compact form.

They also show that such a ring can be made even more compact, folded into five rings instead of three. “This is more difficult, because the energy barriers are higher”, Jonas admits, saying that for a tent it would be best to have three people on the job. He sees no reason why this shouldn’t work for real tents, provided that the pole material is flexible and strong enough.

Jonas thinks that the results might also apply on the molecular scale to the shapes of some relatively stiff molecular rings, such as small, circular bacterial chromosomes called plasmids. Their shapes look similar to those predicted for some ring-shaped polymers squashed into spherical shells [2].

“There is a lot of interest currently in this kind of fundamental mechanical problem”, says Audoly, who points out that rather similar and related findings have been reported by several others [3-7]. For example, he says, the same question has been related to the buckled fringes at the edges of plant leaves, where tissue growth can outstrip the overall growth rate of the leaf to create excess ‘edge’ that must become folded and rippled as a result [3,4]. However, Jonas says that, compared to earlier work on such problems, finding that just the single parameter of overcurvature will describe the mechanical problem “has the virtue of allowing us to find general laws and provide easy-to-use designing tools.”

References
1. Mouthuy, P.-O., Coulombier, M., Pardoen, T., Raskin, J.-P. & Jonas, A. M. Nat. Commun. 3, 1290 (2012).
2. Ostermeir, K., Alim, K. & Frey, E. Phys. Rev. E 81, 061802 (2010).
3. Sharon, E., Roman, B., Marder, M., Shin, G.-S. & Swinney, H. L. Nature 419, 579 (2002).
4. Marder, M., Sharon, E. Smith, S. & Roman, B. Europhys. Lett. 62, 498-504 (2003).
5. Moulton, D. E., Lessinnes, T. & Goriely, A. J. Mech. Phys. Solids doi:10.1016/j.jmps.2012.09.017 (2012).
6. Audoly, B. & Boudaoud, A. Comptes Rendus Mecanique 330, 831-836 (2002).
7. Dias, M. A., Dudte, L. H., Mahadevan, L. & Santangelo, C. D. Phys. Rev. Lett. 109, 114301 (2012).

Tuesday, December 18, 2012

The problem with opera

I have been enjoying David Moser’s classic rant about the difficulties of learning Chinese, to which Alan Mackay directed me, presumably after my recent piece for Nature on how reading ideograms and alphabetical words use essentially the same parts of the brain. A lot of what Moser says rings bells – my Chinese teachers too occasionally forget how to write (rare) characters, and more often make little slips with the pinyin system (so is that tan or tang?). And I too had tended to dismiss these as just universal lapses of memory, rather overlooking the fact that this was their language, not mine. I’m glad Moser agrees that abolishing characters isn’t a viable solution, not least because the Chinese orthographic system is so beautiful. But what chimes most is that this is a problem that simply doesn’t register with Chinese people.

And that, it strikes me – and to change the subject rather lurchingly – is just how it is too with fans of opera. As I read a nice review by Philip Hensher of a new history of opera by Carolyn Abbate and Roger Parker, the penny dropped that this is why I struggle with opera. It has its moments, but in musical, theatrical and literary terms opera as we have received it has some very deep-seated problems that seem to remain utterly invisible to aficionados. That is why it was a huge relief to see Hensher, who is evidently an avid opera buff, bring these out into the open. Ask many fans what they love in opera, and they are likely to start talking about how it brings together the highest art forms – music, literature and theatre – in one glorious package. It astounds me that they remain oblivious to the profound difficulties that union presents – if not inevitably, then certainly in practice.

For a start: opera traditionally has crap plots and terrible acting. It’s not, I think, ignorant philistinism that prompts me to say this, since the view is shared by Jonathan Miller, who says that 90 percent of operas aren’t worth bothering with. Miller makes no bones about the efforts he has had to make, in directing operas, to suppress the ridiculous gestures that his performers would insist on making. His comments remind me of once watching a trained dancer in an acting class. The chap was asked to walk across the stage in a neutral way. He couldn’t do it. His body insisted on breaking into the most contrived and stylized preening, even though he’d walk down the corridor after the class just like anyone else. His training, like that of opera singers, was doubtless exquisite. It was, however, a training evidently bent on obliterating his ability to move like a normal human being. Now, opera lovers will insist that things have got better over the past several decades – opera singers actually learn something about acting now, not simply a catalogue of absurd symbolic gestures – and this is true. But it’s a slow process, and in some ways you could regard it as a ‘de-operafication’ of opera.

The same with voice. Even Hensher seems to regard opera singing as the highest pinnacle of refinement in the use of the human voice. It’s very, very hard, to be sure, but it is also utterly stylized. This is not how people sing; it is how opera is sung. That seems abundantly obvious, but opera buffs seem to have no notion of it, even though no one sings that way until they have been trained to. There are reasons for it (which I’m coming to) – but operatic singing is a highly contrived, artificial form of human vocal performance, and as such it is emotionally constrained as much as it is expressive – the emotions, like the physical gestures, are stylized conventions. That’s not necessarily a bad thing, but it is bizarre that this evident fact is not even noticed by most opera lovers. Hensher puts it, gnomically, like this: “Opera survives in a safe, hermetic, sealed condition of historic detachment, where emotion can be expressed directly because it is incomprehensible, remote and stylised.” I’m still working on that sentence – how can emotion be especially ‘direct’ precisely because it is ‘remote’ and ‘stylised’?

Plot – oh, don’t get me started. It’s too easy a target. Even opera lovers admit that most of the plots suck. Now, it’s often said that this is one of the necessary sacrifices of the art: if all the lines are sung, the action must be pretty simple. If that is so, then we must already concede that there’s some erosion of the theatrical substance. However, it doesn’t have to be that way. People can sing complex dialogue quite audibly in principle. They do it in musicals all the time. If you want to hear complex, incredibly moving emotion sung, you need only listen to Billie Holiday (or Nina Simone) singing "Strange Fruit". The fact is, however, that these things can’t be done if sung operatically, in particular because the operatic voice reduces the audibility of the words. As Hensher asks, “why harness a drama to music of such elaboration that it will always get in the way of understanding?” (though actually it’s not the music but the vocalization). He doesn’t answer that question, but I’m mighty glad he raises it. Composers themselves have acknowledged this problem, even if indirectly: it has been shown, for example, that Wagner seems to have intentionally matched the pitch of vowels in his (self-penned) libretti to the frequencies at which the vocal tract resonates when they are spoken in normal speech, making them somewhat more intelligible. (At the very highest frequencies of a female soprano, all vowels tend to sound like ‘ah’.)
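
For anyone who wants to see why that is, here is a rough little sketch of the acoustics – my own illustration, with ballpark first-formant values pulled from memory rather than from any study mentioned above. A vowel’s identity depends largely on its lowest resonances (formants); once the sung fundamental climbs above a vowel’s first formant, that resonance can no longer be heard distinctly, and singers compensate by opening the jaw, which drags every vowel towards an open ‘ah’.

    # A rough illustration, not taken from any study cited here: approximate first-formant
    # (F1) frequencies for a few vowels (ballpark values for a female voice), compared
    # with some sung pitches. Vowels whose F1 lies below the sung fundamental lose their
    # distinguishing colour.

    APPROX_F1_HZ = {          # assumed, approximate values
        "ee (i)": 310,
        "oo (u)": 370,
        "eh (e)": 610,
        "ah (a)": 850,
    }

    NOTES_HZ = {"A4": 440, "A5": 880, "top C (C6)": 1047}

    for note, f0 in NOTES_HZ.items():
        blurred = [v for v, f1 in APPROX_F1_HZ.items() if f0 > f1]
        print(f"{note} at {f0} Hz: vowels compromised -> {', '.join(blurred) or 'none'}")

At the top of the soprano range every vowel on that little list is compromised, which is more or less the point.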

Why did it do that? Because of a misconceived idea, in the early development of opera at the end of the sixteenth century, that the cadences of speech can be rendered in music. They can’t: the irregular rhythms, indefinite pitch contours and lack of melody in speech make it quite unlike musical melody, even if there are other intriguing parallels. Opera has kind of accepted this, which is why, as Hensher points out, the form became one “in which lyric utterances of a single significance alternate with brisk, less melodic passages”. Or to put it another way, we get some fabulous arias in which nothing much is said beyond “I’m heartbroken” or “I love you”, interrupted by unmusical recitative which audiences have so much learned to put up with that they barely seem to register that they are at such times not having a ‘musical’ experience at all, but rather, an operatic one. The nineteenth-century music critic Eduard Hanslick put it delicately: “in the recitative music degenerates into a mere shadow and relinquishes its individual sphere of action altogether.” In other words, as “music” those parts are gibberish.

Again, this is a choice. It is one that has historical reasons, of course – but for many opera fans it seems again simply to have become invisible, which strikes me as at least a little odd. Hensher says it nicely: “If you were going to design an art form from scratch, you'd be able to improve in a good few ways on opera as we have inherited it.” And I accept that we don’t have the option of starting again from scratch – but we do have the luxury of acknowledging the room for improvement.

In her review of the same book in Prospect, Wendy Lesser seems initially to be demonstrating the same refreshing sensibility: “Opera must be one of the weirdest forms of entertainment on the planet. Its exaggerated characters bear little relation to living people, and its plots are often ludicrous.” But it soon becomes clear that Lesser doesn’t really get the point at all. She quotes Abbate and Parker as saying “the whole business is in so many ways fundamentally unrealistic, and can’t be presented as a sensible model for leading one’s life or understanding human behaviour.” Hang on – is that what you go to the opera for? It’s not why we go to the theatre either, for goodness’ sake. Lesser soon shows that her talent for ducking the issue is remarkable; she becomes just like the Chinese people who frustrate Moser with their pride at the sheer difficulty of the language – “Yes, it’s the hardest in the world!” But, he longs to say, doesn’t that strike you as a problem? Ludicrous plots, exaggerated characters – hey, aren’t we strange to like this stuff? Well no, it’s just that there seems no particular reason to celebrate those flaws, even if you like opera in spite of them. Lesser presents the “huge acoustic force” of the opera voice as another lovable oddity, but doesn’t seem to recognize that the historical necessity of attaining such volume creates a distance from regular human experience and compromises intelligibility.

I know people who have large collections of opera recordings. Perhaps they use them to compile collections of their favourite arias – why not? But my impression is that they hardly ever put on an opera and listen to it all the way through, as we might a symphony. Now, call me old-fashioned, but I still have this notion that music is something you want to listen to because it works as a coherent whole. Opera is something else: a thing to be experienced in the flesh, as stylized and refined as Noh or Peking opera (but a fair bit more expensive). Opera is indeed an experience, and Hensher encapsulates the intensity and romance of that experience brilliantly. I only ask that, especially since opera dominates the classical music reviews to such a degree, we remember to ask: what sort of experience is it, exactly? Neither primarily musical, nor lyrical, nor theatrical – but operatic.

Maybe I’m just frustrated that in the end I know it is my loss that I sit entranced through the Prelude of Tristan and Isolde and then roll my eyes when the singing starts. I know from enough people whose judgement I trust what delights await in the operatic repertoire (you’re pushing at an open door as far as Peter Grimes is concerned). It’s just the failure of opera lovers to notice the high cost of entry (in all respects) that confounds me.

Friday, December 14, 2012

New articles

I have an article in Nature on thermal management of computers, which is also available online via Scientific American. Before I spoke to folks at IBM about this, I’d have imagined it to be deadly dull. I hope you’ll agree that it isn’t at all – in fact, it strikes me as perhaps the big potential roadblock for computing, though talked about far less than the question of how to keep on miniaturizing.

I also have an article on supercapacitors in MRS Bulletin, which can be seen here. But I have just put a longer version on my website (under Materials) which contains the references chopped out of the published version. This follows on from an article in the MRS Bulletin September Energy Quarterly on the use of supercapacitors in transport in Germany, which can be downloaded here.

In fact I have just put a few new articles up on my web site, hopefully with more to follow. Oh, and as well as writing for the ultra-nerdy MRS Bulletin, I have done a piece on emotion in music for the ‘supermarket magazine’ The Simple Things, for which you can see a sampler here. Nothing like variety. I’ll stick the pdf up on my website soon.

Tuesday, December 11, 2012

Crystallography's fourth woman?

Here is a book review just published in Nature, with some bits reinserted that got lost in the edit.

___________________________________________

I Died For Beauty: Dorothy Wrinch and the Cultures of Science by Marjorie Senechal
Oxford University Press
3 Dec 2012 (UK Jan 2013)
304 pages
$34.95

X-ray crystallography and the study of biomolecular structure was one of the first fields of modern science in which women scientists came to the fore. Dorothy Hodgkin, Rosalind Franklin and Kathleen Lonsdale are the best known of the women who made major contributions in the face of casual discrimination and condescension. In I Died for Beauty Marjorie Senechal suggests that there was nearly a fourth: Dorothy Wrinch, a name that few now recognize and that is often derided by those who do.

The late protein chemist Charles Tanford, for instance, poured scorn on Wrinch’s best-known work, the ‘cyclol theory’ of protein structure, proposed in the 1930s. It was, he said, “not really worth more than a footnote, a theory built on nothing, no training, no relevant skills”, which gained visibility only thanks to the “sheer bravura (chutzpah) of the author”. Of Wrinch herself, he proclaimed that “she was arrogant and felt persecuted when criticized, but in retrospect her miseries seem self-inflicted.”

In an attempt to rebalance such attacks, Senechal, a former assistant of Wrinch’s at Smith College in Massachusetts and now coeditor of The Mathematical Intelligencer, has written no hagiography but rather a sympathetic apologia. Whatever one feels about Wrinch and her research, she is a fascinating subject. Her circle of friends, colleagues and correspondents reads like a who’s who of early twentieth-century science and philosophy. Wrinch, a Cambridge-trained mathematician, was a student of Bertrand Russell, was championed by D’Arcy Thompson and Irving Langmuir, worked alongside Robert Robinson and knew Niels Bohr, G. H. Hardy, Kurt Gödel and John von Neumann. Several of them considered her brilliant, although one wonders how much this reflected her ambition and force of personality rather than her actual achievements. Nonetheless, calling for mathematicians to interest themselves in biology, Thompson said in 1931: “I do not know of anyone so well qualified as Dr Wrinch.” The polymathic mathematician and geophysicist Harold Jeffreys developed some of his ideas on statistical reasoning in collaboration with Wrinch at Cambridge, and wrote in Nature in 1976 of “the substantial contribution she made to this [early] work, which is the basis of all my later work on scientific inference.”

Senechal’s central question is: what went wrong? Why did so apparently promising a figure, a member of the pioneering Theoretical Biology Club that included Joseph Needham, J. D. Bernal and Conrad Waddington, end up relegated to obscurity?

The too-easy answer is: Linus Pauling. When Pauling, in a 1939 paper, comprehensively destroyed Wrinch’s cyclol theory – which argued that globular proteins are polyhedral shells, in which amino acids link into a lattice of hexagonal rings – he finished her career too. Senechal clearly feels Pauling was bullying and vindictive, although her attempt at revenge via Pauling’s cavalier dismissal of Dan Shechtman’s quasicrystals doesn’t make him any less right about proteins.

But a more complex reason for Wrinch’s downfall emerges as the story unfolds. Part of her undoing was her magpie mind. Seemingly unable to decide how to use her substantial abilities, Wrinch never really made important contributions to one area before flitting to another — from Bayesian statistics to seismology, topology to mitosis. Warren Weaver, the astute director for natural sciences at the Rockefeller Foundation that funded Wrinch for some years, offered an apt portrait: “W. is a queer fish, with a kaleidoscopic pattern of ideas, ever shifting and somewhat dizzying. She works, to a considerable extent, in the older English way, with heavy dependence on ‘models’ and intuitive ideas.”

Senechal presents a selection of opinions the Foundation collected on her while assessing her funding application, many deeply unflattering: she is a fool, she is mad or ‘preachy’, she dismisses facts that don’t fit and poaches others’ ideas. Frustratingly, we’re left to decide for ourselves how much of this is justified, but even Senechal admits that a little of Wrinch went a long way. Her wearisome habits were noted by science historian George Sarton’s daughter in an account of a London tea in 1937: “Dorothy Wrinch was there in one of her strange, simpering showing off moods, talking about herself constantly.” The evidence for a problematic personality gradually piles up.

She certainly had a talent for making enemies. “Everyone in England in or near the protein field is more than antagonistic to her,” said one of the Rockefeller interviewees. Bernal was incensed when Wrinch tried to argue that the diffraction data obtained by his student Hodgkin supported her cyclol theory – an assertion that was sloppy at best, and perhaps dishonest. In retaliation Wrinch called Bernal “jealous, brutal and treacherous”. (Hodgkin, true to form, was charitably forgiving.)

Underlying all of this is the position of Wrinch as a female scientist. Like many educated women of the 1930s, she experienced motherhood as a burden and a barrier that only extreme measures could relieve. Her eugenic inclinations and her call, in the pseudonymous The Retreat from Parenthood (1930), for state-run Child Rearing Services that would farm out children to professional carers reinforce the sense that Aldous Huxley was only writing what he heard. Alarming though her behaviourist approach to parenting might now sound (Senechal rather sidesteps Wrinch’s relationship with her daughter Pamela, who died tragically in a house fire aged 48), it is shameful that the professional structures of science have hardly made it any easier for mothers some 80 years on.

Her central problem, it seems, was that, working at a time when most male scientists assumed that women thought differently from them, Wrinch seemed to conform to their stereotype: headstrong, stubborn, strident, reliant on intuition rather than facts. It is clear in retrospect that those complaints could also be made of Wrinch’s arch-enemy Pauling: Senechal rightly observes that “Dorothy and Linus were more alike than either of them ever admitted.” She sees injustice in the way Pauling’s blunders, such as denying quasicrystals, were forgiven while Wrinch’s were not.

Was there a hint of sexism here? In this case I doubt it – Pauling, of course, unlike Wrinch, hit more than enough bullseyes to compensate. But Senechal’s imagined scene of braying men and their snickering wives poring over Pauling’s devastating paper has a depressing ring of truth.

Primarily a mathematician herself, Senechal doesn’t always help the reader understand what Wrinch was trying to do. Her interest in “the arrangement of genes on the chromosome” sounds tantalizingly modern, but it’s impossible to figure out what Wrinch understood by that. Neither could one easily infer, from Senechal’s criticisms of Pauling’s attack, that the cyclol theory was way off beam even then. Tanford has pointed out that it predicted protein structures that were “sterically impossible” – the atoms just wouldn’t fit (although cyclol rings have now been found in some natural products). Fundamentally, Wrinch was in love with symmetry – to which the title, from an Emily Dickinson poem, alludes. It was this that drew her to crystallography, and her 1946 book Fourier Transforms and Structure Factors is still esteemed by some crystallographers today. But such Platonic devotion to symmetrical order can become a false refuge from the messiness of life, both in the biochemical and the personal sense.

Senechal’s prose is mannered, but pleasantly so – a welcome alternative to chronological plod. Only occasionally does this grate. Presenting the battle with Pauling in the form of an operatic synopsis is fun, but muddles truth with invention. The account of the 1930 breakdown of Wrinch’s first husband, John Nicholson, is coy to the point of opacity.

It’s tremendous that Senechal has excavated this story. She offers a gripping portrait of an era and of a scientist whose flaws and complications acquire a tragic glamour. It’s a cautionary tale for which we must supply the moral ourselves.

Friday, November 30, 2012

Massive organ

I know, this is what Facebook was invented for. But I haven't got my head round that yet, so here it is anyway. It will be a Big Noise. Café Oto is apparently the place to go in London for experimental music, and there's none more experimental than this. Andy Saunders of Towering Inferno has put it together. Who? Look up their stunning album Kaddish: as Wiki has it, "It reflects on The Holocaust and includes East European folk singing [the peerless Márta Sebestyén], Rabbinical chants, klezmer fiddling, sampled voices (including Hitler's), heavy metal guitar and industrial synthesizer. Brian Eno described it as 'the most frightening record I have ever heard'." Come on!

The American invasion

I have a little muse on the Royal Society Winton Science Book Prize on the Prospect blog. Here it is. It was a fun event, and great to see that all the US big shots came over for it. My review of Gleick’s book is here.

_____________________________________________________________

Having reviewed a book favourably tends to leave one with proprietary feelings towards it, which is why I was delighted to see James Gleick’s elegant The Information (Fourth Estate) win the Royal Society Winton Science Book Prize last night. Admittedly, Gleick is not an author who particularly needs this sort of accolade to guarantee good sales, but neither do most of the other contenders, who included Steven Pinker, Brian Greene and Joshua Foer. Pinker’s entry, The Better Angels of Our Nature (Penguin), was widely expected to win, and indeed it is the sort of book that should: bold, provocative and original. But Gleick probably stole the lead for his glorious prose, scarcely done justice by the judging panel’s description as having “verve and fizz”. For that, go to Foer.

Gleick has enjoyed international acclaim ever since his first book in 1987, Chaos, which introduced the world to the ‘butterfly effect’ – now as much of a catchphrase for our unpredictable future as Malcolm Gladwell’s ‘tipping point’. But in between then and now, Gleick’s style has moved away from the genre-fiction potted portraits of scientists (“a tall, angular, and sandy-haired Texas native”, “a dapper, black-haired Californian transplanted from Argentina”), which soon became a cliché in the hands of lesser writers, and has matured into something approaching the magisterial.

And might that, perhaps, explain why five of the six finalists for this year’s prize were American? (The sixth, Lone Frank, is a Danish science writer, but sounds as though she learnt her flawless English on the other side of the pond.) There have been American winners before, Greene among them, but most (including the past four) have been British. Maybe one should not read too much into this American conquest – it just so happened that three of the biggest US hitters, as well as one new Wunderkind, had books out last year. But might the American style be better geared to the literary prize?

There surely is an American style: US non-fiction (not just in science writing) differs from British, just as British does from continental European. (Non-British Europeans have been rare indeed in the science book shortlists.) Americans do grandeur well, in comparison to which even our popular-science grandees, such as Richard Dawkins, Steve Jones and Lewis Wolpert, seem like quiet, diligent academics. The grand style can easily tip into bombast, but when it works it is hard to resist. Just reading the list of winners of the Pulitzer Prize for Non-Fiction makes one feel exhausted – no room here for the occasional quirkiness of the Samuel Johnson.

This year’s science book prize shortlist was irreproachable – indeed, one of the strongest for years. But it will be interesting to see whether, in this straitened time for writers, only the big and bold will survive.

Tuesday, November 27, 2012

The universal reader



This is the pre-edited version of my latest, necessarily much-curtailed news story for Nature.

_____________________________________________________________

New study suggests the brain circuits involved in reading are the same the world over

For Westerners used to an alphabetic writing system, learning to read Chinese characters can feel as though it is calling on wholly new mental resources. But it isn’t, according to a new study that uses functional magnetic-resonance imaging (fMRI) to examine people’s brain activity while they read. The results suggest that the neural apparatus involved in reading might be common to all cultures, despite their very different writing systems, and that culture simply fine-tunes this.

Stanislas Dehaene of the National Institute of Health and Medical Research in Gif-sur-Yvette, France, and his coworkers say that reading involves two neural subsystems: one that recognizes the shape of the words on the page, and the other that decodes the physical motor gestures used to make the marks.

In their tests of French and Chinese subjects, they found that both groups use both systems while reading their native language, but with different emphases that reflect the different systems of writing. They describe their findings today in the Proceedings of the National Academy of Sciences USA [1].

“Rather than focusing on ear and eye in reading, the authors rightly point out that hand and eye are critical players”, says Uta Frith, a cognitive neuroscientist at University College London. “This could lead into novel directions – for instance, it might provide answers why many dyslexics also have very poor handwriting and not just poor spelling.”

Understanding how the brain decodes symbols during reading might not only offer clues to the origin of learning impairments such as dyslexia, but also inform strategies for teaching literacy and how these might be tailored to children or adults.

It has been unclear whether the brain networks responsible for reading are universal or culturally distinct. Some previous studies have suggested that alphabetic (such as French) and logographic (such as Chinese, where single characters represent entire words) writing systems might engage different networks.

There is evidence that all cultures use a shape-recognition region in the brain’s posterior left hemisphere, including in particular a so-called visual word form area (VWFA). But some research has implied that Chinese readers also use other brain networks that are unimportant for Western readers – perhaps because the Chinese logographic system places great emphasis on the order and direction of the strokes that make up a character, thereby engaging a ‘motor memory’ for writing gestures.

Dehaene and colleagues suspected that such motor aspects of reading are universal. Some educators have long advocated this: the Montessori method, for example, uses sandpaper letters that children can trace with their fingers to reinforce the gestural aspects of letter recognition. Motor processing is evidently universal for writing, involving a brain region known as Exner’s area, and the researchers postulated that this is activated in reading too, to interpret the gestures assumed to have gone into making the marks.

To examine what the brain is up to during reading, Dehaene and colleagues used fMRI to monitor brain activity in French and Chinese subjects reading words and characters in their own language in cursive script. They asked the subjects to recognize the words and recorded their response times.

However, unbeknown to the subjects, their responses were being manipulated in subtle ways by a process called ‘priming’. Before the word itself was presented on a screen, the subjects saw other words or symbols flashed up for just 50 milliseconds – too short a time, in general, for them to be registered consciously.

These subliminal images prepared the brain for the target word. If one of them was identical to the target word itself, subjects recognized the true target more quickly. The ‘masked’ images could also show ‘nonsense’ words written with the strokes progressing in the usual (forward) direction, or as the reverse (backward) of the usual gestural direction. Moreover, the targets could be shown either as static images or dynamically unfolding as though being written – both forwards and backwards. Finally, the target could also be distorted, for example with the letters unnaturally bunched up or the strokes slightly displaced.

The researchers used these manipulations both to match the amount of stimulus given to the subjects for the very different scripts of French and Chinese, and to try to isolate the different brain functions involved in reading. For example, spatial distortion of characters disrupts the VWFA involved in shape recognition, while words that appear dynamically, as though being written, stimulate Exner’s area (the motor network) – but this network gets thrown if the words seem to be being written with backwards gestures. In each case, such disruptions slow the response time.
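
To make that logic concrete, here is a toy sketch – my own illustration, not the authors’ analysis, and the baseline, penalties and noise are invented numbers purely for the sake of the example. Each priming condition is assumed to speed up or slow down recognition depending on which system it helps or disrupts, and we then look at the average response time per condition.

    # A toy illustration of the priming logic described above. This is not the study's
    # analysis code; the baseline, effect sizes and noise are invented for illustration.
    import random

    random.seed(0)

    BASE_RT_MS = 600                  # assumed baseline response time
    EFFECTS_MS = {                    # assumed shifts when a system is helped or disrupted
        "identical prime": -40,       # repetition priming speeds recognition
        "static distortion": 35,      # disrupts shape recognition (VWFA)
        "backward dynamic": 30,       # disrupts gesture decoding (Exner's area)
    }

    def simulate_trial(condition):
        """Return a noisy response time (ms) for one trial of the given condition."""
        rt = BASE_RT_MS + EFFECTS_MS.get(condition, 0)
        return rt + random.gauss(0, 20)   # trial-to-trial noise

    for cond in ["neutral", "identical prime", "static distortion", "backward dynamic"]:
        mean_rt = sum(simulate_trial(cond) for _ in range(200)) / 200
        print(f"{cond:18s} mean RT ~ {mean_rt:.0f} ms")

Slower average times in the distortion and backward-gesture conditions are the kind of signature the researchers look for; the real study, of course, does this with properly controlled trials and statistics rather than invented numbers.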

Dehaene and colleagues found that the same neural networks – the VWFA and Exner’s area – were indeed activated in both French and Chinese subjects, and could be isolated using the different priming schemes. But there were cultural differences too: for example, static distortion of the target slowed down recognition for the French subjects more than the Chinese, while the effects of gestural direction were stronger for the Chinese.

The researchers suspect that the gestural system probably plays a stronger role while the VWFA has not fully matured – that is, in young children, supporting the idea that reinforcement via the motor system can assist reading. “So far the motor decoding side has been rather neglected in reading education,” says Frith.

“It is conceivable that you find individuals where one system is functioning much better than the other”, she adds. “This may be a source of reading problems not yet explored. In the past I have studied people who can read very well but who can't spell. Perhaps the spelling aspect is more dependent on kinetic memories?”

However, psycholinguist Li-Hai Tan at the University of Hong Kong questions how far these results can be generalized to non-cursive printed text. “Previous studies using printed non-cursive alphabetic words in general have not reported activity in the gesture recognition system of the brain”, he says. “However, this gesture system has been found in fMRI studies with non-cursive Chinese characters. The motor system plays an important role in Chinese children's memory of characters, whether cursive or not.”

The universality of the ‘reading network’, say Dehaene and colleagues, also supports suggestions that culturally specific activities do not engage new parts of the brain but merely fine-tune pre-existing circuits. “Reading thus gets a free ride on ancient brain systems, and some reading systems are more user-friendly for the brain”, says Frith.

Reference

1. Nakamura, K. et al., Proc. Natl Acad. Sci. USA doi:10.1073/pnas.1217749109 (2012).

Monday, November 26, 2012

Faking Moby's fragrance

Here’s my latest piece for the BBC’s Future site. God, it is nice to have the luxury of indulging in some context without having to get to the news in the first breath. Indeed, it’s part of the thesis of this column that context can be key to the interest of a piece of work.

___________________________________________________________________

Smelling, as the New York Times put it in 1895, “like the blending of new-mown hay, the damp woodsy fragrance of a fern-copse, and the faintest possible perfume of the violet”, the aromatic allure of ambergris is not hard to understand. In the Middle East it is an aphrodisiac, in China a culinary delicacy. King Charles II is said to have delighted in dining on it mixed with eggs. Around the world it has been a rare and precious substance, a medicine and, most of all, a component of musky perfumes.

You’d never think it started out as whale faeces, and smelled like it too. As Herman Melville said in that compendium of all things cetacean, Moby Dick, it is ironic that “fine ladies and gentlemen should regale themselves with an essence found in the inglorious bowels of a sick whale”.

But vats of genetically modified bacteria could one day be producing the expensive chemical craved by the perfume industry for woody, ambergris-like scents, if research reported by biochemists at the Swiss fragrance and flavourings company Firmenich in Geneva comes to fruition. Their results are another demonstration that rare and valuable complex chemicals, including drugs and fuels, can be produced by sophisticated genetic engineering methods that convert bacteria into microscopic manufacturing plants.

Made from the indigestible parts of squid eaten by sperm whales, and usually released only when the poor whale dies from a blocked and ruptured intestine and has been picked apart by the sea’s scavengers, ambergris matures as it floats in the brine from a tarry black dung to a dense, pungent grey substance with the texture of soft, waxy stone.

Because ambergris needs this period of maturation in the open air, it couldn’t be harvested from live sperm whales even in the days when hunting was sanctioned. It could be found occasionally in whale carcasses – in Moby Dick the Pequod’s crew trick a French whaler into abandoning a whale corpse so that they can capture its ambergris. But most finds are fortuitous, and large pieces of ambergris washed ashore can be worth many thousands of dollars.

The perfume industry has long accepted that it can’t rely on such a scarce, sporadic resource, and so it has found alternatives to ambergris that smell similar. One of the most successful is a chemical compound called Ambrox, devised by Firmenich’s fragrance chemists in the 1950s and featured, I am told, in Dolce & Gabbana’s perfume Light Blue. One perfume website describes it, with characteristically baffling hyperbole, as follows: “You're hit with something that smells warm, oddly mineral and sweetly inviting, yet it doesn't exactly smell like a perfumery or even culinary material. It's perfectly abstract, approximating a person's aura rather than a specific component”.

To make Ambrox, chemists start with a compound called sclareol, named after the southern European herb Salvia sclarea (Clary sage) from which it is extracted. In other words, to mimic a sperm whale’s musky ambergris, you start with an extract of sage. This is par for the course in the baffling world of human olfaction. Although in this case Ambrox has a very similar structure to the main smelly molecules in ambergris, that doesn’t always have to be so: two odorant molecules can smell almost identical while having very different molecular structures (they are all generally based on frameworks of carbon atoms linked into rings and chains). That’s true, for example, of two other ambergris-like odorants called timberol and cedramber. Equally, two molecules that are almost identical, even mirror images of one another, can have very different odours. Quite how such molecules elicit a smell when they bind to the proteins in the olfactory membrane of the nasal cavity is still not understood.

Clary sage is easier to get hold of than ambergris, but even so the herb contains only tiny amounts of sclareol, and it is laborious to extract and purify. That’s why Firmenich’s Michel Schalk and his colleagues wanted to see if they could take the sclareol-producing genes from the herb and put them in the gut bacterium Escherichia coli, the ubiquitous single-celled workhorse of the biotechnology industry whose fermentation for industrial purposes is a well-developed art.

Sclareol belongs to a class of organic compounds called terpenes, many of which are strong-smelling and are key components of the essential-oil extracts of plants. Sclareol contains two rings of six carbon atoms each, formed when enzymes called diterpene synthases stitch together parts of a long chain of carbon atoms. The Firmenich researchers show that the formation of sclareol is catalysed in two successive steps by two different enzymes.

Schalk and colleagues extracted and identified the genes that encode these enzymes, and transplanted them into E. coli. That alone, however, doesn’t necessarily make the bacteria capable of producing lots of sclareol. For one thing, they also have to be able to make the long-chain starting compound, which can be achieved by adding yet another gene from a different species of bacteria that happens to produce the stuff naturally.

More challengingly, all of the enzymes have to work in synch, which means giving them genetic switches to regulate their activity. This approach – making sure that the components of a genetic circuit work together like the parts of a machine to produce the desired chemical product – is known as metabolic engineering. This is one level up from genetic engineering, tailoring microorganisms to carry out much more demanding tasks than those possible by simply adding a single gene. It has already been used for bacterial production of other important natural compounds, such as the anti-malarial drug artemisinin.

With this approach, the Firmenich team was able to create an E. coli strain that could turn cheap, abundant glycerol into significant quantities (about 1.5 grams per litre) of sclareol. So far this has just been done at a small scale in the lab. If it can be scaled up, you might get to smell expensively musky without the expense. Or at least, you would if price did not, in the perfume business, stand for an awful lot more than mere production costs.
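
To get a rough sense of what that titre means in practice, here is a back-of-the-envelope sketch – the 1.5 grams per litre comes from the paper, but the fermenter volumes are my own illustrative assumptions, not anything Firmenich has disclosed.

    # Back-of-the-envelope arithmetic using the reported titre of about 1.5 g/L sclareol.
    # The batch volumes below are assumed for illustration, not figures from the paper.
    TITRE_G_PER_L = 1.5

    for volume_l in (1, 1000, 50000):      # lab flask, pilot tank, industrial fermenter (assumed)
        yield_kg = TITRE_G_PER_L * volume_l / 1000
        print(f"{volume_l:>6} L batch -> about {yield_kg:g} kg of sclareol")

Even at the pilot scale that is kilograms of a material that would otherwise have to be coaxed, laboriously, out of fields of clary sage – which is presumably the point.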

Reference: M. Schalk et al., Journal of the American Chemical Society doi:10.1021/ja307404u (2012).

Saturday, November 17, 2012

Pseudohistory of science

I have just seen that my article for Aeon, the new online “magazine of ideas and culture”, has been live for some time. This magazine seems a very interesting venture; I hope it thrives. My article changed rather little in editing and is freely available, so I’ll just give the link. All that was lost was some examples at the beginning of scientists being rude about other disciplines: Richard Dawkins suggesting that theology is not an academic discipline at all, and Stephen Hawking saying that philosophy is dead (never have I seen such profundity being attributed to a boy poking his tongue out).