Sunday, May 13, 2012

Science and wonder

This piece appeared in the 30 April issue of the New Statesman, a “science special”, for which I was asked to write about whether “science has lost its sense of wonder.”
___________________________________________________

The day I realised the potential of the internet was infused with wonder. Not wonder at the network itself, however handy it would become for shovelling bits, but at what it revealed, televised live by NASA, as I crowded round a screen with the other staff of Nature magazine on 16 July 1994. That was the day the first piece of Comet Shoemaker-Levy 9 smashed into Jupiter, turning our cynicism about previous astronomical fireworks promised but not delivered into the carping of ungrateful children. There on our cosmic doorstep bloomed a fiery apocalypse that left an Earth-sized hole in the giant planet’s baroquely swirling atmosphere. This was old-style wonder: awe tinged with horror at forces beyond our comprehension.

Aristotle and Plato didn’t agree on much, but they were united in identifying wonder as the origin of their profession: as Aristotle put it, “It was owing to their wonder that men began to philosophize”. This idea appeals to scientists, who frequently enlist wonder as a goad to inquiry. “I think everyone in every culture has felt a sense of awe and wonder looking at the sky”, wrote Carl Sagan, locating in this response the stirrings of a Copernican desire to know who and where we are.

But that’s not the only direction in which wonder may take us. To Thomas Carlyle, wonder sits at the beginning not of science but of religion. That is the central tension in forging an alliance of wonder and science: will it make us curious, or induce us to prostrate ourselves in pitiful ignorance?

We had better get to grips with this question before too hastily appropriating wonder to sell science. That’s surely what is going on when pictures from the Hubble Space Telescope are (unconsciously?) cropped and coloured to recall the sublime iconography of Romantic landscape painting, or the Human Genome Project is wrapped in Biblical rhetoric, or the Large Hadron Collider’s proton-smashing is depicted as “replaying the moment of creation”. The point is not that such things are deceitful or improper, but that if we want to take that path, we should first consider the complex evolution of science’s relation to wonder.

For Sagan, wonder is evidently not just an invitation to be curious but a delight: it is wonderful. Maybe the ancients felt this too; the Latin equivalents admiratio and mirabilia seem to have their roots in an Indo-European word for ‘smile’. But this was not the wonder enthusiastically commended by medieval theologians, which was more apt to induce fear, reverence and bewilderment. Wonder was a reminder of God’s infinite, unknowable power – and as such, it was the pious response to nature, as opposed to the sinful prying of ‘curiosity’, damned by Saint Augustine as a ‘lust of the eyes’.

In that case, wonder was a signal to cease questioning and fall to your knees. Historians Lorraine Daston and Katharine Park argue that wonder and curiosity followed mirror-image trajectories between the Middle Ages and the Enlightenment, from good to bad and vice versa, conjoining symbiotically only in the sixteenth and seventeenth centuries – not incidentally, the period in which modern science was born.

It’s no surprise, then, to find the early prophets of science uncertain how to manage this difficult emotion of wonder. Francis Bacon admitted it only as a litmus test of ignorance: wonder signified “broken knowledge”. The implicit aim of Bacon’s scientific programme was to make wonders cease by explaining them, a quest that began with medieval rationalists such as Roger Bacon and Albertus Magnus. That which was understood was no longer wonderful.

Undisciplined wonder was thought to induce stupefaction. Descartes distinguished useful wonder (admiration) from useless (astonishment, literally a ‘turning to stone’ that “makes the whole body remain immobile like a statue”). Useful wonder focused the attention: it was, said Descartes, “a sudden surprise of the soul which makes it tend to consider attentively those objects which seem to it rare and extraordinary”. If the ‘new philosophers’ of the seventeenth century admitted wonder at all, it was a source of admiration, not debilitating fear. The northern lights might seem “frightful” to the “vulgar Beholder”, said Edmond Halley, but to him they would be “a most agreeable and wish’d for Spectacle”.

Others shifted wonder to the far side of curiosity: something that emerges only after the dour slog of study. In this way, wonder could be dutifully channelled away from the phenomenon itself and turned into esteem for God’s works. “Wonder was the reward rather than the bait for curiosity”, say Daston and Park, “the fruit rather than the seed.” It is only after he has carefully studied the behaviour of ants to understand how elegantly they coordinate their affairs that Dutch naturalist Jan Swammerdam admits to his wonder at how God could have arranged things thus. “Nature is never so wondrous, nor so wondered at, as when she is known”, wrote Bernard Fontenelle, secretary of the French Academy of Sciences. This is a position that most modern scientists, even those of a robustly secular persuasion, are comfortable with: “The science only adds to the excitement and mystery and awe of a flower”, said physicist Richard Feynman.

This kind of wonder is not an essential part of scientific practice, but may constitute a form of post hoc genuflection. It is informed wonder that science generally aims to cultivate today. The medieval alternative, now regarded as ignorant, gaping wonder, was and is denounced and ridiculed. That wonder, says social historian Mary Baine Campbell, “is a form of perception now mostly associated with innocence: with children, the uneducated (that is, the poor), women, lunatics, and non-Western cultures… and of course artists.” Since the Enlightenment, Daston and Park concur, uncritical wonder has become “a disreputable passion in workaday science, redolent of the popular, the amateurish, and the childish.” Understanding nature was a serious business, requiring discipline rather than pleasure, diligence rather than delight.

Descartes’ informed, sober wonder re-emerged as an aspect of Romanticism, whether in the Naturphilosophie of Schelling and Goethe or the passion of English Romantics like Coleridge, Shelley and Byron, who had a considerable interest in science. Now it was not God but nature herself who was the object of awe and veneration. While natural theologians such as William Paley discerned God’s handiwork in the minutiae of nature, the grander marvels of the Sublime – wonder’s “elite relative” as Campbell aptly calls it – exposed the puny status of humanity before the ungovernable forces of nature. The divine creator of the Sublime was no intricate craftsman who wrought exquisite marvels, but worked only on a monolithic scale, with massive and inviolable laws. He (if he existed at all) was an architect not of profusion but of a single, awesome order.

Equally vexed during science’s ascension was the question of what was an appropriate object for wonder. The cognates of the Latin mirabilia – marvels and miracles – reveal that wonder was generally reserved for the strange and rare: the glowing stone, the monstrous birth, the fabulous beast. No mere flower would elicit awe like Feynman’s – it would have to be misshapen, or to spring from a stone, or have extraordinary curative powers. This was a problem for early science, because it threatened to misdirect curiosity towards precisely those objects that are the least representative of the natural order. When the early Royal Society sought to amass specimens for its natural history collection, it was frustrated by the inclination of its well-meaning donors throughout the world to send ‘wonderful’ oddities, thinking that only exotica were worthy gifts. If they sent an egg, it would be a ‘monstrous’ double-shelled one; if a chicken, it had four legs. What they were supposed to do with the four-foot cucumber of one benefactor was anyone’s guess.

This collision of the wondrous with the systematic was evident in botanist Nehemiah Grew’s noble efforts to catalogue the Society’s chaotic collection in the 1680s. What this “inventory of nature” needed, Grew grumbled, were “not only Things strange and rare, but the most known and common amongst us.” By fitting strange objects into his complex classification scheme, Grew was attempting to neutralize their wonder. Underlying that objective was a growing conviction that nature’s order (or was it God’s?) brooked no exceptions. In earlier times, wondrous things took their significance precisely from their departure from the quotidian: monstrous births were portents, as the term itself implied (monstrare: to show). Aristotle had no problem with such departures from regular laws – but precisely because they were exceptions, they were of little interest. Now, in contrast, these wonders became accommodated into the grand system of the world. Far from being aberrations that presaged calamity and change, comets obeyed the same gravitational laws as the planets.

There is perhaps a little irony in the fact that, while attempting to distance themselves from a love of wonders found in the tradition of collectors of curiosities, these early scientists discovered wonders lurking in the most prosaic and unlikely of places, once they were examined closely enough. Robert Hooke’s Micrographia (1665), a gorgeously illustrated book of microscopic observations, was a compendium of marvels equal to any fanciful medieval account of journeys in distant lands. Under the microscope, mould and moss became fantastic gardens, lice and fleas were intricate armoured brutes, and the multifaceted eyes of a fly reflected back ten thousand images of Hooke’s laboratory. Micrographia shows us a determined rationalist struggling to discipline his wonder into a dispassionate record.

Stern and disciplined reason triumphed: it came to seem that science would bleach the world of wonder. Thence the disillusion in Keats’ Lamia:
Do not all charms fly
At the mere touch of cold philosophy?
There was an awful rainbow once in heaven:
We know her woof, her texture; she is given
In the dull catalogue of common things.

But science today appreciates that the link between curiosity and wonder should not and probably cannot be severed, for true curiosity – as opposed, say, to obsessive pedantry, acquisitiveness or problem-solving – grinds to a halt when deprived of wonder’s fuel. You might say that we first emancipated curiosity at the expense of wonder, and then re-admitted wonder to take care of public relations. Yet in the fear of the subjective that characterizes scientific discourse, wonder is one of the casualties; excitement and fervour remain banished from the official records. This does not mean they aren’t present. Indeed, the passions involved in wonder and curiosity, as an aspect of the motivations for research, are a part of the broader moral economy of science that, as Lorraine Daston says, “cannot dictate the products of science in their details [but is] the framework that gives them coherence and value.”

Pretending that science is performed by people who have undergone a Baconian purification of the emotions only deepens the danger that it will seem alien and odd to outsiders, something carried out by people who do not think as they do. Daston believes that we have inherited a “view of intelligence as neatly detached from emotional, moral, and aesthetic impulses, and a related and coeval view of scientific objectivity that brand[s] such impulses as contaminants.” It’s easy to understand the historical motivations of this attitude: the need to distinguish science from credulous ‘enthusiasm’, to develop an authoritative voice, to strip away the pretensions of the mystical Renaissance magus acquiring knowledge by personal revelation. But we no longer need this dissimulation; worse, it becomes a defensive reflex that exposes scientists to the caricature of the emotionally constipated boffin, hiding within thickets of jargon.

They were never really like this, despite their best efforts. Reading Robert Boyle’s account of witnessing phosphorus for the first time, daubed on the finger of a German chemical showman to trace out “Domini” on his sister’s expensive carpet in Pall Mall, you can’t miss the wonder tinged with fear in his account of this “mixture of strangeness, beauty and frightfulness”.

That response to nature’s spectacle remains. It’s easy to mock Brian Cox’s spellbound admiration as he looks heavenward, but the spark in his eyes isn’t just there for the cameras. You only have to point binoculars at the crescent moon on a clear night, seeing as Galileo did the sunlit peaks and shadowed valleys where lunar day becomes night, to see why there is no need to manufacture a sense of wonder about such sights.

Through a frank acknowledgement of wonder – admitting it not just for marketing, but into the very inception of scientific inquiry – it might be possible to weave science back into ordinary experience, to unite the objective with the subjective. Sagan suggested that “By far the best way I know to engage the religious sensibility, the sense of awe, is to look up on a clear night.” Richard Holmes locates in wonder a bridge between the sentiments of the Romantic poets and those of their scientific contemporaries.

Science deserves this poetry, and needs it too. When his telescope showed the Milky Way to be not a cloudy vapour but “unfathomable… swarms of small stars placed exceedingly close together”, Galileo was already doing better than today’s astronomers at conveying his astonishment and wonder without compromising the clarity of his description. But look at what John Milton, who may have seen the same sight through Galileo’s own telescope when he visited the old man under house arrest in Arcetri, made of this vision in Paradise Lost:
A broad and ample road, whose dust is gold,
And pavement stars, as stars to thee appear
Seen in the galaxy, that milky way
Which nightly as a circling zone thou seest
Powdered with stars.

Not even Carl Sagan could compete with that.

Who knew?

I don’t really understand science reporting in the mainstream media. They tend to set a very high bar of originality and novelty, which is fair enough, but will then go and publish stuff that seems ancient news. I guess that occasionally there’s an argument that what seems extremely old hat to those who follow science will be new to a more general readership, which may explain Jeff Forshaw’s (perfectly good) piece on quantum computing in last week’s Observer. (There was an excuse for this, a recent Nature paper on a quantum simulator consisting of 300 beryllium ions in an electromagnetic trap – but that nice work was deeply exotic, and so was skated over very briefly.) But the article in the New York Times on the uncertainties of cloud feedbacks on climate, and Richard Lindzen’s sceptical line on it, could have been written circa the turn of the millennium. Far be it from me to complain about a piece that does a good job of setting the record straight on Lindzen’s campaign of confusion, but it seems mighty odd to be talking about it now, and I couldn’t even see an attempt at a topical peg. I’m not complaining, I’m just very puzzled about how these decisions are made.

Thursday, May 10, 2012

The start of curiosity

Here’s essentially a brief overview of my new book Curiosity, published this month. The piece appears in the latest issue of New Humanist.
____________________________________________________________
The Abel Prize, the “mathematics Nobel” awarded by the Norwegian Academy of Sciences, always goes to some pretty head-scratching stuff. But the arcane number theory of this year’s winner, Endre Szemerédi, has turned out to have important applications in computer science: a validation, according to the Academy’s president Nils Stenseth, of purely “curiosity-driven” research.

It’s a common refrain in science: questions pursued purely from a desire to know about the world have unforeseen practical applications. This argument has been advanced to justify the $6 bn Large Hadron Collider at the European particle-physics centre CERN, which, according to CERN’s former Director General Robert Aymar, is “continuing a tradition of human curiosity that’s as old as mankind itself.” At a time when the UK physical sciences research council is starting to demand absurd “impact assessments” for grant applications, this defence of science motivated by nothing more than inquisitiveness is essential.

But Aymar’s image of a long-standing “tradition of curiosity”, although widely shared by scientists, is too simplistic. There’s evidently an evolutionary benefit in wanting to explore our environment – we’re not the only animals to do that. But curiosity is a much more subtle, many-faceted notion, and our relationship to it has fluctuated over the ages. We are unlikely to do justice to what curiosity in science could and should mean today unless we understand this history.

For one thing, the word itself has had many meanings – too many, in fact, to identify any core concept at all. A “curious” person could indeed be an inquisitive one, but could equally be one who simply took care (Latin cura) in what they did. Not just people but objects too might be described as “curious”, and this might mean that they were rare, exotic, elegant, collectable, valuable, small, hidden, useless, expensive – but conversely, in certain contexts, common, useful or cheap. From the late sixteenth century, European nobles and intellectuals indulged a cult of curiosities, amassing vast collections of weird and wonderful objects which they displayed in room-sized ‘cabinets’. A typical cabinet of curiosities, like that of Charles I’s gardener John Tradescant in Lambeth, might contain all manner of rare beasts, shells, furs, minerals, ethnographic objects and exquisite works of craftsmanship. This spirit of collecting, usually biased towards the strange and wonderful rather than the representative, infused early science – the Royal Society had its own collection – and it gave rise to the first public museums. But it also made some early scientists focus on peculiar rather than ordinary phenomena, which threatened to turn them into bauble collectors rather than investigators of nature.

This enthusiasm for curiosities was something new, and arose outside of the mainstream academic tradition. Until the late Renaissance, curiosity in the sense that is normally implied today – investigation driven purely by the wish to know – was condemned. In ancient Greece it was seen as an unwelcome distraction rather than an aid to knowledge. For Aristotle, curiosity (periergia) had little role to play in philosophy: it was a kind of aimless, witless tendency to pry into things that didn’t concern us. Plutarch considered curiosity the vice of those given to snooping into the affairs of others: the kind of busybody known in Greek as a polypragmon.

In early Christianity it was worse than that. Now curiosity was not merely frowned upon but deemed sinful. “We want no curious disputation after possessing Christ Jesus”, wrote the second-century Christian apologist Tertullian, “no inquisition after enjoying the gospel.” The Bible told us all we needed – and should expect – to know.

Scripture made it clear that there were some things we were not supposed to know. God was said to have created Adam last so that he would not see how the rest of the job was done. Desire for forbidden knowledge led to the Fall. The transgressive aspect of curiosity is an insistent theme in Christian theology, which time and again demanded that one respect the limits of inquiry and be wary of too much learning. ‘The secret things belong to the Lord our God’, proclaims Deuteronomy, while Ecclesiasticus warns that we should “be not curious in unnecessary matters, for more things are shewed unto thee than men understand.”

In the hands of Augustine, curiosity became a “disease”, one of the vices or lusts at the root of all sin. “It is in divine language called the lust of the eyes”, he wrote. “From the same motive, men proceed to investigate the workings of nature, which is beyond our ken – things which it does no good to know and which men only want to know for the sake of knowing.” He claimed that curiosity is apt to pervert, to foster an interest in “mangled corpses, magical effects and marvellous spectacles.”

There was, then, a lot of work to be done before the early modern scientists of the seventeenth century – men like Galileo, Johannes Kepler, Robert Boyle, Robert Hooke and Isaac Newton – could give free rein to their curiosity. Needless to say, despite popular accounts of the so-called Scientific Revolution which imply that these men began to ask questions merely because of their great genius, there were many factors that emancipated curiosity. Not least was the influence of the tradition of natural magic, which insisted that nature was controlled by occult forces (literally invisible, such as magnetism and gravity) that could furnish a rational explanation of even the most marvellous things. This tradition had a strong experimental bias, denied the cosy tautologies of academic Aristotelianism, and was determined to uncover the “secrets” of nature.

The discovery of the New World, and the age of exploration in general, also opened minds with its demonstration that there was far more in the world than was described in the books of revered ancient philosophers. Accounts of investigations with telescopes and microscopes by the likes of Galileo and Hooke make reference to the “new worlds” that these devices reveal at both cosmic and minute scales, often presenting these studies as voyages of discovery – and conquest – comparable to that of Columbus.

But this liberation of curiosity was more complicated than is sometimes implied. For one thing, it forced the issue of how to assess evidence and reports – whose word could be trusted? Scientists like Boyle began to develop what historian Steven Shapin has called a “literary technology” designed to convey authority with rhetorical tricks, such as the dispassionate, disembodied tone that now characterizes, some might say blights, the scientific literature. Curiosity became apt to be laughed at rather than condemned: during the Restoration and the early Enlightenment, writers such as Thomas Shadwell, Samuel Butler and Jonathan Swift wrote satires mocking the Royal Society’s apparent fascination with trivia, such as the details of a fly’s eye.

And the problem with curiosity is that it can be voracious: the questions never cease. Everything Hooke put into his microscope looked new and strange. Boyle lamented that curiosity provoked disquiet and anxiety because it goaded people on without any prospect of comprehending all of nature in one person’s lifetime. Like others, he drew up “to do” lists that are more or less random and incontinent, showing how hard it was to discipline curiosity into a coherent research programme.

Today we continue this slightly uneasy dance with curiosity. Not just curiosity but also its mercurial cousin wonder are enlisted in support of huge projects like the LHC and the Hubble Space Telescope. But, however well motivated they are, one has to ask how much space is left in huge, costly international collaborations like these for the sort of spontaneous curiosity that allowed Hooke and Boyle to follow their noses: can we really have “curiosity by committee”? That’s why we shouldn’t let Big Science blind us to the virtues of Small Science, of the benchtop experiment, often with cheap, improvised equipment, that leaves space for trying out hunches and wild ideas, revelling in little surprises, and indulging in science as a craft. Such experiments may turn out to be fantastically useful, or spectacularly useless. Each is a little act of homage to curiosity and, in consequence, to our humanity.

Tuesday, May 08, 2012

Comment is free, for better or worse

I’ve been meaning for ages to say something about the brief experience of writing a column for the Guardian. I’m prompted to do it belatedly now in the light of the current discussion (here and here, for example) about online comments/poisonous tweets/trolling. Not that I felt I was at the sharp end of all that, and certainly not in comparison to some poor souls (although do I really mean Louise Mensch?). But what hasn’t received a great deal of comment in the latest debates is the general tone of online comments, which forms the backdrop against which the more obvious acts of nastiness and lunacy get played out.

The column was prematurely terminated, as I always knew it might be, for cost-cutting reasons. And I had somewhat mixed feelings about that. It was undoubtedly disappointing, because I was getting into my stride and had topics that I’d hoped to be able to cover. But it was also something of a relief. The column went onto the Comment is Free site, which meant that it got a lot of web feedback. And this is always a somewhat odd beast, but I hadn’t experienced it to quite this degree before. I had been encouraged to engage with the responses, to the extent of making comments of my own. But I’d begun to find that rather draining.

This was not simply a matter of time. I was finding the tone of the discussion wearying, not least because I found myself responding in the same spirit. And that disturbed me.

I’d discovered before the typical tenor of web feedback when a piece I wrote on economics for the FT was picked up and debated – or rather, dissected and derogated – on some economics blogs. On that occasion I’d been naively surprised at how aggressive some of the posts were. As it happened, I responded to one of these threads, which opened up a debate with the (extremely well informed) blogger Dave Altig that ended up being productive and constructive: I felt that we’d both listened and taken on board some aspects of the other’s point of view. This left me thinking that it can be valuable to engage with critics online – I’ve subsequently discovered that that is sometimes true and sometimes not.

All the same, that episode gave me a glimpse of the snarky, embittered tone that characterises quite a lot of online feedback. By no means all of the Guardian comments were of that nature. Some were very thoughtful and informed, particularly in my piece on science funding. But after having written several of these columns, some common themes among the critical comments began to emerge.

The most sobering was this. I have tended, again naively, to assume that when one writes something in public, people read it and then decide whether they agree or not. Some might decide you’ve written a pile of tosh, and might tell you so. That’s fine. But now I realise that this isn’t how it works. It seems that many readers – at least the ones who post comments, which is of course an extremely particular and self-selecting group – don’t read what you’ve said in the first place. I don’t mean that in the sense of that annoying rhetorical accusation that “you obviously didn’t even read what I said”. I mean that they read the words through such a cloud of preconceptions that the real meaning simply cannot register. Many readers, it seems, read just what they want/expect to read, which is often a ready-made version of an idea that they disagree with. The disagreement then comes not so much from a difference of opinion but from a lack of comprehension. And let me say that this comprehension doesn’t seem to have any correlation with education or professional status – I’m shocked at how poorly some scientists seem able to understand basic English. It almost makes me wonder how the scientific literature functions.

There are some other recurring strategies and tropes. Chief among them is a sense of immense resentment – who the hell are you to be writing this stuff? You call yourself a scientist/journalist/expert, but you don’t even know the most basic facts! It’s again very sobering to discover that there has presumably always been this burning rancour against people who write in the media that only now has been given a means of expression. And so the feedback becomes a litany of one-upmanship, like the chap who couldn’t possibly imagine that anyone writing a science column could have managed the awesome feat of actually reading Jonathan Swift. It was this vying for the intellectual high ground – or rather, a crude “I know more than you” – that I could see myself succumbing to, and I didn’t like it.

Then there are the comments that are clearly meant to be gems of caustic wit but which are merely incomprehensible onanism. “This article is pure rot. The knitting of shreddies, however small, by Grandmothers can only be seen as a force for good. Cold milk and plenty of sugar.” Yes, well thank you. This isn’t a big deal, but it is strangely irritating.

And there’s the question of anonymity. I can’t help feeling (and it’s been said countless times, I know) that the tone of the feedback is all of a piece with the fact that it comes from folk who conceal their names – even just a given name would do, and I’m heartened to see that most of the [by definition] lovely folks who comment on my blog are comfortable with that – and who choose instead macho monikers such as “CrapRadar”. Why this insistence on utter concealment? So full marks to those appearing (I assume) as themselves and not some cartoon character. I don’t mean to imply that anyone who doesn’t use their own name is some kind of craven bully, just that this aspect of web culture is not without its problems.

Oh, I know it could be so much worse. What we’ve heard recently about online misogyny includes some very grim stuff. In comparison, I’m not sure that anything on CiF qualifies as trolling exactly, and some of the comments are interesting and funny. But I do find it a little dispiriting to discover that so much of what passes as debate on these forums is really a jaded effort to be as cynical, dismissive and superior as one can be. And that presumably this attitude had always been out there, longing for the platform that it now has.

Lip-reading the emotions

And another BBC Future piece… I was interviewed on related issues recently (not terribly coherently, I fear) for the BBC’s See Hear programme for the deaf community. This in turn was a spinoff from my involvement in a really splendid documentary by Lindsey Dryden on the musical experiences of people with partial hearing, called Lost and Sound, which will hopefully get a TV airing some time soon.
____________________________________________________________

I have no direct experience with cochlear implants (CIs) – electronic devices that partly compensate for severe hearing impairment – but listening to a simulation of the sound produced is salutary. It is rather like hearing things underwater: fuzzy and with an odd timbre, yet still conveying words and some other identifiable sounds. It’s a testament to the adaptability of the human brain that auditory information can be recognizable even when the characteristics of the sound are so profoundly altered. Some people with CIs can appreciate and even perform music.

The use of these devices can provide insights into how sound is processed in people with normal hearing – insights that can help us to identify what can potentially go wrong and how it might be fixed. That’s evident in a trio of papers buried in the recondite but infallibly fascinating Journal of the Acoustical Society of America, a publication whose scope ranges from urban noise pollution to whale song and the sonic virtues of cathedrals.

These three papers examine what gets lost in translation in CIs. Much of the emotional content, as well as some semantic information, in speech is conveyed by the rising and falling of voice – what is called prosody. In English, prosody can distinguish a question from a statement (at least before the rising inflection became fashionable). It can tell us if the speaker is happy, sad or angry. But because the pitch of sounds, as well as their ‘spectrum’ of sound frequencies, is not well conveyed by CIs, users may find it harder to identify such cues – they can’t easily tell a question from a statement, say, and they rely more on visual than auditory information to gauge a speaker’s emotional state.

Takayuki Nakata of Future University Hakodate in Japan and his coworkers have verified that Japanese children who are congenitally deaf but use CIs are significantly less able to identify happy, sad, and angry voices in tests in which normal hearers of the same age have virtually total success [1]. They went further than previous studies, however, in asking whether these difficulties inhibit a child’s ability to communicate emotion through prosody in their own speech. Indeed they do, regardless of age – an indication both that we acquire this capability by hearing and copying, and that CI users face the additional burden of being less likely to have their emotions perceived.

Difficulties in hearing pitch can create even more severe linguistic problems. In tonal languages such as Mandarin Chinese, changes in pitch may alter the semantic meaning of a word. CI users may struggle to distinguish such tones even after years of using the device, and hearing-impaired Mandarin-speaking children who start using them before they can speak are often scarcely intelligible to adult listeners – again, they can’t learn to produce the right sounds if they can’t hear them.

To understand how language tones might be perceived by CI users, Damien Smith and Denis Burnham of the University of Western Sydney in Australia have tested normal hearers with audio signals of spoken Mandarin altered to simulate CIs. The results were surprising [2].

Both native Mandarin speakers and English-speaking subjects do better in identifying the (four) Mandarin tones when the CI-simulated voices are accompanied by video footage of the speakers’ faces. That’s not so surprising: it’s well known that we use visual cues to perceive speech. But all subjects did better than random guessing with the visuals alone, and in this case non-Mandarin speakers did better than Mandarin speakers. In other words, native speakers learn to disregard visual information in preference for auditory. What’s more, these findings suggest that CI users could be helped by training them to recognize the visual cues of tonal languages: if you like, to lip-read the tones.

There’s still hope for getting CIs to convey pitch information better. Xin Luo of Purdue University in West Lafayette, Indiana, in collaboration with researchers from the House Research Institute, a hearing research centre in Los Angeles, has figured out how to make CIs create a better impression of smooth pitch changes such as those in prosody [3]. CIs do already offer some pitch sensation, albeit very coarse-grained. The cochlea, the pitch-sensing organ of the ear, contains a coiled membrane which is stimulated in different regions by different sound frequencies – low at one end, high at the other, rather like a keyboard. The CI creates a crude approximation of this continuous pitch-sensing device using a few (typically 16-22) electrodes to excite different auditory-nerve endings, producing a small set of pitch steps instead of a smooth pitch slope. Luo and colleagues have figured out a way of sweeping the signal from one electrode to the next such that pitch changes seem gradual instead of jumpy.
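(A toy illustration, for the technically inclined: the general ‘current steering’ idea that this kind of scheme builds on can be sketched in a few lines of Python. The 16-electrode array and the simple linear cross-fade below are my own illustrative assumptions, not details from Luo’s paper.)

```python
def steer(pitch_position, n_electrodes=16):
    """Toy 'current steering': render a pitch position (0 to 1 along the
    electrode array) by splitting current between the two nearest
    electrodes. Illustrative only - not the scheme of Luo et al. [3]."""
    place = pitch_position * (n_electrodes - 1)   # continuous place on array
    lo = min(int(place), n_electrodes - 2)        # lower of the two electrodes
    frac = place - lo                             # how far towards the upper one
    weights = [0.0] * n_electrodes
    weights[lo] = 1.0 - frac
    weights[lo + 1] = frac
    return weights

# A slow pitch glide now moves in many small steps rather than a few big jumps:
for p in (0.50, 0.52, 0.54, 0.56):
    print(["%.2f" % w for w in steer(p)[7:10]])
```

Cross-fading the current between neighbouring electrodes creates ‘virtual channels’ between the physical ones, which is why the perceived pitch can slide rather than jump.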

The cochlea can also identify pitches by, in effect, ‘timing’ successive acoustic oscillations to figure out the frequency. CIs can simulate this method of pitch discrimination too, but only for frequencies up to about 300 Hertz, the upper limit of a bass singing voice. Luo and colleagues say that a judicious combination of these two ways of conveying pitch, enabled by signal-processing circuits in the implant, creates a synergy that, with further work, should offer much improved pitch perception for users: enough, at least, to allow them to capture more of the emotion-laden prosody of speech.

References
1. T. Nakata, S. E. Trehub & Y. Kanda, Journal of the Acoustical Society of America 131, 1307 (2012).
2. D. Smith & D. Burnham, Journal of the Acoustical Society of America 131, 1480 (2012).
3. X. Luo, M. Padilla & D. M. Landsberger, Journal of the Acoustical Society of America 131, 1325 (2012).

Thursday, May 03, 2012

The entropic sieve

Here’s another of my (pre-edited) earlier pieces for the BBC Future site. Must catch up on these now – there are several more.
_______________________________________________________

Sorting out tiny particles and molecules of different sizes is necessary for various technologies, from gene sequencing to nanotechnology. But it sounds like a pretty tedious business, right?

It’s no surprise, then, that a recent paper describing a new technique for doing this garnered no headlines. But it’s well worth a closer look. For one thing, it sounds like sheer magic.

Physicist Peter Hänggi at the University of Augsburg in Germany and his colleagues show that you can take a tube containing a mixture of big and small particles, apply some force to pull them through in one direction (an electric field would do the job for charged particles, say), and then give it a shake. And hey presto – the small particles will drop out of the end towards which the force pulls them, whereas the big particles drop out of the other end (D. Reguera et al., Phys. Rev. Lett. 108, 020604 (2012)).

Not only is this trick very clever but it’s also rather profound, touching on some of the most fundamental principles of physics. The device stems from a loophole proposed in the nineteenth century for evading the second law of thermodynamics, in effect making a perpetual motion machine. Needless to say, the new particle separator isn’t that, but the explanation of why not requires an excursion into the recondite field of information theory. Deep stuff from what is basically a grain sorter.

There are already ways to separate molecules by size. You can literally sieve them using solid materials with tiny pores of uniform size, such as the zeolite minerals used to separate and selectively alter some hydrocarbons in crude oil. And a technique called gel electrophoresis is used to separate strands of DNA chopped into different lengths – a standard procedure for sequencing genes – according to the size-dependent speed at which an electric field drags them through the gel. These techniques work well enough for most purposes. But the one devised by Hänggi and colleagues is potentially more efficient.

Like all good magic, you have to look inside to see how it’s done. The tube is divided into a series of funnel-shaped chambers connected by narrow necks – looked at in cross-section, it resembles two saw blades with the teeth not quite touching. This sawtooth profile is all it takes to make the large and small particles move in opposite directions in response to a combination of the force and the shaking.

The tube is what physicists call a Brownian ratchet. The name derives from Brownian motion, the random movement of tiny particles such as pollen grains in water, or indeed water molecules themselves, due to the jiggling of heat. (For pollen grains, it’s actually the random collisions of jiggling water molecules that cause the movement.) Normal Brownian motion doesn’t favour any direction over any other – the particles just wander at random. But a bias can be introduced by putting the particles in asymmetric surroundings, such as lodging them in a series of grooves with a sawtooth profile, the slopes steeper in one direction than the other.

When Brownian ratchets were first proposed, they caused consternation because they seemed to violate the laws of thermodynamics and allow perpetual motion. In 1912 the Polish physicist Marian Smoluchowski suggested that a tiny ratchet-and-pawl might be induced to turn in just one direction by random thermal shaking. Fifty years later, Richard Feynman showed why it wouldn’t work if the temperature of the apparatus is the same everywhere.

But Brownian ratchets aren’t easily dismissed. They seem to represent an example of a Maxwell demon, which also appears to violate thermodynamics. In the nineteenth century, James Clerk Maxwell suggested how heat might travel from cold to hot, in contradiction of the second law of thermodynamics, if a little ‘demon’ opened a trapdoor between two compartments each time a ‘hot’ molecule happened to reach it, thereby accumulating heat in one compartment. It wasn’t until the 1980s that the reason prohibiting Maxwell’s demon was understood: you have to take into account the information that the demon uses to make its choices, which itself incurs a cost in entropy – in disorder – that balances the thermodynamic books.
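(In its textbook form – this is the Bennett–Landauer resolution in outline, not anything specific to the ratchet paper – the bookkeeping fits on one line: the order the demon gains by sorting is paid for, at minimum exactly, when its one-bit record of each molecule is erased.)

```latex
\underbrace{\Delta S_{\text{gas}}}_{-k_B \ln 2 \ \text{per sorted molecule}}
\;+\;
\underbrace{\Delta S_{\text{memory erasure}}}_{\geq\; +k_B \ln 2 \ \text{per bit}}
\;\geq\; 0
```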

Yet Brownian ratchets can work if they don’t rely on random thermal ‘noise’ alone – if there is some other factor that tips the balance, so that the system is out of thermodynamic equilibrium. It seems likely that Brownian ratchets exist in molecular biology, inducing directional motion of components of the cell driven by a combination of biochemical energy and noise.

What makes the ratchet described theoretically by Hänggi’s team different from previous incarnations is that they have shown how to make the different particles move in wholly different directions. Normally they’d just move in the same direction at different speeds, because the small particles find it easier to ‘climb’ the steep slopes than the big particles. Another way of saying this is that the big particles are more strongly repelled by entropy from the vicinity of the steep slopes. The researchers show that the force pulling the particles against the ratcheting flow can be adjusted to a level just big enough to overcome the tendency of the small molecules’ random jiggling to move them preferentially in the direction of the shallow slopes, but not big enough to counteract this for the big molecules. And voilà: they head off in opposite directions, separated by entropy. The team show that, after several passes through the tube, a mixture of two particles of slightly different sizes – two chopped-up, screwed-up strands of DNA like those encountered in gene sequencing, say – can be segregated damned near perfectly.
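(If you fancy seeing the effect for yourself, the essential physics fits into a short simulation. The Python sketch below is my own toy model, not the authors’ calculation: an overdamped particle in a sawtooth potential, pulled by a constant bias and ‘shaken’ by an oscillating force, with particle size entering only through the diffusion coefficient D – by Stokes–Einstein, smaller particles have larger D. All parameter values are illustrative; finding a window where the drift genuinely reverses sign with size takes some fiddling.)

```python
import numpy as np

def sawtooth_force(x, period=1.0, asym=0.8, height=1.0):
    """Force from an asymmetric sawtooth potential: a gentle rise over
    the first 80% of each period, a steep drop over the last 20%."""
    xm = x % period
    return np.where(xm < asym * period,
                    -height / (asym * period),
                    height / ((1.0 - asym) * period))

def mean_velocity(D, bias=-0.3, shake=2.0, omega=2.0 * np.pi,
                  n_particles=500, t_total=100.0, dt=1e-3, seed=0):
    """Euler-Maruyama integration of overdamped Langevin motion for an
    ensemble of identical particles with diffusion coefficient D."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_particles)
    t = 0.0
    for _ in range(int(t_total / dt)):
        force = sawtooth_force(x) + bias + shake * np.cos(omega * t)
        x += force * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(n_particles)
        t += dt
    return x.mean() / t_total

for D in (0.05, 0.1, 0.3, 0.6):   # from 'big' particles to 'small' ones
    print(f"D = {D:4.2f}   mean drift velocity = {mean_velocity(D):+.3f}")
```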

Below the surface

Here’s my Crucible column for the May issue of Chemistry World. Arguably a bit parochial, but hopefully not without some resonance outside the UK.
_________________________________________________________________________
According to the latest announcement in the UK Engineering and Physical Sciences Research Council’s (EPSRC) “Shaping capability” initiative, surface science is to receive reduced funding in future. It’s a perplexing decision.

This is just one of several controversial aspects of the direction the EPSRC is taking. But when you look at the topic-by-topic ratings made by the council (each is designated ‘maintain’, ‘grow’ or ‘reduce’), it is hard not to feel a little sympathy. Almost every subject is earmarked to receive the current level of support, or more. In the latter category are many well-motivated choices, such as energy storage and photonics. Obviously not every subject can enjoy this privilege, and so hard decisions must be made. Whatever it ‘reduces’, the EPSRC is bound to incur criticism from those affected. The decision to reduce synthetic organic chemistry will surely also provoke dismay among RSC members. All the same, compromising surface science seems especially short-sighted given the apparent desire to focus on subjects that might boost economic growth.

It’s true that one of the most industrially important aspects of surface science – catalysis – is covered by a separate category that will not suffer the same fate. But there’s plenty more to the subject that deserves strong support. As Peter Knight, president of the Institute of Physics, has said in response to the announcement, “surface science is an area of interdisciplinary research, often the most fertile source of new scientific breakthroughs”.

The EPSRC argues that it doesn’t regard the importance of surface science as having declined, but rather, that it is becoming assimilated into other topics. The funding cut is intended to accelerate this transition: the EPSRC seems to be proposing that the previous system is no longer the best way to allocate funds for surface science. Or to put it another way, the topic has become a victim of its own success in making itself so pervasive.

The council says that “we would expect future surface science research to make significant contributions to other disciplines and key societal challenges”, and identifies nanotechnology and microelectronic device engineering among these. Some surface scientists have already suggested that ‘rebadging’ into such areas will rescue them.

But can applications like these be severed from the wellspring of basic science that makes them possible? Take the development of scanning probe microscopes in the 1980s, pioneered at IBM’s laboratories in Zurich. These tools, now fundamental to nanoscience and biophysics (for example), were devised purely as a means of high-resolution surface imaging, although their potential for nanoscale manipulation of matter, probing surface forces, and exploring quantum phenomena quickly became apparent. IBM has emphasized these fundamental aspects of the methodology ever since, most recently by demonstrating that charge distributions of single molecules can be imaged directly (F. Mohn, L. Gross, N. Moll & G. Meyer, Nature Nanotechnol., doi:10.1038/nnano.2012.20 (2012)) – an advance that could conceivably offer new insights into chemical bond formation.

This is just one example of how the development of new techniques in surface science is rarely problem-specific. Whether it is low-energy electron diffraction, surface-enhanced Raman spectroscopy, scanning optical microscopy or countless other methods, these techniques are hungrily adopted by many different fields. In fairness, the EPSRC says that a priority for surface science “in the reduced environment is the development of novel and sophisticated new tools and techniques for the study of surfaces”. But how can that objective avoid seeming diminished by its ‘reduced environment’?

And furthermore, can the core of surface science really be just methodological? I doubt it. The conceptual foundations, laid down by the likes of J. D. van der Waals and Irving Langmuir, lie with notions of surface free energy, intermolecular forces, adsorption, wetting and two-dimensional phases that are of undiminished relevance today, whether one is talking about chemical vapour deposition or biomolecular hydration. There is an intellectual unity to the discipline that transcends its rich variety of techniques.

This raises an almost philosophical question of whether or not a discipline can exist and perhaps even thrive when largely divorced from an over-arching label. At the very least, it’s a gamble. But what seems most alarming is the message that this sends out at a time when the study of surfaces and interfaces is looking ever more vital to so many areas of science and technology. The days when surface science meant looking at single molecular phases on perfect crystal faces in high vacuum are disappearing. Now we are starting to get to grips with interfaces in all their scary – as Wolfgang Pauli saw it, diabolical – complexity. Real surface processes are often dominated by impurities and mixed phases, by inhomogeneous solvation, by roughness, curvature, charge accumulation, defects. Understanding them will tell us important things about cell and molecular biology, corrosion, atmospheric aerosols and cloud microphysics, nanoelectronics, biomaterials and much more.

That seems to be understood elsewhere. A new initiative for ‘solvation science’ in Germany, for example, recognizes the cross-cutting features of studying interfaces. And despite excelling in this area, the UK lacks a dedicated surface-science body like the Surface Science Society of Japan. Such considerations suggest that it would be more opportune to be strengthening foundations rather than chipping away at them.

Sunday, April 29, 2012

Fantastic colours

I have an article on physical colours in nature, and their mimicry in artificial systems, in the latest issue of Scientific American. All you can get online without a subscription is a ‘preview’. But I shall put an extended version of the piece on my website soon.

Friday, April 27, 2012

Bad faith

I have a new Muse piece up on Nature news – very little done in editing, so I’ll just give the link. I fear that there will be more griping about my being soft on religion, but I don’t see it that way at all. The fact that so many religious people have so little interest in the intellectual tradition of religion should cause far more concern among religious leaders than it does. Of course, maybe some of them like it that way, their followers passive and unquestioning. Anyway, the point is that you can disagree with Aquinas et al., but it is absurd to suggest that they were just deluded or lacking in analytical acumen. That isn’t in any way the implication of the Science paper discussed here, but I imagine some interpretations will take that angle.

Saturday, April 21, 2012

Imagine that!

I was a bit tetchy about Steven Poole’s criticisms in his review of The Music Instinct in the Guardian, although subsequent discussions with him helped me to understand why he raised them. But now I see I escaped lightly. In today’s Guardian Review, Steve comprehensively demolishes Jonah Lehrer’s new book Imagine, calling it a prime example of the sort of ‘neuroscientism’ which purports to explain everything about everyone with a few brightly coloured MRI scans. There is some seriously cruel stuff here: “‘For Shakespeare’, Lehrer affects to know, ‘the act of creation was inseparable from the act of connection.’” I confess to a degree of guilty pleasure in reading this unrelenting dissection, though I’d feel bad for Jonah if it weren’t evident that it would take much more than this to tarnish his growing reputation as the next Malcolm Gladwell. I suspect there is an element here of Steve’s contrarian nature rebelling against the way Lehrer has been otherwise universally hailed as a Wunderkind. But it’s not just that. I fully recognize Steve’s complaint about the current simplistic infatuation with neuroscientific jargon and imagery, as if saying that an activity activates the anterior superior temporal gyrus is equivalent to having explained it. I’ve not read Jonah’s book, and so have to reserve judgement about whether it really is a prime offender in this regard. But it’s certainly high time this tendency were put in its place. A couple of reviewers of The Music Instinct who are neuroscientists were a bit sniffy about how it didn’t make more of the wonderful advances in understanding of musical activity that brain imaging has yielded. Now, there certainly have been significant discoveries made using those technologies – I think in particular of, say, Robert Zatorre’s work on the activation of reward centres when people experience ‘musical chills’, or Petr Janata’s amazing demonstration of harmonic maps imprinted on the grey matter (both of which I mention). But Dan Levitin, while generally quite nice to the book, seemed to want more about how “listening to music activates reward and pleasure circuits in brain regions such as the nucleus accumbens, ventral tegmental area and amygdala”. Ah, so that’s how music works! This was the kind of thing I intentionally omitted, rather than overlooked, because I think that at present it does little more than fool the easily impressed reader into thinking that we’ve really ‘got inside the brain’, while in truth we often have very little idea what these increases in blood flow signify about cognition. I have to add, though, that this is the second book review I’ve read recently (the first being Richard Evans’ review in the New Statesman of A. N. Wilson’s little book on Hitler, which triggered an entertaining spat) that makes me wonder whether the Hatchet Award has upped the ante. I’m sure I’m not alone in my anxiety. [By the way, how do you put paragraph breaks into this new-look blogger tool?]

Sunday, April 15, 2012

Architectural designs

I have a paper on pattern formation in the March/April issue of Architectural Design, a special issue devoted to ‘material computation’. My piece is fairly old stuff, I confess, although this is a topic that architects are becoming increasingly interested in. I will put a version on my website, once I have figured out why it seems to have (temporarily?) vanished from the webosphere. But there’s a lot of other interesting stuff in this issue, some of which I have written about in my next column (May) for Nature Materials.

Friday, April 13, 2012

Something for the weekend

I was on BBC Radio 4’s Start the Week programme this week, still accessible here (for just a day or two) on BBC iPlayer. And a copy of the book just arrived in the post – it’s a fatty, out at the beginning of May. I’m currently most of the way through Peter Carey’s The Chemistry of Tears, and enjoying it as much as I knew I would.

Thursday, April 12, 2012

Touchy-feel chemistry

Here’s my latest Crucible column for Chemistry World.
___________________________________________________________

What does it feel like to be a molecule? Anthropomorphizing molecules is a familiar enough pedagogical trick – we’ve all seen those cutesy grinning balls-and-sticks in children’s texts on chemistry, and I’ve indulged in this exercise myself to explain the hydrogen-bonding arrangements of water. But perhaps we might stand to learn more from the opposite manoeuvre: not humanizing molecules, but molecularizing humans.

I was set thinking about this after seeing Jaron Lanier, the computer-science pioneer who coined the term ‘virtual reality’ and has done much to develop it as a technology, speak in New York about where VR may be headed. While describing exploratory research in which people are given non-human avatars (could you control a lobster body, say?), Lanier dropped one of those aperçus that reveal why he is where he is. This isn’t just an extravagant computer game, he said – in such manifestations VR can be considered to be exploring the pre-adaptations of the human brain. That’s to say, it shows us what kinds of physicality, beyond the bounds of the human body, our brains are equipped to adapt themselves to. This sort of pre-adaptation is a crucial aspect of evolution: a genetic mutation might not simply alter an existing function, for better or worse, but can sometimes unleash the potentiality already latent in the organism’s genetic program. That is quite probably how Hox genes came to facilitate whole new ranges of body plans.

And then one might ask – as Lanier did – whether, as well as lobsters, our brains have the capacity to make themselves at home in a ‘molecule’s body’. Of course, molecules, unlike lobsters, don’t move of their own volition. But might our brains in some sense be able to perceive and intuit the forces that molecules experience: to assemble such sensory data into a coherent image of the molecular world?

Why ask such a seemingly arcane question? Lanier suspects that the embodied experience of VR, by engaging more sensory processes than, say, just vision or logical thinking alone, can offer us new routes to understanding and problem-solving. This is demonstrably true. Lanier, an accomplished musician, pointed out how improvising instrumentalists find their fingers accessing solutions to harmonic or melodic problems – how do I get from here to there – that would be far harder to identify by just sitting down and thinking it out.

Chemists probably need less persuading of this than other scientists. You don’t tend to work out a complex synthesis in your head: you draw out the molecular structures, and the visual information doesn’t just record your thoughts but informs them. For some problems you need to get even more tactile, building molecular models and moving them around, turning and twisting to see if they will fit together as you’d like. That has surely been evident ever since John Dalton devised his wooden ball-and-stick models.

There are already signs that molecular science wants to take this notion of ‘feeling molecules’ to a deeper level. Some years ago I tried out the ‘haptic’ (touch-based) interface of an atomic force microscope developed by Metin Sitti’s group at Carnegie Mellon University in Pittsburgh. This allows the user to feel a representation, in real time, of what the AFM tip is ‘feeling’, such as the atomic topography of a surface and the forces that adsorbed molecules exert. It was certainly instructive – so much so that I remember the sensation vividly years later, just as I have never forgotten the feeling of putting my finger into mercury as a child. The haptic AFM felt quite different from the impression you’d get from an animation of what the instrument does: jerkier, somehow grittier.

Chemists have not so far made very extensive use of a more all-embracing VR. One exception is the Duke immersive Virtual Environment (DiVE) developed by the RISE science-education program at the Duke University Medical Center in Durham, North Carolina. This software can be used online, but is best experienced by the user fitted out with VR goggles and joystick manipulator in a small cube-shaped ‘theatre’ with images projected onto the walls and ceiling – a version of the CAVE created at the University of Illinois at Chicago.

Among the projects run for DiVE is ‘DiVe into Alcohol’, an experience that lets you follow the progress of ethanol molecules as they travel through an avatar’s gastrointestinal tract and become oxidized by the enzyme alcohol dehydrogenase in the liver. If you’re in Durham NC you can literally see for yourself: the RISE team offers an open house to all comers on Thursdays.

But Lanier seems to have something more ambitious in mind: the sensation of actually being a molecule. That sounds a little scary: what is it like to be oxidized by having your hydrogens pulled off? But who knows what insights we might gather in the process? Lanier is even exploring how to make such realizations governed by quantum rather than semiclassical rules. Might it be that the famously counterintuitive principles of quantum physics would become less so if we can actually experience them?

Thursday, April 05, 2012

Dreaming of ferroelectric sheep

Here’s the pre-edited version of another of my pieces for BBC Future (again, this link will only work outside the UK).
___________________________________________________________

There are some scientific discoveries that you never get to hear about simply because they’re too perplexing to bring news writers running. That’s likely to be true of findings reported by mechanical engineer Jiangyu Li of the University of Washington in Seattle, Yanhang Zhang of Boston University, and their colleagues. They’ve found that the tough, flexible tissue that makes up the aorta of pigs has the surprising property of ferroelectricity.

This arcane but technologically useful behaviour is found in certain crystals and liquid crystals. It’s a sort of electrical equivalent of magnetism – an analogy that explains why the phenomenon is called ferroelectricity despite the absence of iron (ferrum) in the materials that show it: it behaves much like what is technically called ferromagnetism, the kind displayed by magnetic iron.

A ferroelectric substance is electrically polarized: one side has a positive electrical charge and the other a negative charge. This polarization can be switched to the opposite direction by placing the substance in an electric field that reorients the charges. It has its origin in an uneven distribution of electrical charges in the arrangement of constituent atoms or molecules. Just as a magnetic field can make a magnetized compass needle change direction, so an electric field can pull all the little electrical charges into a different alignment.

The switchability is why ferroelectric crystals are being studied for use in electronic memory devices, where binary data would be encoded in the electrical polarization of the memory elements. They are also used in heat sensors (the switching can be very sensitive to temperature), vibration sensors and switchable liquid-crystal displays.

Li usually works on synthetic materials like these for applications such as energy harvesting and storage. He and his colleagues discovered ferroelectricity in pig aorta by placing a thin slice of it in a special microscope containing a sensitive needle tip that could detect the electrical polarization. They found that they could switch this polarization with an electric field.

Why on earth should any animal tissue be ferroelectric? Well, the living world does make use of some unexpected material properties. Bone, for example, is piezoelectric: it becomes electrically polarized, and so sets up an electric field, when squeezed. Piezoelectricity is also a useful kind of behaviour in technology: it is exploited, for instance, in pressure and vibration sensors like those in your computer keyboard. It seems that bony creatures use this principle too: the electrical response to squeezing of bone helps tissues gauge the forces they experience. In seashells, meanwhile, piezoelectricity helps prevent fracture by dissipating the energy of a shock impact as electricity.

OK – but ferroelectricity? Who needs that? Commenting on the findings, engineers Bin Chen and Huajian Gao have speculated that the property might supply another way for the tissue to register forces, and thus perhaps to monitor blood pressure. Or perhaps to sense blood temperature, or again to dissipate mechanical energy and prevent damage. Or even to act as a sort of ‘tissue memory’ in conjunction with (electrically active) nerves. Li, meanwhile, speculates that switching of the ferroelectricity might alter the way cholesterol, sugars or fats stick to and harden blood vessels.

Notice how these researchers have no sooner identified a new characteristic of a living organism than they start to wonder what it is for. The assumption is that there must be some purpose: that evolution has selected the property because it confers some survival benefit. In other words, the property is assumed to be adaptive. This is a good position to start from, because most material properties of tissues are indeed adaptive, from the flexibility of skin to the transparency of the eye’s cornea. But it’s possible that ferroelectricity could be just a side-effect of some other adaptive function of the tissue – a result of the way the molecules just happen to be arranged, which, if it does not interfere with other functions, will go unnoticed by evolution. Not every aspect of biology has a ‘purpose’.

All the same, tissue ferroelectricity could be handy. If Li is right to suspect that ferroelectricity can influence the way blood vessels take up cholesterol, sugars or fats, then switching it with an applied electric field might help to combat conditions such as thrombosis and atherosclerosis.

Paper: Y. Liu et al., Physical Review Letters 108, 078103 (2012).

Saturday, March 24, 2012

On looking good

I seem to have persuaded Charlotte Raven, and doubtless now many others too, that Yves Saint Laurent’s Forever Youth Liberator skin cream is the real deal. I don’t know if my ageing shoulders can bear the weight of this responsibility. All I can say in my defence is that it would have been unfair to YSL to allow my instinct to scoff to override my judgement of the science – on which, more here. But as Ms Raven points out, ultimately I reserve judgement. Where I do not reserve judgement is in saying, with what I hope does not seem like cloying gallantry, that it is hard to see why she would feel the need to consider shelling out for this gloop even if it does what it says on the tin. Or rather, it is very easy to understand why she feels the pressure to do so, given the ways of the world, but also evident that she has every cause to ignore it. No look, I’m not saying that if she didn’t look so good without makeup and rejuvenating creams then she’d be well advised to start slapping them on, it’s just that… Oh dear, this is a minefield.

Thursday, March 22, 2012

Wonders of New York

Here’s a piece about an event in NYC in which I took part at the end of last month.
______________________________________________________________________

It was fitting that the ‘Wonder Cabinet’, a public event at a trendy arthouse cinema in New York at the end of February, should have been opened by Lawrence Weschler, director of New York University’s Institute for the Humanities, under whose auspices the affair was staged. For Weschler’s Pulitzer-shortlisted Mr Wilson’s Cabinet of Wonder (1995) tells the tale of David Wilson’s bizarre Museum of Jurassic Technology in Los Angeles, in which you can never quite be sure whether the exhibits are factual or not (they usually are, after a fashion). And that was very much the nature of what followed in the ten-hour marathon that Weschler introduced.

Was legendary performance artist Laurie Anderson telling the truth, for instance, about her letter to Thomas Pynchon requesting permission to create an opera based on Gravity’s Rainbow? Expecting no reply from the famously reclusive author, she was surprised to receive a long epistle in which Pynchon proclaimed his admiration for her work and offered his enthusiastic endorsement of her plan. There was just one catch: he insisted that it be scored for solo banjo. Anderson took this to be a uniquely gracious way of saying no. I didn’t doubt her for a second.

The day had a remarkably high quota of such head-scratching and jaw-dropping revelations, the intellectual equivalent of those celluloid rushes in the movies of Coppola, Kubrick and early Spielberg. Even if you thought, as I did, that you knew a smattering about bowerbirds – the male of which constructs an elaborate ‘bower’ of twigs and decorates it with scavenged objects to lure a female – seeing them in action during a talk by ornithologist Gail Patricelli of the University of California at Davis was spectacular. Each species has its own architectural style and, most strikingly, its own colour scheme: blue for the Satin Bowerbird (which went to great lengths to steal the researchers’ blue toothbrushes), bone-white and green for the Great Bowerbird. Some of these constructions are exquisite, around a metre in diameter. But all that labour is only the precursor to an elaborate mating ritual in which the most successful males exhibit an enticing boldness without tipping into scary aggression. This means that female choice selects for a wide and subtle range of male social behaviour, among which are sensitivity and responsiveness to the potential mate’s own behavioural signals. And all this for the most anti-climactic of climaxes, an act of copulation that lasts barely two seconds.

Or take the octopus described by virtual-reality visionary (and multi-instrumentalist) Jaron Lanier, which was revealed by secretly installed CCTV to be the mysterious culprit stealing the rare crabs from the aquarium in which it was kept. The octopus would climb out of its tank (it could survive out of water for short periods), clamber into the crabs’ container, help itself and return home to devour the spoils and bury the evidence. And get this: it closed the crab-tank lid behind it to hide its tracks – and in doing so, offered what might be interpreted as evidence for what developmental psychologists call a ‘theory of mind’, an ability to ascribe autonomy and intention to other beings. Octopuses and squid would rule the world, Lanier claimed, if it were not for the fact that they have no childhood: abandoned by the mother at birth, the youngsters are passed on none of the learned culture of the elders, and so (unlike bowerbirds, say) must always begin from scratch.

All this orbited now close to, now more distant from the raison d’être of the event, which was philosopher David Rothenberg’s new book Survival of the Beautiful, an erudite argument for why we should take seriously the notion that non-human creatures have an aesthetic sense that exceeds the austere exigencies of Darwinian adaptation. It’s not just that the bowerbird does more than seems strictly necessary to get a mate (although what is ‘necessary’ is open to debate); the expression of preferences by the female seems as elaborately ineffable and multivalent as anything in human culture. Such reasoning of course stands at risk of becoming anthropomorphic, a danger fully appreciated by Rothenberg and the others who discussed instances of apparent creativity in animals. But it’s conceivable that this question could be turned into hard science. Psychologist Ofer Tchernichovski of the City University of New York hopes, for example, to examine whether birdsong uses the same musical tricks as human music (basically, the creation and subsequent violation of expectation) to elicit emotion, by measuring in songbirds the physiological indicators of a change in arousal – heartbeat, say, or release of the ‘pleasure’ neurotransmitter dopamine – that betray an emotional response in humans. Even if you want to quibble over what this will say about the bird’s ‘state of mind’, the question is definitely worth asking.

But it was the peripheral delights of the event, as much as the exploration of Rothenberg’s thesis, that made it a true cabinet of wonders. Lanier elicited another ‘Whoa!’ moment by explaining that the use of non-human avatars in virtual reality – putting people in charge of a lobster’s body, say – amounts to an exploration of the pre-adaptation of the human brain: the kinds of somatic embodiments that it is already adapted to handle, some of which might conceivably be the shapes into which we will evolve. This is a crucial aspect of evolution: it’s not so much that a mutation introduces new shapes and functions, but that it releases a potential that is already latent in the ancestral organism. Beyond this, said Lanier, putting individuals into more abstract avatars can be a wonderful educational tool by engaging mental resources beyond abstract reasoning, just as the fingers of an improvising pianist effortlessly navigate a route through harmonic space that would baffle the logical mind. Children might learn trigonometry much faster by becoming triangles; chemists will discover what it means to be a molecule.

Meanwhile, the iPad apps devised by engagingly modest media artist Scott Snibbe provided the best argument I’ve so far seen for why this device is not simply a different computer interface but a qualitatively new form of information technology, both in cognitive and creative terms. No wonder it was to Snibbe that Björk (“an angel”, he confided) went to realise her multimedia ‘album’ Biophilia. Whether this interactive project represents the future of music or an elaborate game remains to be seen; for me, Snibbe’s guided tour of its possibilities evoked a sensation of tectonic shift akin to the one I vaguely recall on first being told that there was this thing on the internet called a ‘search engine’.

But the prize for the most arresting shaggy-dog story went again to Anderson. Her attempts to teach her dog to communicate and play the piano were already raised beyond the status of the endearingly kooky by the profound respect in which she evidently held the animal. But when she recounted the dog’s perplexed discovery, during a mountain hike, that death, in the form of vultures, could descend from above – another 180 degrees of danger to consider – we were all suddenly reminded that we were in downtown Manhattan, just a few blocks from the decade-old hole in the financial district. And we felt not so far removed from these creatures at all.

Wednesday, March 21, 2012

The beauty of an irregular mind

Here’s the news story on this year’s Abel Prize that I’ve just written for Nature. You’ve always got to take a deep breath before diving into the Abel. But it is fun to attempt it.
___________________________________________________________

Maths prize awarded for elucidating the links between numbers and information.

An ‘irregular mind’ is what has won this year’s Abel Prize, one of the most prestigious awards in mathematics, for Endre Szemerédi of the Alfred Rényi Institute of Mathematics in Budapest, Hungary.

This is how Szemerédi was described in a book published two years ago to mark his 70th birthday, which added that “his brain is wired differently than for most mathematicians.”

Szemerédi has been awarded the prize, worth 6 million Norwegian kroner (about US$1 million), “for his fundamental contributions to discrete mathematics and theoretical computer science, and in recognition of the profound and lasting impact of these contributions on additive number theory and ergodic theory”, according to the Norwegian Academy of Science and Letters, which instituted the prize as a kind of ‘mathematics Nobel’ in 2003.

Mathematician Timothy Gowers of Cambridge University, who has worked in some of the same areas as Szemerédi, says that he has “absolutely no doubt that the award is extremely appropriate.”

Nils Stenseth, president of the Norwegian Academy of Science and Letters, who announced the award today, says that Szemerédi’s work shows how research that is purely curiosity-driven can turn out to have important practical applications. “Szemerédi’s work supplies some of the basis for the whole development of informatics and the internet”, he says. “He showed how number theory can be used to organize large amounts of information in efficient ways.”

Discrete mathematics deals with mathematical structures that are made up of discrete entities rather than smoothly varying ones: for example, integers, graphs (networks), permutations and logic operations. Crudely speaking, it entails a kind of digital rather than analogue maths, which helps to explain its relationship to aspects of computer theory.

Szemerédi was spotted and mentored by another Hungarian pioneer in this field, Paul Erdös, who is widely regarded as one of the greatest mathematicians of the 20th century – even though Szemerédi began his training not in maths at all, but at medical school.

One of his first successes was a proof of a conjecture made in 1936 by Erdös and his colleague Paul Turán concerning the properties of integers. They aimed to establish criteria for whether a series of integers contains arithmetic progressions – sequences of integers that differ by the same amount, such as 3, 6, 9…

In 1975 Szemerédi showed that any subset comprising a fixed fraction of a long enough string of integers must contain arithmetic progressions of any given length [1]. In other words, if you pick, say, 1 percent of all the numbers between 1 and some very large number N, you can’t avoid selecting some arithmetic progressions. This proved the Erdös-Turán conjecture, and the result is now known as Szemerédi’s theorem.
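
To get a feel for what the theorem asserts, here is a toy search in Python – a sketch of the statement only, emphatically not of Szemerédi’s proof, with numbers chosen purely for illustration:

import random

# Brute-force search for a 3-term arithmetic progression in a subset of 1..N.
def find_3term_progression(numbers):
    s = set(numbers)
    for a in s:
        for d in range(1, max(s)):
            if a + d in s and a + 2 * d in s:
                return (a, a + d, a + 2 * d)
    return None

N = 10000
sample = random.sample(range(1, N + 1), N // 100)  # pick 1 percent of 1..N at random
print(find_3term_progression(sample))

For a sample like this the search almost always turns up a progression, and Szemerédi’s theorem guarantees one – of any chosen length, at any fixed density – once N is large enough.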

The result connected work on number theory to graph theory, the mathematics of networks of connected points, which Erdös had also studied. The relationship between graphs and permutations of numbers is most famously revealed by the four-colour theorem, which states that it is possible to colour any map (considered as a network of boundaries) with four colours such that no two regions with the same colour share a border. The problem of arithmetic progressions becomes analogous if one imagines giving the numbers in a progression the same colour.

Meanwhile, relationships between number sequences become relevant to computer science via so-called sorting networks, which are hypothetical networks of wires, like parallel train tracks, that sort strings of numbers into numerical sequence by making pairwise comparisons and then shunting them from one wire to another. Szemerédi and his Hungarian collaborators Miklós Ajtai and János Komlós discovered an optimal sorting network for parallel processing in 1983 [2], one of several of Szemerédi’s contributions to theoretical computer science.
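
For a flavour of how a comparator network sorts, here is a minimal Python sketch. The five-comparator network below is a classic textbook example that sorts any four inputs; it is not the Ajtai-Komlós-Szemerédi construction, whose point was to do this kind of thing with optimally shallow circuits for large inputs:

def apply_network(values, comparators):
    # Each comparator (i, j) compares the values on wires i and j
    # and swaps them if they are out of order.
    v = list(values)
    for i, j in comparators:
        if v[i] > v[j]:
            v[i], v[j] = v[j], v[i]
    return v

network_4 = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]
print(apply_network([3, 1, 4, 2], network_4))  # prints [1, 2, 3, 4]

Because comparators acting on disjoint wires can fire simultaneously – here (0, 1) and (2, 3) form one parallel layer – it is the depth of the network, not the raw number of comparators, that sets the parallel running time.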

When mathematicians discuss Szemerédi’s work, the word ‘deep’ often emerges – a reflection of the connections it makes between apparently different fields. “He shows the advantages of working on a whole spectrum of problems”, says Stenseth.

For example, Szemerédi’s theorem brings number theory in contact with the theory of dynamical systems: physical systems that evolve in time, such as a pendulum or a solar system. As Israeli-American mathematician Hillel Furstenberg demonstrated soon after the theorem was published [3], it can be derived in a different way by considering how often a dynamical system returns to a particular state: an aspect of so-called ergodic behaviour, which relates to how thoroughly a dynamical system explores the space of possible states available to it.

Gowers says that many of Szemerédi’s results, including his celebrated theorem, are significant not so much for what they prove as for the fertile ideas developed in the course of the proof. For example, the proof of Szemerédi’s theorem made use of another of his key results, called the Szemerédi regularity lemma, which has proved central to the analysis of certain types of graphs.


References
1. E. Szemerédi, "On sets of integers containing no k elements in arithmetic progression", Acta Arithmetica 27: 199–245 (1975).

2. M. Ajtai, J. Komlós & E. Szemerédi, "An O(n log n) sorting network", Proceedings of the 15th Annual ACM Symposium on Theory of Computing, pp. 1–9 (1983).

3. H. Furstenberg, "Ergodic behavior of diagonal measures and a theorem of Szemerédi on arithmetic progressions", J. d'Analyse Math. 31: 204–256 (1977).

Friday, March 16, 2012

Genetic origami

Here’s another piece from BBC Future. Again, for non-UK readers the final version is here.
_______________________________________________________________

What shape is your genome? It sounds like an odd question, for what has shape got to do with genes? And therein lies the problem. Popular discourse in this age of genetics, when the option of having your own genome sequenced seems just round the corner, has focused relentlessly on the image of information imprinted into DNA as a linear, four-letter code of chemical building blocks. Just as no one thinks about how the data in your computer is physically arranged in its microchips, so our view of genetics is largely blind to the way the DNA strands that hold our genes are folded up.

But here’s an instance where an older analogy with computers might serve us better. In the days when data was stored on magnetic tape, you had to worry about whether the tape could actually be fed over the read-out head: if it got tangled, you couldn’t get at the information.

In living cells, DNA certainly is tangled – otherwise the genome couldn’t be crammed inside. In humans and other higher organisms, from insects to elephants, the genetic material is packaged up in several chromosomes.

The issue isn’t, however, simply whether or not this folding leaves genes accessible for reading. For the fact is that there is a kind of information encoded in the packaging itself. Because genes can be effectively switched off by tucking them away, cells have evolved highly sophisticated molecular machinery for organizing and altering the shape of chromosomes. A cell’s behaviour is controlled by manipulations of this shape, as much as by what the genes themselves ‘say’. That’s clear from the fact that genetically identical cells in our body carry out completely different roles – some in the liver, some in the brain or skin.

The fact that these specialized cells can be returned to a non-specialized state capable of giving rise to cells of any function – as shown, for example, by the cloning of Dolly the sheep from a mammary cell – indicates that the genetic switching induced by shape changes and other modifications of our chromosomes is at least partly reversible. The medical potential of getting cells to re-commit to new types of behaviour – in cloning, stem-cell therapies and tissue engineering – is one of the prime reasons why it’s important to understand the principles behind the organization of folding and shape in our chromosomes.

In pursuit of that goal, Tom Sexton, Giacomo Cavalli and their colleagues at the Institute of Human Genetics in Montpellier, France, in collaboration with a team led by Amos Tanay of the Weizmann Institute of Science in Israel, have started by looking at the fruitfly genome. That’s because it is smaller and simpler than the human genome (though not so small or simple as to be irrelevant to it), and also because the fly is genetically the best studied and understood of higher creatures. A new paper unveiling a three-dimensional map of the fly’s genome is therefore far from the arcane exercise it might seem – it’s a significant step in revealing how genes really work.

Scientists usually explore the shapes of molecules using techniques for taking microscopic snapshots: electron microscopy, as well as crystallography, which deduces structure from the way beams of X-rays, electrons or neutrons are scattered by molecules stacked into crystals. But these methods are hard or impossible to apply to molecular structures as complex as chromosomes. Sexton and colleagues use a different approach: a method that reveals which parts of a genome sit close together. This allows the entire map to be patched together piece by piece.
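
Schematically, the bookkeeping behind such a map might look like the little Python cartoon below – my own sketch with made-up numbers, not the authors’ actual pipeline:

from collections import Counter

# Each detection reports a pair of genome segments found close together;
# tallying the pairs gives a contact matrix from which the spatial
# organization of the genome can be inferred.
detected_pairs = [(0, 1), (0, 1), (1, 2), (0, 3), (2, 3), (2, 3)]  # hypothetical data
counts = Counter(tuple(sorted(p)) for p in detected_pairs)

n_segments = 4
contact_matrix = [
    [counts.get((min(i, j), max(i, j)), 0) for j in range(n_segments)]
    for i in range(n_segments)
]
for row in contact_matrix:
    print(row)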

It’s no surprise that the results show the fruitfly genome to be carefully folded and organized, rather than just scrunched up any old how. But the findings put flesh on this skeletal picture. The chromosomes are organized on many levels, rather like a building or a city. There are ‘departments’ – clusters of genes – that do particular jobs, sharply demarcated from one another by boundaries somewhat equivalent to gates or stairwells, where ‘insulator’ proteins clinging to the DNA serve to separate one domain from the next. And inactive genes are often grouped together, like disused shops clustered in a run-down corner of town.

What’s more, the distinct physical domains tend to correspond with parts of the genome that are tagged with chemical ‘marker’ groups, which can modify the activity of genes, rather as if buildings in a particular district of a city all have yellow-painted doors. There’s evidently some benefit for the smooth running of the cell in having a physical arrangement that reflects and reinforces this chemical coding.

It will take a lot more work to figure out how this three-dimensional organization controls the activity of the genes. But the better we can get to grips with the rules, the more chance we will have of imposing our own plans on the genome – silencing or reawakening genes not, as in current genetic engineering, by cutting, pasting and editing the genetic text, but by using origami to hide or reveal it.

Reference: T. Sexton et al., Cell 148, 458-472 (2012); doi:10.1016/j.cell.2012.01.010

Tuesday, March 13, 2012

Under the radar

I have begun to write a regular column for a new BBC sci/tech website called BBC Future. The catch is that, as it is funded (not for profit) by a source other than the licence fee, you can’t view it from the UK. If you’re not in the UK, you should be able to see the column here. It is called Under the Radar, and will aim to highlight papers/work that, for one reason or another (as described below), would be likely to be entirely ignored by most science reporters. The introductory column, the pre-edited version of which is below, starts off by setting out the stall. I have in fact had three or four pieces published there so far, but will space them out a little over the next few posts.
_____________________________________________________________________

Reading science journalism isn’t, in general, an ideal way to learn about what goes on in science. Almost by definition, science news dwells on the exceptional, on the rare advances that promise (even if they don’t succeed) to make a difference to our lives or our view of the universe. But while it’s always fair to confront research with the question ‘so what?’, and while you can hardly expect anyone to be interested in the mundane or the obscure, the fact is that behind much if not most of what is done by scientists lies a good, often extraordinary, story. Yet unless they happen to stumble upon some big advance (or at least, an advance that can be packaged and sold as such), most of those stories are never told.

They languish beneath the forbidding surface of papers published by specialized journals, and you’d often never guess, to glance at them, that they have any connection to anything useful, or that they harbour anything to spark the interest of more than half a dozen specialists in the world. What’s more, science then becomes presented as a succession of breakthroughs, with little indication of the difficulties that intervene between fundamental research and viable applications, or between a smart idea and a proof that it’s correct. In contrast, this column will aim to unearth some of those buried treasures and explain why they’re worth polishing.

Another reason why much of the interesting stuff gets overlooked is that good ideas rarely succeed all at once. Many projects get passed over because at first they haven’t got far enough to cross a reporter’s ‘significance threshold’, and then when the work finally gets to a useful point, it’s deemed no longer news because much of it has been published already.

Take a recent report by Shaoyi Jiang, a chemical engineer at the University of Washington in Seattle, and his colleagues in the Germany-based chemistry journal Angewandte Chemie. They’ve made an antimicrobial polymer coating which can be switched between a state in which it kills bacteria (eliminating 99.9% of sprayed-on E. coli) and one where it shrugs off the dead cells and resists the attachment of new ones. That second trick is a valuable asset for a bug-killing film, since even dead bacteria can trigger inflammation.

The thing is, they did this already three years ago. But there’s a key difference now. Before, the switching was a one-shot affair: once the bacteria were killed and removed, you couldn’t get the bactericidal film back. So if more bacteria do slowly get a foothold, you’re stuffed.

That’s why the researchers have laboured to make their films fully reversible, which they’ve achieved with some clever chemistry. They make a polymer layer sporting dangling molecular ‘hairs’ like a carpet, each hair ending in a ring-shaped molecule deadly to bacteria. If the surface is moistened with water, the ring springs open, transformed into a molecular group to which bacteria can’t easily stick. Just add a weak acid – acetic acid, basically vinegar – and the ring snaps closed again, regenerating a bactericidal surface as potent as before.

This work fits with a growing trend to make materials ‘smart’ – able to respond to changes in their environment. Time was when a single function was all you got: a non-adhesive ‘anti-fouling’ film, say, or one that resists corrosion or reduces light reflection (handy for solar cells). But increasingly, we want materials that do different things at different times or under different conditions. Now there’s a host of such protean substances: materials that can be switched between transparent and mirror-like, say, or between water-wettable and water-repelling.

Another attraction of Jiang’s coating is that these switchable molecular carpets can in principle be coated onto a wide variety of different surfaces – metal, glass, plastics. The researchers say that it might be used on hospital walls or on the fabric of military uniforms to combat biological weapons. That sort of promise is generally where the journalism stops and the hard work begins, to turn (or not) this neat idea into mass-produced materials that are reliable, safe and affordable.

Reference: Z. Cao et al., Angewandte Chemie International Edition online publication doi:10.1002/anie.201106466.

Thursday, March 08, 2012

Science and politics cannot be unmixed

One of the leaders in this week’s Nature is mine; here’s the original draft.
____________________________________________________

Paul Nurse will not treat his presidency of the Royal Society as an ivory tower. He has made it clear that he considers that scientists have duties to fulfil and battles to fight beyond the strictly scientific, for example to “expose the bunkum” of politicians who abuse and distort science. This social engagement was evident last week when Nurse delivered the prestigious Dimbleby Lecture, instituted in memory of the British broadcaster Richard Dimbleby. Previous scientists to give the lecture have included George Porter, Richard Dawkins and Craig Venter.

Nurse identified support for the National Health Service, the need for an immigration policy that attracts foreign scientists, and inspirational science teaching in primary education as some of the priorities for British scientists. These and many of the other issues that he raised, such as increasing scientists’ interactions with industry, commerce and the media, and resisting politicization of climate-change research, are relevant around the globe.

All the more reason not to misinterpret Nurse’s insistence on a separation of science and politics: as he put it more than once, “first science, then politics”. What Nurse rightly warned against here is the intrusion of ideology into the interpretation and acceptance of scientific knowledge, as for example with the Soviet Union’s support of the anti-Mendelian biology of Trofim Lysenko. Given recent accounts of political (and politically endorsed commercial) interference in climate research in the US (see Nature 465, 686; 2010), this is a timely reminder.

But it is all too easy to render this equation too simplistic. For example, Nurse also cited the rejection of Einstein’s “Jewish” relativistic physics by Hitler. But that is not quite how it was. “Jewish physics” was a straw man invented by the anti-Semitic and pro-Nazi physicists Johannes Stark and Philipp Lenard, partly because of professional jealousies and grudges. The Nazi leaders were, however, largely indifferent to what looked like an academic squabble, and in the end lost interest in Stark and Lenard’s risible “Aryan physics” because they needed a physics that actually worked.

Therein lies one reason to be sceptical of the common claim, repeated by Nurse, that science can only flourish in a free society. Historians of science in Nazi Germany such as Kristie Macrakis (in Surviving the Swastika; 1993) have challenged this assertion, which is not made true simply because we would like it to be so. Authoritarian regimes are perfectly capable of putting pragmatism before ideology. The scientific process itself is not impeded by state control in China – quite the contrary – and the old canard that Chinese science lacks innovation and daring is now transparently nonsense. During the Cold War, some Soviet science was vibrant and bold. Even the most notorious example of state repression of science – the trial of Galileo – is apt to be portrayed too simplistically as a conflict of faith and reason rather than a collision of personalities and circumstances (none of which excuses Galileo’s scandalous persecution).

There is a more edifying lesson to be drawn from Nazi Germany that bears on Nurse’s themes. This is that, while political (and religious) ideology has no place in deciding scientific questions, the practice of doing science is inherently political. In that sense, science can never come before politics. Scientists enter into a social contract, not least because they are not their own paymasters. Much if not most scientific research has social and political implications, often broadly visible from the outset. In times of economic and political crisis (like these), scientists must respond intellectually and professionally, and not merely by safeguarding their funding, important though that is.

The consequences of imagining that science can remain aloof from politics became acutely apparent in Germany in 1933, when the consensus view that politics was, as Heisenberg put it, an unseemly “money business” meant that most scientists saw no reason to mount concerted resistance to the expulsion of Jewish colleagues – regarded as a political rather than a moral matter. This ‘apolitical’ attitude can now be seen as a convenient myth that led to acquiescence and made it easy for the German scientists to be manipulated. It would be naïve to imagine that only totalitarianism could create such a situation.

The rare and most prominent exception to ‘apolitical’ behaviour was Einstein, whose outspokenness dismayed even his principled friends Max Planck and Max von Laue. “I do not share your view that the scientist should observe silence in political matters”, he told them. “Does not such restraint signify a lack of responsibility?” There was no hint of such a lack in Nurse’s talk. But we must take care to distinguish the political immunity of scientific reasoning from the political dimensions and obligations of doing science.

Wednesday, March 07, 2012

The unavoidable cost of computation

Here’s the pre-edited version of my latest news story for Nature. I really liked this work. I was lucky to meet Rolf Landauer before he died, and discovered him to be one of those people who are so genial, wry and unaffected that you aren’t awed by how phenomenally clever they are. He was also extremely helpful when I was preparing The Self-Made Tapestry, setting me straight on the genesis of notions about dissipative structures, accounts of which sometimes assign the credit in the wrong places. Quite aside from that, it is worth making clear that this is in essence the first experimental proof of why Maxwell’s demon can’t do its stuff.
________________________________________________

Physicists have proved that forgetting is the undoing of Maxwell’s demon.

Forgetting always takes a little energy. A team of scientists in France and Germany has now demonstrated exactly how little.

Eric Lutz of the University of Augsburg and his colleagues have found experimental proof of a long-standing claim that erasing information can never be done for free. They present their result in Nature today [1].

In 1961, physicist Rolf Landauer argued that to reset one bit of information – say, to set a binary digit to zero regardless of whether it is initially 1 or 0 – must release at least a certain minimum amount of heat, proportional to the temperature, into the environment.
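
In symbols – the standard statement of Landauer’s limit, though it isn’t spelled out in the piece – the minimum heat released on erasing one bit is

$$E_{\min} = k_{\mathrm{B}} T \ln 2,$$

where $k_{\mathrm{B}}$ is Boltzmann’s constant and $T$ the absolute temperature; at room temperature that comes to about $3 \times 10^{-21}$ joules.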

“Erasing information compresses two states into one”, explains Lutz, currently at the Free University of Berlin. “It is this compression that leads to heat dissipation.”

Landauer’s principle implies a limit on how low the energy dissipation – and thus consumption – of a computer can be. Resetting bits, or equivalent processes that erase information, is essential for operating logic circuits. In effect, these circuits can only work if they can forget – for how else could they perform a second calculation once they have done a first?

The work of Lutz and colleagues now appears to confirm that Landauer’s theory was right. “It is an elegant laboratory realization of Landauer's thought experiments”, says Charles Bennett, an information theorist at IBM Research in Yorktown Heights, New York, and Landauer’s former colleague.

“Landauer's principle has been kicked about by theorists for half a century, but to the best of my knowledge this paper describes the first experimental illustration of it”, agrees Christopher Jarzynski, a chemical physicist at the University of Maryland.

The result doesn’t just verify a practical limit on the energy requirement of computers. It also confirms the theory that safeguards one of the most cherished principles of physical science: the second law of thermodynamics.

This law states that heat will always move from hot to cold. A cup of coffee on your desk always gets cooler, never hotter. It’s equivalent to saying that entropy – the amount of disorder in the universe – always increases.

In the nineteenth century, the Scottish scientist James Clerk Maxwell proposed a scenario that seemed to violate this law. In a gas, hot molecules move faster. Maxwell imagined a microscopic intelligent being, later dubbed a demon, that would open and shut a trapdoor between two compartments to selectively trap ‘hot’ molecules in one of them and cool ones in the other, defying the tendency for heat to spread out and entropy to increase.
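
The demon’s trick is easy to caricature in code. Here is a toy Python simulation of the sorting step – a sketch of the thought experiment only, which deliberately ignores the cost of the demon’s memory that the rest of this story turns on:

import random

# 'Molecules' are represented by their kinetic energies; the demon sends
# the faster-than-average ones to one compartment and the rest to the
# other, apparently creating a temperature difference for free.
random.seed(1)
energies = [random.gauss(0, 1) ** 2 for _ in range(10000)]
threshold = sorted(energies)[len(energies) // 2]  # median energy
hot = [e for e in energies if e > threshold]
cold = [e for e in energies if e <= threshold]
print(f"mean energy, hot compartment:  {sum(hot) / len(hot):.2f}")
print(f"mean energy, cold compartment: {sum(cold) / len(cold):.2f}")

The catch, as Landauer saw, is that every accept-or-reject decision leaves a bit in the demon’s memory that must eventually be erased.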

Landauer’s theory offered the first compelling reason why Maxwell’s demon couldn’t do its job. The demon would need to erase (‘forget’) the information it used to select the molecules after each operation, and this would release heat and increase entropy, more than counterbalancing the entropy lost by the demon’s legerdemain.

In 2010, physicists in Japan showed that information can indeed be converted to energy by selectively exploiting random thermal fluctuations, just as Maxwell’s demon uses its ‘knowledge’ of molecular motions to build up a reservoir of heat [2]. But Jarzynski points out that the work also demonstrated that selectivity requires the information about fluctuations to be stored.

He says that the experiment of Lutz and colleagues now completes the argument against using Maxwell’s demon to violate the second law, because it shows that “the eventual erasure of this stored information carries a thermodynamic penalty” – which is Landauer's principle.

To test this principle, the researchers created a simple two-state bit: a single microscopic silica particle, 2 micrometres across, held in a ‘light trap’ by a laser beam. The trap contains two ‘valleys’ where the particle can rest, one representing a 1 and the other a 0. It could jump between the two if the energy ‘hill’ separating them is not too high.

The researchers could control this height by adjusting the power of the laser. And they could ‘tilt’ the two valleys to tip the bead into one of them, resetting the bit, by moving the physical cell containing the bead slightly out of the laser’s focus.

By very accurately monitoring the position and speed of the particle during a cycle of switching and resetting the bit, they could calculate how much energy was dissipated. Landauer’s limit applies only when the resetting is done infinitely slowly; otherwise, the energy dissipation is greater.

Lutz and colleagues found that, as they used longer switching cycles, the dissipation got smaller, but that this value headed towards a plateau equal to the amount predicted by Landauer.
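
As a back-of-envelope check of where that plateau should sit (my own arithmetic, not a figure from the paper):

import math

# Landauer limit: minimum heat released per erased bit, k_B * T * ln 2.
k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # room temperature, K
E_min = k_B * T * math.log(2)
print(f"Landauer limit at {T:.0f} K: {E_min:.2e} J")  # about 2.9e-21 J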

At present, other inefficiencies mean that computers dissipate at least a thousand times more energy per logic operation than the Landauer limit. This energy dissipation heats up the circuits, and imposes a limit on how small and densely packed they can be without melting. “Heat dissipation in computer chips is one of the major problems hindering their miniaturization”, says Lutz.

But this energy consumption is getting ever lower, and Lutz and colleagues say that it’ll be approaching the Landauer limit within the next couple of decades. Their experiment confirms that, at that point, further improvements in energy efficiency will be prohibited by the laws of physics. “Our experiment clearly shows that you cannot go below Landauer’s limit”, says Lutz. “Engineers will soon have to face that”.

Meanwhile, in fledgling quantum computers, which exploit the rules of quantum physics to achieve greater processing power, this limitation is already being confronted. “Logic processing in quantum computers already is well within the Landauer regime, and one has to worry about Landauer's principle all the time”, says physicist Seth Lloyd of the Massachusetts Institute of Technology.

References
1. Bérut, A. et al. Nature 483, 187–189 (2012).
2. Toyabe, S., Sagawa, T., Ueda, M., Muneyuki, E. & Sano, M. Nat. Phys. 6, 988–992 (2010).