Tuesday, October 30, 2012

Atheists and monsters

I have a letter in New Humanist responding to Francis Spufford’s recent defence of his Christian belief, a brief résumé of the case he lays out in his new book. The letter was truncated to the second paragraph, my first and main point having been made in the preceding letter from Leo Pilkington. Here it is anyway.

And while I’m here: I have some small contributions in a nice documentary on Channel 4 tomorrow about Mary Shelley’s Frankenstein. I strolled past Boris Karloff’s blue plaque today, as I often do; it’s on the wall above my local chippy. He was a Peckham Rye boy named William Henry Pratt. Happy Halloween.

______________________________________________________________

Since I’m the sort of atheist who believes that we can and should get on with religious folk, and because I have such high regard for Francis Spufford, I am in what I suspect is the minority of your readers in agreeing with the essence and much of the substance of what he says. It’s a shame, though, that he slightly spoils his case by repeating the spurious suggestion that theists and atheists are mirror images because of their yes/no belief in God. The null position for a proposition that an arbitrary entity exists for which there is no objective evidence or requirement and no obvious way of testing is not to shrug and say “well, I guess we just don’t know either way.” We are back to Russell’s teapot orbiting the Sun. The reason why the teapot argument won’t wash for religious belief is, as Spufford rightly says, because a belief in God is about so many other feelings, values and notions (including doubt and uncertainty), not ‘merely’ the issue of whether one can make the case objectively. While this subjectivity throws the likes of Sam Harris into paroxysms, it’s a part of human experience that we have to deal with.

Spufford is also a little too glib in dismissing the anger that religion arouses. The Guardian’s Comment is Free is a bad example, being a pathological little ecosystem to itself. Some of that anger stems from religious abuses to human rights and welfare, interference in public life, denial of scientific evidence, and oppression, conformity and censure. All of these malaises will, I am sure, be as deplored by Spufford as they are by non-believers. When religions show themselves capable of putting their own houses in order, it becomes so much easier for atheists to acknowledge (as we should) the good that they can also offer to believers and non-believers alike.

Thursday, October 25, 2012

Balazs Gyorffy (1938-2012)


I just heard that the solid-state physicist Balazs Gyorffy, an emeritus professor at Bristol, has died from cancer after a short illness. Balazs was a pioneer of first-principles calculations of electronic structure in alloys, and contributed to the theory of superconductivity in metals. But beyond his considerable scientific achievements, Balazs was an inspirational person, whose energy and passion made you imagine he would be immortal. He was a former Olympic swimmer, and was apparently swimming right up until his illness made it impossible. He was interested in everything, and was a wonderfully generous and supportive man. His attempts to teach me about Green’s functions while I was at Bristol never really succeeded, but he was extremely kind with his time and advice on Hungary when I was writing my novel The Sun and Moon Corrupted. Balazs was a refugee from the 1956 Hungarian uprising, and was an external member of the Hungarian Academy of Sciences. He was truly a unique man, and I shall be among the many others who will miss him greatly.

An old look at Milan


I have no reason for posting this old photo of Milan Cathedral except that I found it among a batch of old postcards (though the photo's an original) and think it is fabulous. I tend to like my cathedrals more minimalist, but this one is fabulously over the top.

Why cancer is smart

This is my most recent piece on BBC Future, though another goes up tomorrow.

_______________________________________________________________

Cancer is usually presented as a problem of cells becoming mindless replicators, proliferating without purpose or restraint. But that underestimates the foe, according to a new paper, whose authors argue that we’ll stand a better chance of combating it if we recognize that cancer cells are a lot smarter and operate as a cooperating community.

One of the authors, physicist Eshel Ben-Jacob of Tel Aviv University in Israel, has argued for some time that many single-celled organisms, whether they are tumour cells or gut bacteria, show a rudimentary form of social intelligence – an ability to act collectively in ways that adapt to the prevailing conditions, learn from experience and solve problems, all with the ‘aim’ of improving their chances of survival. He even believes there is evidence that they can modify their own genomes in beneficial ways.

Some of these ideas are controversial, but others are undeniable. One of the classic examples of a single-celled cooperator, the soil-dwelling slime mould Dictyostelium discoideum, survives a lack of warmth or moisture because its cells communicate with one another and coordinate their behaviour. Some cells send out pulses of a chemical attractant which diffuse into the environment and trigger other cells to move towards them. The community of cells then forms into complex patterns, eventually clumping together into multicelled bodies that look like weird mushrooms. Some of these cells become spores, entering into a kind of suspended animation until conditions improve.

Many bacteria can engage in similar feats of communication and coordination, which can produce complex colony shapes such as vortex-like circulating blobs or exotic branching patterns. These displays of ‘social intelligence’ help the colonies survive adversity, sometimes to our cost. Biofilms, for example – robust, slimy surface coatings that harbour bacteria and can spread infection in hospitals – are manufactured through the cooperation of several different species.

But the same social intelligence that helps bacteria thrive can be manipulated to attack pathogenic varieties. As cyberwarfare experts know, disrupting communications can be deadly. Some strategies for protecting against dangerous bacteria now target their cell-to-cell communications, for example by introducing false signals that might induce cells to eat one another or to dissolve biofilms. So it pays to know what they’re saying to one another.

Ben-Jacob, along with Donald Coffey of the Johns Hopkins University School of Medicine in Baltimore and ‘biological physicist’ Herbert Levine of Rice University in Houston, Texas, thinks that we should be approaching cancer therapy this way too: not by aiming to kill off tumour cells with lethal doses of poisons or radiation, but by interrupting their conversations.

There are several indications that cancer cells thrive by cooperating. One trick that bacteria use for invading new territory, including other organisms, is to use a mode of cell-to-cell communication called quorum sensing to determine how densely populated their colony is: above a certain threshold, they might have sufficient strength in numbers to form biofilms or infect a host. Researchers have suggested that this process is similar to the way cancer cells spread during metastasis. Others think that group behaviour of cancer cells might explain why they can become so quickly resistant to drugs.

Cancer cells are very different from bacteria: they are rogue human cells, so-called eukaryotic cells which have a separate compartment for the genetic material and are generally deemed a more advanced type of cell than ‘primitive’ bacteria, in which the chromosomes are just mixed up with everything else. Yet it’s been suggested that, when our cells turn cancerous and the normal processes regulating their growth break down, more primitive ‘single-celled’ styles of behaviour are unleashed.

Primitive perhaps – but still terrifyingly smart. Tumours can trick the body into making new blood vessels to nourish them. They can enslave healthy cells and turn them into decoys to evade the immune system. They seem even able to fool the immune system into helping the cancer to develop. It’s still not clear exactly how they do some of these things. The anthropomorphism that makes cancer cells evil enemies to be ‘fought’ risks distorting the challenge, but it’s not hard to see why researchers succumb to it.

Cancer cells resistant to drugs can and do emerge at random by natural selection in a population. But they may also have tricks that speed up mutation and boost the chances of resistant strains appearing. And they seem able to generate dormant, spore-like forms, as Dictyostelium discoideum and some bacteria do, that produce ‘time-bomb’ relapses even after cancer traces have disappeared in scans and blood tests.

So what’s to be done? Ben-Jacob and colleagues say that if we can crack the code of how cancer cells communicate, we might be able to subvert it. These cells seem to exchange chemical signals, including short strands of the nucleic acid RNA, which is known to control genes. They can even genetically modify and reprogramme healthy cells by dispatching segments of DNA. The researchers think that it might be possible to turn this crosstalk of tumour cells against them, inducing the cells to die or split apart spontaneously.

Meanwhile, if we can figure out what triggers the ‘awakening’ of dormant cancer cells, they might be tricked into revealing themselves at the wrong time, after the immune system has been boosted to destroy them in their vulnerable, newly aroused state. Ben-Jacob and colleagues suggest experiments that could probe how this switch from dormant to active cells comes about. Beyond this, perhaps we might commandeer harmless or even indigenous bacteria to act as spies and agent provocateurs, using their proven smartness to outwit and undermine that of cancer cells.

The ‘warfare’ analogy in cancer treatment is widely overplayed and potentially misleading, but in this case it has some value. It is often said that the nature of war has changed over the past several decades: it’s no longer about armies, superior firepower, and battlefield strategy, but about grappling with a more diffuse foe – indeed one loosely organized into ‘cells’ – by identifying and undermining channels of recruitment, communication and interaction. If it means anything to talk of a ‘war on cancer’, then perhaps here too we need to think about warfare in this new way.

Reference: E. Ben-Jacob, D. S. Coffey & H. Levine, Trends in Microbiology 20, 403-410 (2012).

Tuesday, October 16, 2012

Sweets in Boots

Here’s a piece I just wrote for the Guardian’s Comment is Free. Except in this case it isn’t, because comments have been prematurely terminated. That may be rectified soon, if you want to join the rush.

________________________________________

In the 13th century, £164 was an awful lot of money. But that’s how much the ailing Edward I spent on making over two thousand pounds in weight of medicinal syrups. Sugar was rare, and its very sweetness was taken as evidence of its medicinal value. Our word ‘treacle’ comes from theriac, a medieval cure-all made from roasted vipers, which could prevent swellings, clear intestinal blockages, remove skin blemishes and sores, cure fevers, heart trouble, dropsy, epilepsy and palsy, induce sleep, improve digestion, restore lost speech, convey strength and heal wounds. No wonder town authorities monitored the apothecaries who made it, to make sure they didn’t palm people off with substandard stuff.

We like a good laugh at medieval medicine, don’t we? Then we walk into the sweetie shops for grown-ups known as Boots to buy lozenges, pastilles and syrups (hmm, suspiciously olde words, now that I think about it) for our aches, coughs and sneezes. Of course, some of us consider this sugaring of the pill to be prima facie evidence of duping by the drug companies, and we go instead for the bitter natural cures, the Bach remedies and alcoholic tinctures which, like the medieval syphilis cure called guaiac, are made from twigs and wood, cost the earth, and taste vile.

Each to his own. I quite like the sugar rush. And I’m not surprised that Edward I did – on a medieval diet, a spoonful of sugar would probably work wonders for your metabolism; you’d feel like a new person for a few hours, until your dropsy kicked in again. This, I surmise, must be why there is Benylin in my medicine cabinet. Because surely I didn’t – did I? – buy it because I thought it would make my cough any better?

An ‘expert panel’ convened by Which? Magazine has just announced that “We spend billions on over-the-counter pharmacy products each year but we’ve found evidence of popular products making claims that our experts judged just aren’t backed by sufficient evidence.” Cough syrups are among the worst offenders. They sell like crazy in winter, are mostly sugar (including treacle), and probably do sod all, despite the surreally euphemistic claims of brands such as Benylin that they will make your cough “more productive”.

Let’s be fair – Boots, at least, never claimed otherwise. Its “WebMD” pages admit that “The NHS says there’s not much scientific evidence that cough medicines work… The NHS says there are no shortcuts with coughs caused by viral infections. It just takes time for your body to fight off the infection.” Sure, if the syrup contains paracetamol, it might ease your aching head; if there’s any antihistamine in there, your streaming nose and eyes might dry up a bit. But if you want to soothe your throat, honey and lemon is at least as good – the Guardian’s told you that already.

The Which? report also questioned evidence that Seven Seas Jointcare tablets, Adios Herbal Slimming Tablets and Bach Rescue Remedy spray (to “restore inner calm”) do any good. Are you shocked yet?

Consumers deserve protection against charlatans, for sure. But as far as the over-the-counter pharmacy shelves are concerned, you might as well be expecting scientific evidence for palm reading. Can we, in this post-Ben Goldacre age, now ditch the simplistic view that medicine is about the evidence-based products of the pharmaceutical industry versus the crystal healers? That modern conceit ignores the entire history of medicine, in which folk belief, our wish for magical remedies, placebos, diet, fraud, abuse of authority, and the pressures of commerce have always played at least as big a role as anything resembling science. Modern drugs have made life longer and more bearable, but drug companies are no more above fixing the ‘evidence’ than some alternative cures are above ignoring it.

We’re right to be outraged at Big Pharma misbehaving, especially when their evasions and elisions concern drugs with potentially serious side-effects. But the sniffles and coughs that send us grazing in Boots are the little slings and arrows of life, and all we’re doing there is indulging in some pharmacological comfort eating. I’m a fan of analgesics, and my summers are made bearable by antihistamines, but a lot of the rest is merely lifestyle-targeted placebo. There’s no harm in that, but if we are going to be affronted when we find that those saccharine pills and potions won’t cure us, we’ve misunderstood the nature of the transaction.

The nobel art of matchmaking

I have a Nature news story on the economics Nobel prize. Here’s the pre-edited version.

________________________________________________

Two economists are rewarded for the theory and application of how to design markets for money-free transactions

The theory and practice of matching resources to those who need them, in cases where conventional market forces cannot determine the outcome, has won the Nobel prize in economics for Lloyd Shapley of the University of California at Los Angeles and Alvin Roth of Harvard University.

Their work on matching “has applications everywhere”, says economist Atila Abdulkadiroglu of Duke University in Durham, North Carolina. “Shapley's work laid the groundwork, and Roth's work brought the theory to life.”

“This is a terrific prize to a pair of very deserving scholars”, says economist Paul Milgrom of Stanford University in California.

The work of Shapley and Roth shows how to find optimal matches between people or institutions ‘trading’ in commodities that money can’t buy: how to allocate students to schools or universities, say, or to match organ donors to patients.

Universities can’t determine which students enrol simply by setting their fees arbitrarily high, since these are capped. And payments for organ donation are generally prohibited on ethical grounds. In such situations, how can one find matches that are stable, in the sense that no one considers they can do better by seeking a different match?

In the 1960s Shapley and his coworker David Gale analysed the most familiar match-making problem: marriage. They asked how ten men and ten women could be matched such that none would see any benefit in breaking the partnership to make a better match.

The answer was to let the members of one group (say, the men) each propose to their preferred partner. Each woman provisionally holds on to the best proposal she has received and rejects the rest; the rejected men then propose to their next choice. The process continues until no one wishes to make another proposal, whereupon the provisionally held proposals are finally accepted.

Shapley and Gale (who died in 2008) proved that this process will always lead to stable matching [1]. They also found, however, that it works to the advantage of the choosers – that is, those who make the proposals do better than those receiving them.
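
For readers who like to see the machinery, here is a minimal Python sketch of that deferred-acceptance procedure. The names and preference lists are invented purely for illustration, and this is of course not Shapley and Gale’s own notation.

```python
# A minimal sketch of Gale-Shapley deferred acceptance.
# Proposers propose in order of preference; each receiver provisionally
# holds the best offer received so far and rejects the rest.
# The names and preference lists below are invented for illustration.

def deferred_acceptance(proposer_prefs, receiver_prefs):
    """Return a stable matching as a dict {receiver: proposer}."""
    # rank[r][p]: how receiver r ranks proposer p (lower is better)
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in receiver_prefs.items()}
    free = list(proposer_prefs)                    # proposers not currently held
    next_choice = {p: 0 for p in proposer_prefs}   # index of next receiver to try
    held = {}                                      # receiver -> proposer (provisional)

    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        current = held.get(r)
        if current is None:
            held[r] = p                  # r provisionally holds p's proposal
        elif rank[r][p] < rank[r][current]:
            held[r] = p                  # r prefers p; the previous holder is freed
            free.append(current)
        else:
            free.append(p)               # r rejects p outright
    return held

men = {"adam": ["xena", "yara", "zoe"],
       "bob":  ["yara", "xena", "zoe"],
       "carl": ["xena", "zoe", "yara"]}
women = {"xena": ["bob", "adam", "carl"],
         "yara": ["adam", "bob", "carl"],
         "zoe":  ["adam", "bob", "carl"]}

print(deferred_acceptance(men, women))
# The result is the proposer-optimal stable matching: the asymmetry noted above.
```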

“Without the framework Shapley and Gale introduced, we would not be able to think about these problems in sound theoretical terms”, says Abdulkadiroglu.

However, their work was considered little more than a neat academic result until, about 20 years later, Roth saw that it could be applied to situations in the real world. He found that the US National Resident Matching Program, a clearing house for allocating medical graduates to hospitals, used an algorithm similar to Shapley and Gale’s, which prevented problems caused by the fact that hospitals might need to offer students internships before they even knew which area they were going to specialize in [2].

But he discovered that the same problem in the UK was addressed with quite different matching algorithms in different regions, some of which were stable and some not [3]. His work persuaded local health authorities to abandon inefficient, unstable practices.

Roth also helped to tailor such matching strategies to specific market conditions – for example, to adapt the allocation of students to hospitals to the constraint that, as more women graduated, students might often be looking for places as a couple. And he showed how to make these matching schemes immune to manipulation by either party in the transaction.

Roth and his coworkers also applied the Gale-Shapley algorithm to the allocation of pupils among schools. “He directly employs the theory in real-life problems”, says Abdulkadiroglu. “This is not a trivial task. Life brings in complications and institutional constraints that are difficult to imagine or study within the abstract world of theory.”

Shapley extended his analysis to cases where one of the parties in the transaction is passive, expressing no preferences – for example, in the allocation of student rooms. David Gale devised a scheme for finding a stable allocation, known as the ‘top trading cycle’ method, in which agents are given one object each but can swap them for their preferred choices. Satisfied swappers leave the market, and the others continue swapping until everything has been allocated. In 1974 Shapley and Herbert Scarf showed that this process always leads to stable solutions [4]. Roth has subsequently used this approach to match patients with organ donors.
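
Here, as a rough illustration only, is how the top trading cycle procedure can be sketched in Python. The agents, rooms and preferences are made up; the sketch simply shows the pointing-and-trading logic described above.

```python
# A sketch of Gale's top trading cycle procedure (reported in Shapley & Scarf 1974).
# Each agent starts with one object and points at the current owner of their
# favourite remaining object; agents in a cycle swap along it and leave.
# The agents, objects and preferences below are invented for illustration.

def top_trading_cycles(endowment, prefs):
    """endowment: agent -> object owned; prefs: agent -> objects, best first.
    Returns the final allocation, agent -> object."""
    owner = {obj: agent for agent, obj in endowment.items()}
    allocation = {}
    remaining = set(endowment)
    while remaining:
        # Each remaining agent points to the owner of their top remaining object.
        points_to = {}
        for a in remaining:
            best = next(o for o in prefs[a] if owner[o] in remaining)
            points_to[a] = owner[best]
        # Follow the pointers from any agent until a cycle appears.
        path, a = [], next(iter(remaining))
        while a not in path:
            path.append(a)
            a = points_to[a]
        cycle = path[path.index(a):]
        # Everyone in the cycle receives the object of the agent they point to.
        for a in cycle:
            allocation[a] = endowment[points_to[a]]
        remaining -= set(cycle)
    return allocation

rooms = {"ann": "attic", "ben": "basement", "cat": "corner"}
prefs = {"ann": ["basement", "attic", "corner"],
         "ben": ["attic", "corner", "basement"],
         "cat": ["attic", "basement", "corner"]}
print(top_trading_cycles(rooms, prefs))  # ann and ben swap; cat keeps her room
```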

All of these situations are examples of so-called cooperative game theory, in which the agents seek to align their choices, forming matches and coalitions – as opposed to the more familiar non-cooperative game theory that won Nobels for John Nash (1994), Thomas Schelling (2005) and others, in which agents act independently. “In my view, Shapley has made more than one prize-worthy contribution to game theory”, says Milgrom, “but his work on matching has the greatest economic significance.”

With economic theory signally failing to bring order and stability to the world’s financial markets, it’s notable that the Nobel committee has chosen to reward work that offers practical solutions in ‘markets’ in which money is of little consequence. The work of Shapley and Roth shows that there is room for economic theory outside the ruthless cut-and-thrust of money markets – and perhaps, indeed, that in a more cooperative world it can be more effective.

References
1. Gale, D. & Shapley, L. S. American Mathematical Monthly 69, 9-15 (1962).
2. Roth, A. E. Journal of Political Economy 92, 991-1016 (1984).
3. Roth, A. E. American Economic Review 81, 415-440 (1991).
4. Shapley, L. S. & Scarf, H. Journal of Mathematical Economics 1, 23-37 (1974).

Monday, October 15, 2012

A little help for my friends

It’s sometimes said in defence of J. K. Rowling that even indifferent writing can take children towards better fare. I have no idea, from very limited contact with Rowling, whether that is likely to apply there, but the principle worked for me in the case of Michael Moorcock, though I should say that even when he was working at his fastest and pulpiest in the early 1970s, with Elric doing his angst-ridden thing to keep the wolf from Moorcock’s door, his writing was never actually indifferent but bursting with bonkers energy, always managing to imply that (as with Ballard, in a very different way) there was a hefty mind behind what the garish book covers tried to sell as science-fantasy schlock. And so it was: Jerry Cornelius pointed to Burroughs (William, not Edgar Rice) and modernist experimentation, Behold the Man towards Jung, the Dancers at the End of Time to Peake and Goethe, thus to Dickens and Dostoevsky, and after that you’re on your own. Which kind of means that when Moorcock started writing literary novels like Mother London, that was no more than his fans expected.

Which is perhaps a verbose way of saying that, when my friend Henry Gee garners praise from Moorcock (whom he’d managed to persuade to write a Futures piece for Nature) for his sci-fi trilogy The Sigil, I get a thrill of vicarious pleasure. It’s grand enough already that Henry has conceived of a blockbusting space-opera trilogy, now graced with E. E. ‘Doc’ Smith-style covers and with what seems to be the sort of outrageously cosmic plotline that could only have been hatched by a Lovecraft fan ensconced in the wilds of Cromer (I’ve seen only the first volume, so I don’t know where the story ends up, only that this is a Grand Concept indeed). But to see it praised by Moorcock, Kim Stanley Robinson and Ian Watson is a great pleasure. And so here, because my blog is among other things unashamedly a vehicle for puffing my friends, is an advert for Henry’s deliciously retro literary triple album.

And while I am singing praises, I am long overdue in giving a plug for the album Raga Saga, which features string quartet arrangements of South Indian classical music by V. S. Narasimhan and V. R. Sekar. The CD’s title is perhaps dodgy; the rest is certainly not. This is a fascinating blend of Indian classical tradition and Western orchestration. I’m nervous that my unfamiliarity with this tradition – I know a little about the theory behind some of this music, but have very little exposure to it – leaves me vulnerable to that common Western trait of revelling in a vague “exoticism” without any deep appreciation of what is actually happening in the music. I’ve no doubt my enjoyment has an element of this. But it does seem to me that this particular example of “east meets west” brings something interesting and valuable to both. Narasimhan’s brother Vasantha Iyengar told me about this recording, and he says that:
“My brother lives in Chennai, India and is a professional violinist and composer. He works for the film industry to make a living but is passionate about and has been trained in Western and Indian music. Because of this combination, he always heard the beautiful Indian melodies with harmony in his head and started trying out this idea. He has been working on this kind of style since the year 2000. In 2005, to his utter pleasant surprise, he got email from world class musicians, Yo Yo Ma, Zubin Mehta and his violinist hero, Vengerov, appreciating his quartet work. He has been very encouraged about continuing with this pioneering work. It is still difficult to spread the message that great music can be made with this kind of blend and of course to get attention from companies like Sony for a recording. So my son, just out of business school has taken it upon himself to help the uncle out to bring out his vision: he has built the website stringtemple and is doing his best.”

I hope this effort is still working out: it deserves to.

Sunday, October 14, 2012

Quantum optics strikes again

Here’s a piece I wrote for the Prospect blog on the physics Nobel. For my Prospect article on the renaissance of interest in the foundations of quantum theory, see here.

____________________________________________________________

There’s never been a better time to be a quantum physicist. The foundations of quantum theory were laid about a hundred years ago, but the subject is currently enjoying a renaissance as modern experimental techniques make it possible to probe fundamental questions that were left hanging by the subject’s originators, such as Albert Einstein, Niels Bohr, Erwin Schrödinger and Werner Heisenberg. We are now not only getting to grapple with the alleged weirdness of the quantum world, but also putting its paradoxical principles to practical use.

This is reflected in the fact that three physics Nobel prizes have been awarded since 1997 in the field of quantum optics, the most recent going this year to Serge Haroche of the Collège de France in Paris and David Wineland of the National Institute of Standards and Technology in Boulder, Colorado. It’s ‘quantum’ because the work of these two scientists is concerned with examining the way atoms and other small particles are governed by quantum rules. And it’s ‘optics’ because they use light to do it. Indeed, light is itself described by quantum physics, being composed (as Einstein’s Nobel-winning work of 1905 showed) of packets of energy called photons. The word ‘quantum’ was coined by Max Planck in 1900 to describe this discrete ‘graininess’ of the world at the scale of atoms.

The basic principle of a quantum particle is that its energy is constrained to certain discrete amounts, rather than being changeable gradually. Whereas a bicycle wheel can spin at any speed (faster speeds corresponding to more energy), a quantum wheel may rotate only at several distinct speeds. And it may jump between them only if supplied with the right amount of energy. Atoms make these ‘quantum jumps’ between energy states when they absorb photons with the right energy – this in turn being determined by the photon’s wavelength (light of different colours has different wavelengths).
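
To put rough numbers on this (my own back-of-envelope figures, not anything from the prize announcement): the energy of a quantum jump fixes the wavelength of the photon that can drive it, via E = hc/λ.

```python
# Rough illustration (my own figures, not from the article): the photon energy
# needed to drive a quantum jump, E = h*c/wavelength, for visible light.
h = 6.626e-34   # Planck's constant, J s
c = 2.998e8     # speed of light, m/s
eV = 1.602e-19  # joules per electronvolt

for wavelength_nm in (450, 550, 650):          # blue, green, red light
    E = h * c / (wavelength_nm * 1e-9)
    print(f"{wavelength_nm} nm photon: {E/eV:.2f} eV")
# Visible photons carry a couple of electronvolts: just the scale of the
# energy gaps between the outer electron states of an atom.
```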

Scientists since Planck’s time have been using light to study these quantum states of atoms. The trouble is that this entails changing the state in order to observe it. Haroche and Wineland have pioneered methods of probing quantum states without destroying them. That’s important not just to examine the fundamentals of quantum theory but for some applications of quantum behaviour, such as high-precision atomic clocks (central to GPS systems) and superfast quantum computers.

Wineland uses ‘atom traps’ to capture individual electrically charged atoms (ions) in electric fields. One counter-intuitive conclusion of quantum theory is that atoms can exist in two or more different quantum states simultaneously, called superpositions. These are generally very delicate, and destroyed when we try to look at them. But Wineland has mastered ways to probe superpositions of trapped ions with laser light without unravelling them. Haroche does the opposite: he traps individual photons of light between two mirrors, and fires atoms through the trap that detect the photon’s quantum state without disturbing it.

‘Reading out’ quantum states non-destructively is a trick needed in quantum computers, in which information is encoded in quantum superpositions so that many different states can be examined at once – a property that would allow some problems to be solved extremely fast. Such a ‘quantum information technology’ is steadily becoming reality, and it is doubtless this combination of fundamental insight and practical application that has made quantum optics so popular with Stockholm. Quantum physics might still seem other-worldly, but we’ll all be making ever more use of it.

Friday, October 12, 2012

Don't take it too hard

This one appeared yesterday on Nature news.

__________________________________________________

A study of scientific papers’ histories from submission to publication unearths some unexpected patterns

Just had your paper rejected? Don’t worry – that might boost its eventual citation tally. An excavation of the usually hidden trajectories of scientific papers from journal to journal before publication has found that papers published in a journal after having first been submitted and rejected elsewhere receive significantly more citations on average than ones submitted only to that journal.

This is one of the unexpected insights offered by the study, conducted by Vincent Calcagno of the French Institute for Agricultural Research in Sophia-Antipolis and his colleagues [1]. They have tracked the submission histories of 80,748 scientific articles published in 923 journals between 2006 and 2008, based on the information provided by the papers’ authors.

Using this information, the researchers constructed a network of manuscript flows: a link exists between two journals if a manuscript initially submitted to one of them was rejected and subsequently submitted to the other. The links therefore have a directional character, like flows in a river network.
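
To make the construction concrete, here is a toy sketch of how such a flow network might be assembled from authors’ submission histories. The journal names and counts are invented, and this is emphatically not the authors’ actual code.

```python
# Toy sketch (made-up data) of the kind of directed network described above:
# an edge A -> B means a manuscript rejected at journal A was later published
# in journal B; edge weights count how many manuscripts took that route.
from collections import Counter

# (first_submission, final_publication) pairs as reported by authors
histories = [
    ("Journal of Broad Results", "Archive of Plant Science"),
    ("Journal of Broad Results", "Archive of Plant Science"),
    ("Journal of Broad Results", "Annals of Genetics"),
    ("Annals of Genetics", "Archive of Plant Science"),
]

flows = Counter((src, dst) for src, dst in histories if src != dst)
for (src, dst), n in flows.items():
    print(f"{src} -> {dst}: {n} manuscript(s)")
```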

“The authors should be commended for assembling this previously hidden data”, says physicist Sidney Redner of Boston University, a specialist on networks of scientific citation.

Some of what Calcagno and colleagues found was unsurprising. On the whole, the network was modular, composed of distinct clusters that corresponded to subject categories, such as plant sciences, genetics and developmental biology, and with rather little movement of manuscripts between journals in different categories.

It’s no surprise either that the highest-impact journals, such as Nature and Science, are central to the network. What was less expected is that these journals publish a higher proportion of papers previously submitted elsewhere, relative to more specialized and lower-impact publications.

“We expected the opposite trend, and the result is at first sight paradoxical”, says Calcagno. But Michael Schreiber, an expert in bibliometrics at the Technical University of Chemnitz in Germany, argues that this “is not surprising if you turn it around: it means that lower-impact journals get fewer resubmissions.” For one thing, he says, there are more low-impact journals, so resubmissions are more widely spread. And second, low-impact journals will have a lower threshold for acceptance and so will accept more first-time submissions.

On the whole, however, there are surprisingly few resubmissions. Three-quarters of all published papers appear in the journal to which they are first submitted. This suggests that scientists are rather efficient at figuring out where their papers are best suited. Calcagno says he found this surprising: “I expected more resubmissions, in view of the journal acceptance rates I was familiar with.”

Although the papers in this study were all in the biological sciences, the findings show some agreement with a previous study of papers submitted to the leading chemistry journal Angewandte Chemie, which found that most of those rejected ended up being published in journals with a lower impact factor [2].

Whether the same trends will be found for other disciplines remains to be seen, however. “There are clear differences in publication practices of, say, mathematics or economics”, says Calcagno, and he thinks these might alter the proportions of resubmissions.

Perhaps the most surprising finding of the work is that papers published after having been previously submitted to another journal are more highly cited on average than papers in the same journal that haven’t been – regardless of whether the resubmissions moved to journals with higher or lower impact.

Calcagno and colleagues think that this reflects the improving influence of peer review: the input from referees and editors makes papers better, even if they get rejected initially.

It’s a heartening idea. “Given the headaches encountered during refereeing by all parties involved, it is gratifying that there is some benefit, at least by citation counts”, says Redner.

But that interpretation has yet to be verified, and contrasts with previous studies of publication histories which found that very few manuscripts change substantially between initial submission and eventual publication [2].

Nonetheless, there is apparently some reason to be patient with your paper’s critics – they’ll do you good in the end. “These results should help authors endure the frustration associated with long resubmission processes”, say the researchers.

On the other hand, the conclusions that Schreiber draws for journal editors might please authors less: “Reject more, because more rejections improve quality.”

References
1. Calcagno, V. et al., Science Express doi: 10.1126/science.1227833 (2012).
2. Bornmann, L. & Daniel, H.-D. Angew. Chem. Int. Ed. 47, 7173-7178 (2008).

The lightning seeds


Here’s my previous piece for BBC Future. A new one just went up – will add that soon. This Center for Lightning Research in Florida looks fairly awesome, as this picture shows – that’s what I call an experiment!

________________________________________________________________

It seems hard to believe that we still don’t understand what causes lightning during thunderstorms – but that’s a fact. One idea is that lightning strikes are triggered by particles streaming into the atmosphere from space, which release showers of electrons that seed the discharge. A new study interrogates that notion and finds that, if there’s anything in it, it’s probably not quite in the way we thought.

Famously, Benjamin Franklin was one of the first people to investigate how lightning is triggered. He was right enough to conclude that lightning is a natural electrical discharge – those were the early days of harnessing electricity – but it’s not clear that his celebrated kite-and-key experiment ever went beyond a mere idea, not least because the kite was depicted, in Franklin’s account, as being flown – impossibly – out of a window.

In some ways we’ve not got much further since Franklin. It’s not yet agreed, for example, how a thundercloud gets charged up in the first place. Somehow the motions of air, cloud droplets, and precipitation (at that altitude, ice particles) conspire to separate positive from negative charge at the scale of individual molecules. It seems that ice particles acquire electrical charge as they collide, rather as rubbing can induce static electricity, and that somehow smaller ice particles tend to become positively charged while larger ones become negatively charged. As the small particles are carried upwards by convection currents, the larger ones sink under gravity, and so their opposite charges get separated, creating an electrical field. A lightning strike discharges this field – it is basically a gigantic spark jumping between the ‘live wire’ and the ‘earth’ of an electrical circuit, in which the former is the charged cloud and the latter is literally the earth.

While many details of this process aren’t at all clear, one of the biggest mysteries is how the spark gets triggered. For the electrical fields measured in thunderclouds don’t seem nearly big enough to induce the so-called ‘electrical breakdown’ needed for a lightning strike, in which air along the lightning path becomes ionized (its molecules losing or gaining electrons to become electrically charged). It’s rather as if a spark were to leap spontaneously out of a plug socket and hit you – the electric field just isn’t sufficient for that to happen.
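
To give a feel for the mismatch (these are rough, order-of-magnitude figures of my own, not numbers from the study): dry air near sea level breaks down at a field of roughly three million volts per metre, while the fields measured inside thunderclouds are typically a tenth of that or less.

```python
# Order-of-magnitude comparison (my rough figures, not from the paper):
# conventional breakdown of dry air near sea level needs a field of roughly
# 3 MV/m, while fields measured inside thunderclouds are far smaller.
breakdown_field = 3.0e6      # V/m, approximate dielectric strength of air
thundercloud_field = 2.0e5   # V/m, typical measured peak value (rough)

print(f"Shortfall: about a factor of {breakdown_field / thundercloud_field:.0f}")
# Hence the need for something, such as a cosmic-ray shower, to seed the spark.
```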

Something is therefore needed to ‘seed’ the lightning discharge. In 1997 Russian scientist Alexander Gurevich and his coworkers in Moscow suggested that perhaps the seed is a cosmic ray: a particle streaming into the atmosphere from outer space at high energy. These particles – mostly protons and atomic nuclei – pervade the universe, being produced in awesomely energetic astrophysical processes, and they are constantly raining down on Earth. If a cosmic ray collides with an air molecule, this can kick out a spray of fundamental particles and fragments of nuclei. Those in turn interact with other molecules, ionizing them and generating a shower of electrons.

In the electric field of a thundercloud, these electrons are accelerated, much as particles are in a particle accelerator, creating yet more energetic collisions in a ‘runaway’ process that builds into a lightning strike. This process is also expected to produce X-rays and gamma-rays, which are spawned by ‘relativistic’ electrons that have speeds approaching the speed of light. Since bursts of these rays have been detected by satellites during thunderstorms, Gurevich’s idea of cosmic-ray-induced lightning seemed plausible.

If it’s right, the avalanche of electrons should also generate radio waves, which would be detectable from the ground. Three years ago Joseph Dwyer of the Florida Institute of Technology began trying to detect such radio signals from thunderstorms, as well as using arrays of particle detectors to look for the showers of particles predicted from cosmic-ray collisions. These and other studies by Dwyer and other groups are still being conducted (literally) at the International Center for Lightning Research and Testing at the US Army base of Camp Blanding in Florida.

But meanwhile, Dwyer has teamed up with Leonid Babich and his colleagues at the Russian Federal Nuclear Center in Sarov to delve further into the theory of Gurevich’s idea. (The Russian pre-eminence in this field of the electrical physics of the atmosphere dates from the cold-war Soviet era.) They have asked whether the flux of high-energy cosmic-rays, with their accompanying runaway electron avalanches, is sufficient to boost the conductivity of air and cause a lightning strike.

To do that, the researchers have worked through the equations describing the chances of cosmic-ray collisions, the rate of electron production and the electric fields this induces. The equations are too complicated to be solved by hand, but a computer can crunch through the numbers. And the results don’t look good for Gurevich’s hypothesis: runaway electron avalanches produced by cosmic-ray showers just don’t seem capable of producing electrical breakdown of air and lightning discharge.

However, all is not lost. As well as the particle cascades caused by collisions of high-energy cosmic rays, the atmosphere can also be electrified by the effects of cosmic rays with lower energy, which are more plentiful. When these collide with air molecules, the result is nothing like as catastrophic: they simply ionize the molecules. But a gradual build-up of such ionized particles within a thundercloud could, according to these calculations, eventually produce a strong enough electrical field to permit a lightning discharge. That possibility has yet to be investigated in detail, but Dwyer and colleagues think that it leaves an avenue still open for cosmic rays to lie at the origin of thunderbolts.

Paper: L. P. Babich, E. I. Bochkov, J. R. Dwyer & I. M. Kutsyk, Journal of Geophysical Research 117, A09316 (2012).

Monday, October 08, 2012

Chemists get the blues

Just got back from judging the Chemistry World science writing competition. Makes me feel old, or perhaps just reminds me that I am. Anyway, many congratulations to the winner Chris Sinclair, whose article I believe will appear soon in Chemistry World. Meanwhile, here is my last Crucible column.
__________________________________________________

“Ultramarine blue is a colour illustrious, beautiful, and most perfect, beyond all other colours”, wrote the Italian artist Cennino Cennini in the late fourteenth century. He and his contemporaries adored this mineral pigment for its rich, deep lustre. But they didn’t use it much, at least not unless they had a particularly rich client, because it was so costly. As the name implies, it came from ‘over the seas’ – all the way from what is now Afghanistan, where mines in the remote region of Badakhshan were the only known source of the parent mineral, lapis lazuli, for centuries. Not only was ultramarine expensive to import, but it was laborious to make from the raw material, in a process of grinding and repeated washing that separated the blue colorant from impurities. So ultramarine could cost more than its weight in gold, and painters reserved it for the most precious parts of their altarpieces, especially the robes of the Virgin Mary.

Blue has always been a problem for artists. One of the first synthetic pigments, Egyptian blue (calcium copper silicate), was pale. The best mineral alternative to ultramarine, called azurite (hydrous copper carbonate), was more readily accessible but greenish rather than having ultramarine’s glorious purple-reddish tinge. Around 1704 a misconceived alchemical experiment yielded Prussian blue (iron ferrocyanide), which is blackish, prone to discolour, and decomposes to hydrogen cyanide under mildly acidic conditions. The discovery of cobalt blue (cobalt aluminate) in 1802, followed by a synthetic route to ultramarine in 1826, seemed to solve these problems of hue, stability and cost, but even these ‘artificial’ blues have drawbacks: cobalt is rather toxic, and ultramarine is sensitive to heat, light and acid, which limits its use in some commercial applications.

This is why the identification of a new inorganic blue pigment in 2009 looked so promising. Mas Subramanian and coworkers at Oregon State University found that trivalent manganese ions produce an intense blue colour, with the prized ‘reddish’ shade of ultramarine, when they occupy a trigonal bipyramidal site in metal oxides [1]. The researchers substituted Mn3+ for some indium ions in yttrium indium oxide (YInO3), forming a solid solution of YInO3 and YMnO3, which has a blue colour even though the two oxides themselves are white and black respectively. The depth of the colour varies from pale to virtually black as the manganese content is increased, although it is significantly blue even for only about 2 percent substitution. The researchers found that inserting manganese into other metal oxides with the same coordination geometry also offers strong blues. Meanwhile, similar substitutions of iron (III) and copper (II) generate bright orange and green pigments [2,3]. Those are traditionally less problematic, however, and while the materials may prove to have useful magnetic properties, it’s the blue that has attracted colour manufacturers.

Producing a commercially viable pigment is much more than a matter of finding a strongly coloured substance. It must be durable, for example. Although ultramarine, made industrially from cheap ingredients, is now available in quantities that would have staggered Titian and Michelangelo, it fades in direct sunlight because the sodalite framework is degraded and the sulphur chromophores are released and decompose – a process only recently understood [4]. This rules out many uses for exterior coatings. In contrast, the manganese compound has good thermal, chemical and light stability.

One of the key advantages of the YIn1-xMnxO3 compounds over traditional blues, however, is their strong reflectivity in the near-infrared region. Many other pigments, including cobalt blue and carbon black, have strong absorption bands here. This means that surfaces coated with these pigments heat up when exposed to strong sunlight. Building roofs coloured with such materials become extremely hot and can increase the demand for air conditioning in hot climates; instrument panels and steering wheels of cars may become almost too hot to touch. That’s why there is a big industrial demand for so-called ‘cool’ pigments, which retain their absorbance in the visible region but have low absorbance in the infrared. These can feel noticeably cooler when exposed to sunlight.

This aspect in particular has motivated the Ohio-based pigment company Shepherd Color to start exploring the commercial potential of the new blue pigment. One significant obstacle is the price of the indium oxide (In2O3) used as a starting material. This is high, because it is produced (mostly in China) primarily for the manufacture of the transparent conductive oxide indium tin oxide for electronic displays and other optoelectronic applications. Those uses demand that the material be made with extremely high purity (around 99.999 percent), which drives up the cost. In principle, the low-purity In2O3 that would suffice for making YIn1-xMnxO3 could be considerably cheaper, but is not currently made at all as there is no market demand.

That’s why Subramanian and colleagues are now trying to find a way of eliminating the indium from their manganese compounds – to find a cheaper host that can place the metal atoms in the same coordination environment. If they succeed, it’s possible that we’ll see yet another revolution in the chemistry of the blues.

1. A. E. Smith et al., J. Am. Chem. Soc. 131, 17084-17086 (2009).
2. A. E. Smith, A. W. Sleight & M. A. Subramanian, Mater. Res. Bull. 46, 1-5 (2011).
3. P. Jiang, J. Li, A. W. Sleight & M. A. Subramanian, Inorg. Chem. 50, 5858-5860 (2011).
4. E. Del Federico et al., Inorg. Chem. 45, 1270-1276 (2006).

Thursday, October 04, 2012

The cost of useless information

This was a damned difficult story to write for Nature news, and the published version is a fair bit different to this original text. I can’t say which works best – perhaps it’s just one of those stories for which it’s helpful to have more than one telling. Part of the difficulty is that, to be honest, the real interest is fundamental, not in terms of what this idea can do in any applied sense. Anyway, I’m going to append to this some comments from coauthor David Sivak of the Lawrence Berkeley National Laboratory, which help to explain the slightly counter-intuitive notion of proteins being predictive machines with memories.

__________________________________________________

Machines are efficient only if they collect information that helps them predict the future

The most efficient machines remember what’s happened to them, and use that memory to predict what the future holds. This conclusion of a new study by Susanne Still of the University of Hawaii at Manoa and her coworkers [1] should apply equally to ‘machines’ ranging from molecular enzymes to computers and even scientific models. It not only offers a new way to think about processes in molecular biology but might ultimately lead to improved computer model-building.

“[This idea] that predictive capacity can be quantitatively connected to thermodynamic efficiency is particularly striking”, says chemist Christopher Jarzynski of the University of Maryland.

The notion of constructing a model of the environment and using it for prediction might feel perfectly familiar for a scientific model – a computer model of weather, say. But it seems peculiar to think of a biomolecule such as a motor protein doing this too.

Yet that’s just what it does, the researchers say. A molecular motor does its job by undergoing changes in the conformation of the proteins that comprise it.

“Which conformation it is in now is correlated with what states the environment passed through previously”, says Still’s coworker Gavin Crooks of the Lawrence Berkeley National Laboratory in California. So the state of the molecule at any instant embodies a memory of its past.

But the environment of a biomolecule is full of random noise, and there’s no gain in the machine ‘remembering’ the fine details of that buffeting. “Some information just isn't useful for making predictions”, says Crooks. “Knowing that the last coin toss came up heads is useless information, since it tells you nothing about the next coin toss.”

If a machine does store such useless information, eventually it has to erase it, since its memory is finite – for a biomolecule, very much so. But according to the theory of computation, erasing information costs energy – it results in heat being dissipated, which makes the machine inefficient.
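
The price of forgetting is set by Landauer’s principle, a standard result in the physics of computation (not spelled out in the paper itself): erasing a single bit must dissipate at least kT ln 2 of heat.

```python
# Landauer's limit (standard textbook result, not from the paper itself):
# erasing one bit of stored information dissipates at least k*T*ln(2) of heat.
import math

k = 1.381e-23   # Boltzmann's constant, J/K
T = 300.0       # roughly room (or body) temperature, K

cost_per_bit = k * T * math.log(2)
print(f"Minimum heat per erased bit at {T:.0f} K: {cost_per_bit:.2e} J")
# About 3e-21 J: tiny, but it adds up for a machine that hoards useless memories.
```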

On the other hand, information that has predictive value is valuable, since it enables the machine to ‘prepare’ – to adapt to future circumstances, and thus to work optimally. “My thinking is inspired by dance, and sports in general, where if I want to move more efficiently then I need to predict well”, says Still.

Alternatively, think of a vehicle fitted with a smart driver-assistance system that uses sensors to anticipate its imminent environment and react accordingly – to brake in an optimal manner, and so maximize fuel efficiency.

That sort of predictive function costs only a tiny amount of processing energy compared with the total energy consumption of a car. But for a biomolecule it can be very costly to store information, so there’s a finely balanced trade-off between the energetic cost of information processing and the inefficiencies caused by poor anticipation.

“If biochemical motors and pumps are efficient, they must be doing something clever”, says Still. “Something in fact tied to the cognitive ability we pride ourselves with: the capacity to construct concise representations of the world we have encountered, which allow us to say something about things yet to come.”

This balance, and the search for concision, is precisely what scientific models have to negotiate too. Suppose you are trying to devise a computer model of a complex system, such as how people vote. It might need to take into account the demographics of the population concerned, and networks of friendship and contact by which people influence each other. Might it also need a representation of mass media influences? Of individuals’ socioeconomic status? Their neural circuitry?

In principle, there’s no end to the information the model might incorporate. But then you have an almost one-to-one mapping of the real world onto the model: it’s not really a model at all, but just a mass of data, much of which might end up being irrelevant to prediction.

So again the challenge is to achieve good predictive power without remembering everything. “This is the same as saying that a model should not be overly complicated – that is, Occam's Razor”, says Still. She hopes this new connection between prediction and memory might guide intuition in improving algorithms that minimize the complexity of a model for a specific desired predictive power, used for example to study phenomena such as climate change.

References
1. Still, S., Sivak, D. A., Bell, A. J. & Crooks, G. E. Phys. Rev. Lett. 109, 120604 (2012).

David Sivak’s comments:
On the level of a single biomolecule, the basic idea is that a given protein under given environmental conditions (temperature, pH, ionic concentrations, bound/unbound small molecules, conformation of protein binding partners, etc.) will have a particular equilibrium probability distribution over different conformations. Different protein sequences will have different equilibrium distributions for given environmental conditions. For example, an evolved protein sequence is more likely to adopt a folded globular structure at ambient temperature, as compared to a random polypeptide. If you look over the distribution of possible environmental conditions, different protein sequences will differ in the correlations between their conformational state and particular environmental variables, i.e., the information their conformational state stores about the particular environmental variables.

When the environmental conditions change, that equilibrium distribution changes, but the actual distribution of the protein shifts to the new equilibrium distribution gradually. In particular, the dynamics of interconversion between different protein conformations dictates how long it takes for particular correlations with past environmental variables to die out, i.e., for memory of particular aspects of the environment to persist. Thus the conformational preferences (as a function of environmental conditions) and the interconversion dynamics determine the memory of particular protein sequences for various aspects of their environmental history.

One complication is that this memory, this correlation with past environmental states, may be a subtle phenomenon, distributed over many detailed aspects of the protein conformation, rather than something relatively simple like the binding of a specific ion. So, we like to stress that the model is implicit. But it certainly is the case that an enzyme mutated at its active site could differ from the wild-type protein in its binding affinity for a metal ion, and could also have a different rate of ion dissociation. Since the presence or absence of this bound metal ion embodies a memory of past ion concentrations, the mutant and wild-type enzymes would differ in their memory.

For a molecular motor, there are lots of fluctuating quantities in the environment, but only some of these fluctuations will be predictive of things the motor needs for its function. An efficient motor should not, for example, retain memory of every water molecule that collides with it, even if it could, because that will provide negligible information of use in predicting future fluctuations of those quantities that are relevant for the motor's functioning.

In vivo, the rotary F0F1-ATP synthase is driven by protonmotive flow across the mitochondrial membrane. The motor could retain conformational correlations with many aspects of its past history, but this analysis says that the motor will behave efficiently if it remembers molecular events predictive of when the next proton will flow down its channel, and loses memory of other molecular events irrelevant to its function. In order to efficiently couple that flow to the functional role of the motor, synthesizing ATP, the motor should retain information about the past that is predictive of such protonmotive flow, but lose any correlation with irrelevant molecular events, such as incidental collisions by water molecules.

But we are hesitant to commit to any particular example. We are all thinking about good concrete instantiations of these concepts for future extensions of this work. Right now, the danger of very specific examples like the F0F1 motor is that people who know much more about the particular system than we do might get bogged down in arguing the details, such as what exactly drives the motor, whether that driving involves conformational selection or induced fit, how concerted the mechanism is, etc., when the main point is that this framework applies regardless of the exact manner in which the system and environment are instantiated. Not to mention the fact that some subtle solvent rearrangements at the mouth of the channel may in fact be very predictive of future proton flow.

Friday, September 21, 2012

How to copy faster

Yes, there’s more tonight. It's Friday. Here’s my latest news story for Nature. This one was tough, but hopefully worth it. Possibly I ended up with a better explanation on the Nature site than here of why reversibility is linked to the minimum heat output. But it’s a tricky matter, so no harm in having two bites of the cherry.
____________________________________________
Bacteria replicate close to the physical limit of efficiency, says a new study – but might we make them better still?

Bacteria such as E. coli typically take about 20 minutes to replicate. Can they do it any faster? A little, but not much, says biological physicist Jeremy England of the Massachusetts Institute of Technology. In a preprint [1], he estimates that bacteria are impressively close to – within a factor of 2-3 of – the limiting efficiency of replication set by the laws of physics.

“It is heartening to learn this”, says Gerald Joyce, a chemist at the Scripps Research Institute in La Jolla, California, whose work includes the development of synthetic replicating molecules based on RNA. “I suppose I should take some comfort that our primitive RNA-based self-replicator apparently operates even closer to the thermodynamic lower bound”, he adds.

At the root of England’s work is a question that has puzzled many scientists: how do living systems seem to defy the Second Law of Thermodynamics by sustaining order instead of falling apart into entropic chaos? In his 1944 book What is Life?, physicist Erwin Schrödinger asserted that life feeds on ‘negative entropy’ – which was really not much more than restating the problem.

Life doesn’t really defy the Second Law because it produces entropy to compensate for its own orderliness – that is why we are warmer than our usual surroundings. England set out to make this picture rigorous by estimating the amount of heat that must unavoidably be produced when a living organism replicates – one of the key defining characteristics of life. In other words, how efficient can replication be while still respecting the Second Law?

To attack this problem, England uses the concepts of statistical mechanics, the microscopic basis of classical thermodynamics. Statistical mechanics relates different arrangements of a set of basic constituents, such as atoms or molecules, to the probabilities of their occurring. The Second Law – the inexorable increase of entropy or, loosely speaking, disorder – is generally considered to follow from the fact that there are many more disorderly arrangements of such constituents than orderly ones, so that these are far more likely to be the outcome of the particles’ movements and interactions.

The question is: what is the cheapest way of going from one bacterium to two, in terms of how much energy is involved (technically free energy, which takes into account both the energy needed to make and break chemical bonds and the associated entropy changes)? That turns out to be a matter of how easily one can reverse the process.

For the analogous question of the minimal cost of doing a computation – combining two bits of information in a logic operation – the answer depends on how much energy it costs to reset a bit and ‘undo’ the computation. This quantity places a fundamental limit on how low the power consumption of a computer can be.
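For comparison, the familiar figure here is Landauer’s bound: erasing one bit must dissipate at least kT ln2 of heat. A quick back-of-envelope check at room temperature (my own worked example, not a number from England’s paper):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # roughly room temperature, K

# Landauer's limit: minimum heat dissipated in erasing (resetting) one bit of information.
q_min = k_B * T * math.log(2)
print(f"{q_min:.2e} J per bit erased")   # about 2.9e-21 J
```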

“The probability that the reverse transition from two cells to one could happen is the quantity that tells us how irreversible the replication process is”, says England. “Whatever this quantity is, it need not be dominated by the trajectories that would just look like the movie playing backwards: there are many ways of starting with two cells and ending up with one. I’m asking what class of paths should dominate that process.”

The problem is precisely those “many ways”. “You can drive yourself nuts trying to think of everything”, England says. But he considered the most general reversal route: the chance that the atoms in a newly replicated bacterium happen to move in such a way that all of its molecules disintegrate. That is, of course, immensely unlikely. But by figuring out exactly how unlikely, England can place a rough limit on how reversible replication is, and thus on its minimum energy cost.
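The link between that improbability and heat can be sketched with a toy calculation (my own illustration of the general relation behind such arguments, not England’s actual estimate): roughly speaking, and ignoring internal entropy changes, the heat dissipated in the forward process must be at least kT times the logarithm of how much more probable the forward path is than its reverse.

```python
import math

k_B, T = 1.380649e-23, 300.0   # Boltzmann constant (J/K), temperature (K)

def min_heat(p_forward, p_reverse):
    """Schematic lower bound on dissipated heat, ignoring internal entropy changes:
    the more improbable the reverse path, the more heat the forward process must shed."""
    return k_B * T * math.log(p_forward / p_reverse)

# Purely illustrative numbers: if spontaneous disintegration of a new bacterium were,
# say, 1e-300 times as probable as the forward replication path...
print(min_heat(1.0, 1e-300))   # ~2.9e-18 J: greater irreversibility means more heat
```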

By plugging some numbers into the equations describing the likelihood of a replication being reversed – how long on average the chemical bonds holding proteins together will last, say, and how many such bonds there are in a bacterium – England estimates that the minimal amount of heat a bacterium must generate to replicate is a little more than a third of the amount a real E. coli cell generates. That’s impressive: if the cells were only twice as efficient, they’d be approaching the maximum efficiency physically possible.

“The weakest point in my argument is the assumption that we know what the ‘most likely very unlikely path’ for spontaneous disintegration of a bacterium is”, England admits. “We’re talking about things that simply never happen, so we can’t have much intuition about them.” As a result, he says that his treatment “certainly shouldn't be thought of as a proof as much as a plausibility argument.”

It’s precisely this that troubles Joyce, who compares the calculation with the joke about a physicist trying to solve a problem in dairy farming. “As an experimentalist, it is hard for me to relate to this ‘spherical cow’ treatment of a self-replicating system”, Joyce says. “Here E. coli seems to be nothing more than the equivalent of its dry weight in proteins.”

England says that we can hardly expect bacteria to do much better than they do given that they have to cope with many different environments and so can’t be optimized for any particular one. But if we want to engineer a bacterium for a highly specialized task using synthetic biology, he says, then there is room for improvement: such a modified E. coli could be at least twice as efficient at replicating, which means that a colony could grow twice as fast. That could be useful in biotechnology. “We may be able to build self-replicators that grow much more rapidly than the ones we're currently aware of,” he says.

He also concludes that there’s a trade-off between speed of replication and robustness: a replicator that is prone to falling apart produces less heat, and so can replicate faster, than one that is more robust. The findings might therefore have implications for understanding the origin of life. Many researchers, including Joyce, suspect that DNA-based replicators were preceded on the early Earth by those based on RNA, which both encoded genetic information and acted as an enzyme-like catalyst for proto-biological reactions. This fits with England’s hypothesis, because RNA is less chemically stable than DNA, and so would be more fleet and nimble in getting replication started. “Something else than RNA might work even better on a shorter timescale at an earlier stage,” England adds.

References
1. England, J. L. preprint http://www.arxiv.org/abs/1209.1179 (2012).

What your snoring says about you

OK, BBC Future again. Who said Zzzzz? [Incidentally, who did decree in kids’ books that sleeping would be denoted by “Zzzzz”? It sounds nothing like sleeping, and drives me crazy. Even Julia Donaldson does it. Listen, this is a big deal with a 3- and 7-year-old.]
______________________________________________________
Snoring is no joke for partners, but it’s not much fun for the snorer either. Severe snoring is the sound of a sleeper fighting for breath, as relaxed muscles in the pharynx (the top of the throat) allow the airway to become blocked. Lots of people snore, but the loud and irregular snoring caused by a condition known as obstructive sleep apnea (OSA) can leave a sufferer tired and fuddled during the day, even though he or she is rarely fully awoken by the night-time disruption. It’s difficult to treat – there are currently no effective drugs, and the usual intervention involves a machine that inflates the airway, or in extreme cases surgery. But the first step is to distinguish genuine OSA, which afflicts between 4 and 10 percent of the population, from ordinary snoring.

That kind of diagnosis is costly and laborious too. Often a snorer will need to sleep under observation in a laboratory. But some researchers believe that there is a signature of OSA encoded in the sounds of the snores themselves – which can be easily recorded at home for later analysis. A team in Brazil that brings together medics and physicists has now found a way of analysing snore recordings that is able not only to spot OSA but can distinguish between mild and severe cases.

Diagnosing OSA from snore sounds is not a new idea. The question is how, if at all, the clinical condition is revealed by the noises. Does OSA affect the total number of snores, or their loudness, or their acoustic quality, or their regularity – or several or all of these things? In 2008 a team in Turkey showed that the statistical regularity of snores has the potential to discriminate ordinary sleepers from OSA sufferers. And last year a group in Australia found that a rather complex analysis of the sound characteristics, such as the pitch, of snores might be capable of providing such a diagnosis, at least in cases where the sound is recorded under controlled and otherwise quiet conditions.

Physicist Adriano Alencar of the University of São Paulo and his colleagues have now added to this battery of acoustic methods for identifying OSA. They recorded the snoring of patients referred to the university’s Sleep Laboratory because of suspected OSA, and studied the measurements for a fingerprint of OSA in the regularity of snores.

A person who snores but does not suffer from OSA typically does so in synchrony with breathing, with successive snores less than about ten seconds apart. In these cases the obstruction of the airway that triggers snoring comes and goes, so that snoring might stop for perhaps a couple of minutes or more before resuming. So for ‘healthy’ snoring, the spacing between snores tends to be either less than ten seconds or, from time to time, more than about 100 seconds.

OSA patients, meanwhile, have snore intervals that tend to fall in between, within this 10–100-second window. The snores follow one another in train, but with a spacing dictated by the more serious restriction of airflow rather than by the steady in-and-out of breathing. The researchers measured what they call a snore time interval index, which is a measure of how often the time between snores falls between 10 and 100 seconds. They compare this with a standard clinical measure of OSA severity called the apnea-hypopnea index (AHI), which is obtained from complicated monitoring of a sleeping patient’s airflow in a laboratory. (Hypopnea is the milder form of OSA in which the airway becomes only partially blocked.)
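In outline, such an index is easy to compute from a list of snore timestamps. Here is a minimal sketch (my own, treating the index simply as the fraction of inter-snore gaps in the 10–100-second band; the exact definition and normalization in the paper may differ):

```python
import numpy as np

def snore_interval_index(snore_times, lo=10.0, hi=100.0):
    """Fraction of gaps between successive snores that fall in the lo-hi window (seconds)."""
    gaps = np.diff(np.sort(np.asarray(snore_times, dtype=float)))
    if gaps.size == 0:
        return 0.0
    return float(np.mean((gaps > lo) & (gaps < hi)))

# A 'healthy' pattern: snores every few seconds, with occasional long pauses.
healthy = np.cumsum([6, 6, 7, 150, 6, 5, 6, 200, 7, 6])
# An apneic pattern: snores spaced by tens of seconds.
apneic = np.cumsum([35, 40, 28, 60, 45, 30, 55, 42, 38, 50])
print(snore_interval_index(healthy), snore_interval_index(apneic))   # low vs high
```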

Alencar and colleagues find that the higher the value of their snore interval index, the higher the patient’s corresponding AHI is. In other words, the snore index can be used as a pretty reliable proxy for the AHI: you can just record the snores rather than going through the rigmarole of the whole lab procedure.

That’s not all. The researchers could also use a snore recording to figure out how snores are related to each other – whether there is a kind of ‘snore memory’, so that, say, a particular snore is linked to a recent burst of snoring. This memory is measured by a so-called Hurst exponent, which reveals hidden patterns in a series of events that, at first glance, look random and disconnected. An automated computer analysis of the snore series could ‘learn’, based on training with known test cases, to use the Hurst exponent to distinguish moderate from severe cases of OSA, making the correct diagnosis for 16 of 17 patients.
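The Hurst exponent can be estimated in several ways; the sketch below uses classic rescaled-range (R/S) analysis (my own illustration, not necessarily the method Alencar and colleagues used). A value near 0.5 indicates a memoryless series, while values above 0.5 indicate long-range persistence.

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Estimate the Hurst exponent of a series by rescaled-range (R/S) analysis."""
    x = np.asarray(x, dtype=float)
    sizes, rs_means = [], []
    size = min_chunk
    while size <= len(x) // 2:
        rs = []
        for start in range(0, len(x) - size + 1, size):
            chunk = x[start:start + size]
            dev = np.cumsum(chunk - chunk.mean())   # cumulative deviation from the chunk mean
            spread = dev.max() - dev.min()          # range of that cumulative deviation
            if chunk.std() > 0:
                rs.append(spread / chunk.std())
        sizes.append(size)
        rs_means.append(np.mean(rs))
        size *= 2
    # The Hurst exponent is the slope of log(R/S) against log(window size).
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_means), 1)
    return slope

# Illustrative run on a synthetic, memoryless series of inter-snore intervals: H ~ 0.5.
rng = np.random.default_rng(0)
print(hurst_rs(rng.exponential(scale=30.0, size=2048)))
```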

The work of Alencar and colleagues hasn’t yet been peer-reviewed. But in the light of the earlier studies of OSA signatures in snore sounds, it adds to the promise of an easy and cheap way of spotting snorers who have a clinical condition that needs treatment. What’s more, it supports a growing belief that the human body generates several subtle but readily measured indicators of health and disease, revealed by statistical regularities in apparently random signals. For example, sensitive measurement of the crackling sounds generated when our airways open as we breathe in can tell us about the condition of our lungs, perhaps revealing respiratory problems, while ‘buried’ statistical regularities in heartbeat intervals or muscle movements encode information about cardiac health or sleep states. Our bodies tell us a lot about ourselves, if only we know how to listen.

Reference: A. M. Alencar et al., preprint http://www.arxiv.org/abs/1208.2242.

The silk laser

Here is my latest piece for BBC Future (which you can’t see in the UK). I have at least one other piece from this column yet to put up here – that will follow.
______________________________________________________________
Electronic waste from obsolete phones, cameras, computers and other mobile devices is one of the scourges of the age of information. The circuitry and packaging is not only non-biodegradable but is laced with toxic substances such as heavy metals. Imagine, then, a computer that can be disposed of by simply letting soil bacteria eat it – or even, should the fancy take you, by eating it yourself. Biodegradable information technology is now closer to appearing on the menu following the announcement by Fiorenzo Omenetto of Tufts University in Medford, Massachusetts, and coworkers of a laser made from silk.

In collaboration with David Kaplan, a specialist in the biochemistry of silk at Tufts, Omenetto has been exploring the uses of silk for several years. He is convinced that it can offer us much more than glamorous clothing. It is immensely strong – stronger, weight for weight, than steel – and can be used to make tough fibres and ropes. In the Far East silk was once used to pad armour, and in pre-revolutionary Russia a form of primitive bullet-proof clothing was made from it. It can be moulded like plastic, yet is biodegradable: silk cups can be thrown away to quickly break down in the environment. It is also biocompatible, and so could be used to make medical implants such as screws to hold together mending bones, or artificial blood vessels. You can even eat it safely, although it doesn’t taste good.

What’s more, all of this comes from sustainable and environmentally friendly processing. Spiders and silkworms make silk in water at ordinary body temperature, spinning the threads from a solution of the silk protein. Harvesting natural silk is one option, but the genes that encode the protein can be transferred to other species, so that it can be produced by bacteria in fermentation vats, or even expressed in the milk of transgenic goats. Turning this raw silk protein into strong fibres is not easy – it’s hard to reproduce the delicate thread-spinning apparatus of spiders – but if you just want to cast films of silk as if it were a plastic then this isn’t an issue.

Perhaps some of the most remarkable potential uses for this ancient material are in high-tech optical technology, like that which forms the basis of optical storage and telecommunications. Using moulds patterned on the microscopic scale, silk can be shaped into structures that reflect and diffract light, like those on DVDs – it will support holograms, for instance. Its transparency commends it for optical fibres, and Omenetto and colleagues have previously shaped silk films into so-called waveguides, rather like very thin optical fibres laid down directly on a solid surface such as a silicon chip. But rather than just using silk to passively guide and direct light, they wanted to generate light from it too. This is what the silk laser enables.

In a laser, a light-emitting substance – the lasing medium – is sandwiched between mirrors which allow the light to bounce back and forth. The medium is placed in a light-emitting state by pumping in energy, typically using either another light source or an electrical current. When light is emitted, its trapping by the mirrors means that it triggers still more emission as it bounces to and fro, so that all the light is released in an avalanche. This puts all the light waves in step with one another, which is what gives laser light its intensity and narrowly focused beam. The beam eventually escapes through one of the mirrors, which is designed to be only partially reflective.

Because of their brightness, focus and rapid on-off switching, lasers are used in telecommunications to transmit information as a stream of light pulses that encode the binary digital information of computers and microprocessors: a pulse corresponds to “1”, say, and a gap in the pulse stream to a “0”. In this way, information can be fed over long distances down optical fibres. Increasingly, computer and electrical engineers are now aiming to move and process information directly on microchips in the form of light. Then there’s no cumbersome light-to-electrical conversion of data at each end of the transmission, and light-based information processing could potentially be faster and carry more signal, since different data streams can be conveyed simultaneously in light of different colours. These so-called photonic chips could transform information technology, and Omenetto believes that with silk it should be possible to create ‘biophotonic’ circuits. That demands not just channelling light but generating it – in a laser.

Silk doesn’t absorb or emit light at the visible and infrared frequencies used in conventional telecommunications and optical information technology. So to make it into a laser medium, one needs to add substances that do. Organic dyes (carbon-based molecules) are already widely used, dispersed in a liquid solvent or in some solid matrix, to make dye lasers. The researchers figured they could mix such a dye into silk. They used one called stilbene, which is water-soluble and closely related to chemical compounds found in plants and used as textile brighteners.

Working with Stefano Toffanin and colleagues at the Institute for the Study of Nanostructured Materials in Bologna, Italy, Omenetto and his coworkers patterned a thin layer of silica (silicon dioxide) on the surface of a slice of silicon into a series of grooves about a quarter of a micrometre wide, which act as a mirror for the bluish-purple light that stilbene emits. They then covered this with a layer of silk spiced with the dye, and found that when they pumped this structure with ultraviolet light, it emitted light with the characteristic signature of laser light: an intense beam with a very narrow spread in frequency.

Making the device on silicon means that it could potentially be controlled electronically and merged with conventional chip circuitry. But that’s not essential – silk-based light circuits and devices could be laid down on other materials, perhaps on cheap, flexible and degradable plastics.

This isn’t the first time that biological materials have been used to make lasers. For example, in 2002 a team in Japan made one using films of DNA infused with organic dyes. And last year, two researchers in the US and Korea made lasers from living cells that were engineered to produce a fluorescent protein found naturally in a species of jellyfish. But the attraction of silk is that it is cheap, easy to process, biodegradable and already used to make a range of other light-based devices.

There might be even more dramatic possibilities. Recently, Omenetto and colleagues showed that silk is a good support for growing neurons, the cells that communicate nerve signals in the nervous system and the brain. This leads them to speculate that silk might mediate between optical and electronic technology and our nervous system, for example by bringing light sources intimately close to nerve cells for imaging them, or perhaps even developing circuitry that can transmit signals across damaged nerves.

Reference: S. Toffanin et al., Applied Physics Letters 101, 091110 (2012)

Tuesday, September 11, 2012

As easy as ABC?

Here’s my latest news story for Nature. There are a lot of superscripts in here, for which I’ll use the x**n notation. Every time I encounter mathematicians, I’m reminded what a very different world they live in.
_____________________________________________________________________________

If it’s true, a Japanese mathematician’s solution to a conjecture about whole numbers would be an ‘astounding achievement’

The recondite world of mathematics is abuzz with a claim that one of the most important problems in number theory has been solved.

Japanese mathematician Shinichi Mochizuki of Kyoto University has released a 500-page proof of the ABC conjecture, which describes a purported relationship between whole numbers – a so-called Diophantine problem.

The ABC conjecture might not be as familiar to the wider world as Fermat’s Last Theorem, but in some ways it is more significant. “The ABC conjecture, if proved true, at one stroke solves many famous Diophantine problems, including Fermat's Last Theorem”, says Dorian Goldfeld, a mathematician at Columbia University in New York.

“If Mochizuki’s proof is correct, it will be one of the most astounding achievements of mathematics of the 21st century”, he adds.

Like Fermat’s theorem, the ABC conjecture is a postulate about equations of the deceptively simple form A+B=C that relate three whole numbers A, B and C. It involves the concept of a square-free number: one that can’t be divided by the square of any whole number greater than 1. 15 and 17 are square-free numbers, but 16 and 18 – divisible by 4**2 and 3**2 respectively – are not.

The “square-free” part of a number n, denoted sqp(n), is the largest square-free number that can be formed by multiplying prime factors of n. For instance, sqp(18) = 2×3 = 6.

If you’ve got that, you should get the ABC conjecture. Proposed independently by David Masser and Joseph Oesterle in 1985, it concerns a property of the product of the three integers A×B×C, or ABC – or more specifically, of the square-free part of this product, which involves their distinct prime factors.

The conjecture concerns coprime triples, in which A, B and C share no common factor. It states that, for any value of r greater than 1, the ratio sqp(ABC)**r/C has some minimum value greater than zero – in other words, it cannot get arbitrarily close to zero. For example, if A=3 and B=125, so that C=128, then sqp(ABC)=30 and sqp(ABC)**2/C = 900/128, or about 7. In this case, where r=2, sqp(ABC)**r/C is in fact nearly always greater than 1.
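To make the definitions concrete, here is a small script (my own illustration, not part of the conjecture’s formal statement) that computes the square-free part and checks the numbers quoted above:

```python
def sqp(n):
    """Square-free part of n: the product of its distinct prime factors."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            result *= p
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        result *= n
    return result

A, B = 3, 125
C = A + B                      # 128
print(sqp(18))                 # 2 x 3 = 6, as in the text
print(sqp(A * B * C))          # 2 x 3 x 5 = 30
print(sqp(A * B * C)**2 / C)   # 900/128, about 7
```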

It turns out that this conjecture encapsulates many other Diophantine problems, including Fermat’s Last Theorem (which states that A**n + B**n = C**n has no integer solutions if n>2). Like many Diophantine problems, it is at root all about the relationships between prime numbers – according to Brian Conrad of Stanford University, “it encodes a deep connection between the prime factors of A, B and A+B”.

“The ABC conjecture is the most important unsolved problem in Diophantine analysis”, says Goldfeld. “To mathematicians it is also a thing of beauty. Seeing so many Diophantine problems unexpectedly encapsulated into a single equation drives home the feeling that all the subdisciplines of mathematics are aspects of a single underlying unity.”

Unsurprisingly, then, many mathematicians have expended a great deal of effort trying to prove the conjecture. In 2007 the French mathematician Lucien Szpiro, whose work in 1978 led to the ABC conjecture in the first place, claimed to have a proof of it, but it was soon found to be flawed.

Like Szpiro, and also like Andrew Wiles who proved Fermat’s Last Theorem in 1994, Mochizuki has attacked the problem using the theory of elliptic curves, which are the smooth curves generated by algebraic relationships of the sort y**2 = x**3 + ax + b.

There, however, the relationship of Mochizuki’s work to previous efforts stops. In the present and earlier papers he has developed entirely new techniques that very few other mathematicians yet fully understand. “His work is extremely novel”, says Conrad. “It uses a huge number of new insights that are going to take a long time to be digested by the community.”

This novelty invokes entirely new mathematical ‘objects’ – abstract entities analogous to more familiar examples such as geometric objects, sets, permutations, topologies and matrices. “At this point he is probably the only one that knows it all”, says Goldfeld.

As a result, Goldfeld says, “if the proof is correct it will take a long time to check all the details.” The proof is spread over four long papers, each of which rests on earlier long papers. “It can require a huge investment of time to understand a long and sophisticated proof, so the willingness by others to do this rests not only on the importance of the announcement but also on the track record of the authors”, Conrad explains.

Mochizuki’s track record certainly makes the effort worthwhile. “He has proved extremely deep theorems in the past, and is very thorough in his writing, so that provides a lot of confidence”, says Conrad. And he adds that the payoff would be more than a matter of simply verifying the claim. “The exciting aspect is not just that the conjecture may have now been solved, but more importantly that the new techniques and insights he must have had to introduce should be very powerful tools for solving future problems in number theory.”

Mochizuki’s papers:
Paper 1
Paper 2
Paper 3
Paper 4

Friday, September 07, 2012

Cold fusion redux

This obituary of Martin Fleischmann appears in a similar form in the latest issue of Nature. No one, of course, wants to seem carping or churlish in an obit, and I hope this achieves some kind of balance. But I have to admit that, looking back now, I couldn’t help but be reminded of how badly Pons and Fleischmann behaved while cold fusion was at its height. In particular, the way they threatened Utah physicist Michael Salamon, who tried to replicate their experiments with their own equipment, was unforgivable. And it was pretty distasteful to see Fleischmann more recently bad-mouthing all his critics, searching for ways to belittle or dismiss Mark Wrighton, Frank Close, Nate Lewis, Gary Taubes and others. As I try to say here, it’s not getting things wrong that should count against you, but how you handle it.

____________________________________________________

OBITUARY
Martin Fleischmann, 1927-2012

Pioneering electrochemist who claimed to have discovered cold fusion

“Whatever one’s opinion about cold fusion, it should not be allowed to dominate our view of a remarkable and outstanding scientist.” This plea appears in the University of Southampton’s obituary of Martin Fleischmann, who carried out much of the work there that made him renowned as an electrochemist. It is not clear that it will be heeded.

Fleischmann died on 3rd August at the age of 85 after illness from Parkinson’s disease, heart disease and diabetes. He made substantial contributions to his discipline, being the first person to observe surface-enhanced Raman emission (now the basis of a widely used technique) and developing the use of ultramicroelectrodes as sensitive electrochemical probes. But he is best known now for his claim in 1989 to have initiated nuclear fusion on a bench top using only the kind of equipment a school lab might possess.

The ‘cold fusion’ debacle provoked bitter disputes, court cases and controversies that reverberate today. Along with polywater and homeopathy, cold fusion is now regarded as one of the most notorious cases of what chemist Irving Langmuir called ‘pathological science’ – as he put it, “the science of things that aren’t so”.

It would be wrong to draw a veil over cold fusion as an aberration in Fleischmann’s otherwise distinguished career. For it was instead an extreme example of the style that characterized his research: a willingness to suggest bold and provocative ideas, to take risks and to make imaginative leaps that could sometimes yield a rich harvest.

Fleischmann was born in Karlovy Vary (Karlsbad) in Czechoslovakia in 1927. His father was of Jewish heritage and opposed Hitler’s regime; his family fled just before the German invasion, first to the Netherlands and then to England. Fleischmann studied chemistry at Imperial College in London and, after a PhD in electrochemistry, he moved to the University of Newcastle. In 1967 he was appointed to the Faraday Chair of Chemistry at Southampton, where he explored reactions at electrode surfaces.

In 1974, Fleischmann and his coworkers observed unusually intense Raman emission (scattered light shifted in energy by the interaction with molecular vibrational states) from organic molecules adsorbed on the surface of silver electrodes. They did not immediately recognize that the enhancement was caused by the surface, and indeed the mechanism is still not fully understood – but surface-enhanced Raman spectroscopy (SERS) has become a valuable tool for investigating surface chemistry.

Around 1980 Fleischmann and Mark Wightman independently pioneered the use of ultramicroelectrodes just a few micrometres across to study otherwise-inaccessible electrode processes – for example, at low electrolyte concentrations or with very fast rates of reaction. Such innovations gave Fleischmann international repute. In 1985, two years after his early retirement from Southampton, he was elected a Fellow of the Royal Society.

Fleischmann’s longstanding interest in hydrogen surface chemistry on palladium led to the cold fusion experiments. When hydrogen molecules adsorbed onto palladium dissociate into atoms, these atoms can diffuse into the metal lattice, making the metal a ‘sponge’ able to soak up large amounts of hydrogen. Very high pressures of hydrogen can build up – perhaps, Fleischmann wondered, sufficient to trigger nuclear fusion.

Fleischmann’s retirement in 1983 freed him to conduct self-funded experiments at the University of Utah – a location conducive to Fleischmann, a passionate skier – with his former student Stanley Pons. They electrolysed solutions of lithium deuteroxide, collecting deuterium at the palladium cathode, and claimed to measure more heat output than could be accounted for by the energy fed in – a signature, they said, of deuterium fusion within the electrode. On returning in the morning to one experiment left running overnight, they found that the apparatus had been vaporized and the fume cupboard and part of the floor destroyed. Was this a particularly violent outburst of fusion?

Not until 1989 did Fleischmann, Pons and their student Marvin Hawkins move to publish their data. They discovered they were in competition with a team at Brigham Young University in Utah, led by physicist Steven Jones, which was conducting similar studies. Initially Fleischmann and Pons accused Jones of plagiarizing their ideas, but eventually the groups agreed to coordinate their announcements with a joint submission to Nature on 24th March. Yet Fleischmann and Pons first rushed a (highly uninformative) paper into print with the Journal of Electroanalytical Chemistry, organized a press conference on 23rd March, and faxed their paper to Nature that same day without telling Jones.

The rest, as they say, is history, told for example in Frank Close’s Too Hot To Handle (1991) and Gary Taubes’ Bad Science (1993). The fusion claims shocked the world: physicists had been trying for decades, at great expense but with no success, to harness nuclear fusion for energy generation. Now it appeared that chemists had achieved it at a minuscule fraction of the expense, potentially solving the energy crisis. Jones’ paper was eventually published by Nature [Nature 338, 737; 1989]; that of Pons, Fleischmann and Hawkins was withdrawn when the authors professed to be too busy, in the wake of their astounding announcement, to address the reviewers’ comments. When Pons spoke at the spring meeting of the American Chemical Society on 12th April, the atmosphere was jubilant: it was hailed as a triumph of chemistry over physics. Physicists were more sceptical, and pointed out serious problems with Fleischmann and Pons’ claims to have detected the emission of neutrons diagnostic of deuterium fusion.

Accusations that they had manipulated the neutron data were never substantiated, but what really put paid to cold fusion was the persistent failure of other groups to reliably reproduce the purported excess heat generation and other signatures of potential fusion. More accusations, recriminations and general bad behaviour followed: coercion, intimidation, litigation (Pons’ lawyer threatened Utah physicist Michael Salamon with legal action after he published his negative attempts at replication in Nature), withholding of data (Fleischmann refused outright at one meeting of physicists to discuss crucial control experiments), and suspicions of experimental tampering (were some groups spiking their equipment with tritium, a fingerprint of fusion?). The University of Utah sought aggressively to capitalize, throwing $5m at a ‘National Cold Fusion Institute’ that closed only two years after it opened.

Once cold fusion lost its credibility, Fleischmann and Pons moved to France to continue their work with private funding, but later fell out. Now only a few lone ‘believers’ pursue the work. Fleischmann did not distinguish himself in the aftermath, belittling critical peers in interviews and hinting at paranoid conspiracy theories. But perhaps the biggest casualty of cold fusion was electrochemistry itself, suddenly made to seem a morass of charlatanism and poor technique. That was unfair: some of the most authoritative (negative) attempts to replicate the results were conducted by electrochemists.

Fleischmann’s tragedy was almost Shakespearean, not least because he was himself in many ways a sympathetic character: resourceful, energetic, immensely inventive and remembered warmly by collaborators. As Linus Pauling and Fred Hoyle also exemplified, once you’ve been proved right against the odds, it becomes harder to accept the possibility of error. “Many a time in my life I have been accused of coming up with crazy ideas,” he once said. “Fortunately, I'm able to say that, so far, the critics have had to back off.” But although a final reckoning should not let genuine achievements be overshadowed by errors, the blot that cold fusion left on Fleischmann’s reputation is hard to expunge. To make a mistake or a premature claim, even to fall prey to self-deception, is a risk any scientist runs. The true test is how one deals with it.

Friday, August 31, 2012

The chemical brain

Here’s my latest Crucible column for Chemistry World. A techie one, but no harm in that. I also have a feature on nanobubbles in this (September) issue, and will try to stick that up, in extended form, on my website soon.

___________________________________________________________________

Bartosz Grzybowski of Northwestern University in Illinois, who has already established himself as one of the most inventive current practitioners of the chemical art, has unveiled a ‘chemo-informatic’ scheme called Chematica that can stake a reasonable claim to being paradigm-changing. He and his colleagues have spent years assembling the transformations linking chemical species into a vast network that codifies and organizes the known pathways through chemical space. Each node of the network is either a molecule or element, or a chemical reaction. Links connect reactants and products via the nexus of a known reaction. The full network contains around 7 million compound nodes and about the same number of reaction nodes. Grzybowski calls it a “collective chemical brain.”

I predict a mixed reaction from chemists. On the one hand the potential value of such a tool for discovering improved or entirely new synthetic pathways to drugs, materials and other useful products is tremendous, and has already been illustrated by Grzybowski’s team. On the other hand, Chematica seems to imply that chemistry is indeed, as the old jibe puts it, just cookery, and is something now better orchestrated by computer than by chemists.

I’ll come back to that. First let’s look at what Chematica is. Grzybowski first described the network in 2005 [1], when he was mostly concerned with its topological properties rather than with chemical insights. Like the Internet or some social networks, the chemical network has ‘scale-free’ connectivity, meaning that the distribution of nodes with different degrees of connectivity n is a power law: the number of nodes with n links is proportional to n^(-α), where α is a constant. This means that a few very highly connected nodes are the hubs that bind the network together and provide shortcuts. The same structure is also found in the reaction network of compounds in metabolic pathways.
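To show what ‘scale-free’ means in practice, here is a generic sketch (my own, nothing to do with Grzybowski’s actual analysis): for a power-law degree distribution, the count of nodes with n links falls on a straight line in a log-log plot, and the slope of that line recovers -α.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic scale-free degree sequence: P(n) ~ n^(-alpha) with alpha = 2.5 (illustrative).
alpha = 2.5
degrees = np.round(rng.pareto(alpha - 1, size=20000) + 1).astype(int)

# Histogram the degree counts and fit a straight line in log-log space to recover alpha.
values, counts = np.unique(degrees, return_counts=True)
mask = counts > 5                      # ignore poorly sampled high degrees
slope, _ = np.polyfit(np.log(values[mask]), np.log(counts[mask]), 1)
print("estimated alpha:", -slope)      # roughly 2.5
```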

In a trio of new papers the researchers have now started to put the network to use. In the first, they perform an automated trawl for new one-pot reactions that can replace existing multi-step syntheses [2]. The advantages of one-pot processes are obvious: no laborious separation and purification of products after each step, and none of the attendant losses in yield. Identifying potential one-pot processes linking molecular nodes that hitherto lacked a direct connection here means subjecting the relevant reactions to several filtering steps that check for compatibility – for example, checking that a water-solvated synthesis will not unintentionally hydrolyse functional groups. This filtering is painstaking in principle, but very quick once automated.

It is one thing to demonstrate that such one-pot syntheses are possible in principle, but Grzybowski and colleagues have ensured that at least some of those identified work in practice. Specifically, they looked for syntheses of quinoline-based molecules – common components of drugs and dyes – and thiophenes, which have useful electronic and optical properties. Many of the new pathways worked with high yields, in some cases demonstrably higher than those of alternative multi-step syntheses. Some false positives arise from errors in the literature used to build the network.

Another use of Chematica is to optimize existing syntheses – something previously reliant on manual or inexhaustive semi-automated searches. Looking for improved – basically, cheaper – routes to a given target is a matter of stepping progressively backwards from that molecule to preceding intermediates [3]. An algorithm can calculate the costs of all such steps in the network, working recursively backwards to a specified ‘depth’ (maximum number of synthetic steps) and finding the cheapest option. Applied to syntheses conducted by Grzybowski’s company ProChimia, Chematica offered potential savings of up to 45 percent if instituted for 51 of the company’s targets. The greater the number of targets, the greater the savings, because of the economies of shared ingredients and intermediates.
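As a toy illustration of that recursive backward search (my own sketch; Chematica’s actual algorithm and cost model are far more sophisticated), imagine that each compound can either be bought at a catalogue price or made from ingredients, and the cheapest option within a maximum depth is found by memoized recursion:

```python
from functools import lru_cache

# Hypothetical toy network: each product maps to a list of (ingredients, step_cost) options.
REACTIONS = {
    "target": [(("intermediate_A", "intermediate_B"), 5.0)],
    "intermediate_A": [(("feedstock_1",), 2.0), (("feedstock_2", "feedstock_3"), 1.0)],
    "intermediate_B": [(("feedstock_3",), 3.0)],
}
# Catalogue prices for compounds that can simply be bought (illustrative numbers).
PRICES = {"feedstock_1": 10.0, "feedstock_2": 1.5, "feedstock_3": 2.0}

@lru_cache(maxsize=None)
def cheapest(compound, depth=5):
    """Cheapest way to obtain a compound within a maximum number of synthetic steps."""
    best = PRICES.get(compound, float("inf"))        # option 1: buy it
    if depth > 0:
        for ingredients, step_cost in REACTIONS.get(compound, []):
            cost = step_cost + sum(cheapest(i, depth - 1) for i in ingredients)
            best = min(best, cost)                   # option 2: make it from cheaper precursors
    return best

print(cheapest("target"))   # cheapest route to the target within 5 steps (14.5 here)
```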

Finally, and perhaps most controversially, the researchers show how Chematica can be used to identify threats of chemical-weapons manufacture by terrorists [4]. The network can be searched for routes to harmful substances such as nerve agents using unregulated ingredients. Of course, it can also disclose such routes, but as with viral genomic data [5], open access to such data should be the best antidote to the risks they inherently pose.

Does all this, then, mean that synthetic organic chemists are about to be automated? The usual response is to insist that computers will never match human creativity. But that defence is looking increasingly under threat in, say, chess, maths and perhaps even music and visual art. In some ways chemical synthesis is as rule-bound as music if not chess, and thus ripe for an algorithmic approach. Perhaps at least some of the beauty rightly attributed to classic syntheses should be seen as illustrating human ingenuity in the face of tasks for which no better solution then existed. Synthetic schemes designed by humans surely won’t become obsolete any time soon – but there seems no harm in acknowledging that the time may come when the art and creativity of chemistry resides more solidly in our decisions of what to make, and why, than in how we make it.

References

1. M. Fialkowski, K. J. M. Bishop, V. A. Chubukov, C. J. Campbell & B. A. Grzybowski, Angew. Chem. Int. Ed. 44, 7263 (2005).

2. C. M. Gothard et al., Angew. Chem. Int. Ed. online publication 10.1002/anie.201202155 (2012).

3. M. Kowalik et al., Angew. Chem. Int. Ed. online publication 10.1002/anie.201202209 (2012).

4. P. E. Fuller, C. M. Gothard, N. A. Gothard, A. Wieckiewicz & B. A. Grzybowski, Angew. Chem. Int. Ed. online publication 10.1002/anie.201202210 (2012).

5. M. Imai et al., Nature 10.1038/nature10831 (2012).