I have a letter in New Humanist responding to Francis Spufford’s recent defence of his Christian belief, a brief résumé of the case he lays out in his new book. The letter was truncated to the second paragraph, my first and main point having been made in the preceding letter from Leo Pilkington. Here it is anyway.
And while I’m here: I have some small contributions in a nice documentary on Channel 4 tomorrow about Mary Shelley’s Frankenstein. I strolled, as I often do, past Boris Karloff’s blue plaque today, erected on the wall above my local chippy. He was a Peckham Rye boy named William Henry Pratt. Happy Halloween.
______________________________________________________________
Since I’m the sort of atheist who believes that we can and should get on with religious folk, and because I have such high regard for Francis Spufford, I am in what I suspect is the minority of your readers in agreeing with the essence and much of the substance of what he says. It’s a shame, though, that he slightly spoils his case by repeating the spurious suggestion that theists and atheists are mirror images because of their yes/no belief in God. The null position on a proposition that an arbitrary entity exists – one for which there is no objective evidence or requirement, and no obvious way of testing – is not to shrug and say “well, I guess we just don’t know either way.” We are back to Russell’s teapot orbiting the Sun. The reason the teapot argument won’t wash for religious belief is, as Spufford rightly says, that a belief in God is about so many other feelings, values and notions (including doubt and uncertainty), not ‘merely’ the issue of whether one can make the case objectively. While this subjectivity throws the likes of Sam Harris into paroxysms, it’s a part of human experience that we have to deal with.
Spufford is also a little too glib in dismissing the anger that religion arouses. The Guardian’s Comment is Free is a bad example, being a pathological little ecosystem unto itself. Some of that anger stems from religious abuses of human rights and welfare, interference in public life, denial of scientific evidence, and oppression, conformity and censure. All of these malaises will, I am sure, be as deplored by Spufford as they are by non-believers. When religions show themselves capable of putting their own houses in order, it becomes so much easier for atheists to acknowledge (as we should) the good that they can also offer to believers and non-believers alike.
Tuesday, October 30, 2012
Thursday, October 25, 2012
Balazs Gyorffy (1938-2012)
I just heard that the solid-state physicist Balazs Gyorffy, an emeritus professor at Bristol, has died from cancer after a short illness. Balazs was a pioneer of first-principles calculations of electronic structure in alloys, and contributed to the theory of superconductivity in metals. But beyond his considerable scientific achievements, Balazs was an inspirational person, whose energy and passion made you imagine he would be immortal. He was a former Olympic swimmer, and was apparently swimming right up until his illness made it impossible. He was interested in everything, and was a wonderfully generous and supportive man. His attempts to teach me about Green’s functions while I was at Bristol never really succeeded, but he was extremely kind with his time and advice on Hungary when I was writing my novel The Sun and Moon Corrupted. Balazs was a refugee from the 1956 Hungarian uprising, and was an external member of the Hungarian Academy of Sciences. He was truly a unique man, and I shall be among the many others who will miss him greatly.
An old look at Milan
I have no reason for posting this old photo of Milan Cathedral except that I found it among a batch of old postcards (though the photo's an original) and think it is fabulous. I tend to like my cathedrals more minimalist, but this one is gloriously over the top.
Why cancer is smart
This is my most recent piece on BBC Future, though another goes up tomorrow.
_______________________________________________________________
Cancer is usually presented as a problem of cells becoming mindless replicators, proliferating without purpose or restraint. But that underestimates the foe, according to a new paper, whose authors argue that we’ll stand a better chance of combating it if we recognize that cancer cells are a lot smarter and operate as a cooperating community.
One of the authors, physicist Eshel Ben-Jacob of Tel Aviv University in Israel, has argued for some time that many single-celled organisms, whether they are tumour cells or gut bacteria, show a rudimentary form of social intelligence – an ability to act collectively in ways that adapt to the prevailing conditions, learn from experience and solve problems, all with the ‘aim’ of improving their chances of survival. He even believes there is evidence that they can modify their own genomes in beneficial ways.
Some of these ideas are controversial, but others are undeniable. One of the classic examples of a single-celled cooperator, the soil-dwelling slime mold Dictyostelium discoideum, survives a lack of warmth or moisture through cell-to-cell communication that coordinates the behaviour of its cells. Some cells send out pulses of a chemical attractant which diffuse into the environment and trigger other cells to move towards them. The community of cells then forms into complex patterns, eventually clumping together into multicelled bodies that look like weird mushrooms. Some of these cells become spores, entering a kind of suspended animation until conditions improve.
Many bacteria can engage in similar feats of communication and coordination, which can produce complex colony shapes such as vortex-like circulating blobs or exotic branching patterns. These displays of ‘social intelligence’ help the colonies survive adversity, sometimes to our cost. Biofilms, for example – robust, slimy surface coatings that harbour bacteria and can spread infection in hospitals – are manufactured through the cooperation of several different species.
But the same social intelligence that helps bacteria thrive can be manipulated to attack pathogenic varieties. As cyberwarfare experts know, disrupting communications can be deadly. Some strategies for protecting against dangerous bacteria now target their cell-to-cell communications, for example by introducing false signals that might induce cells to eat one another or to dissolve biofilms. So it pays to know what they’re saying to one another.
Ben-Jacob, along with Donald Coffey of the Johns Hopkins University School of Medicine in Baltimore and ‘biological physicist’ Herbert Levine of Rice University in Houston, Texas, thinks that we should be approaching cancer therapy this way too: not by aiming to kill off tumour cells with lethal doses of poisons or radiation, but by interrupting their conversations.
There are several indications that cancer cells thrive by cooperating. One trick that bacteria use for invading new territory, including other organisms, is to use a mode of cell-to-cell communication called quorum sensing to determine how densely populated their colony is: above a certain threshold, they might have sufficient strength in numbers to form biofilms or infect a host. Researchers have suggested that this process is similar to the way cancer cells spread during metastasis. Others think that group behaviour of cancer cells might explain why they can become so quickly resistant to drugs.
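The logic of quorum sensing is simple enough to caricature in a few lines of code. In the toy sketch below, every name and number is invented for illustration (nothing here comes from the paper): each cell secretes a signalling molecule at a fixed rate, so the concentration tracks the local cell density, and the collective behaviour switches on only once that concentration crosses a threshold.

    # Toy quorum-sensing model: signal concentration scales with cell density,
    # and group behaviour switches on above a threshold. Parameters are invented.
    SECRETION_PER_CELL = 0.01   # arbitrary concentration units contributed per cell
    THRESHOLD = 5.0             # concentration above which collective behaviour kicks in

    def colony_behaviour(cell_density):
        signal = SECRETION_PER_CELL * cell_density   # steady-state signal ~ density
        return "form biofilm" if signal > THRESHOLD else "stay solitary"

    for density in (100, 400, 800):
        print(density, "cells ->", colony_behaviour(density))
    # 100 and 400 cells -> stay solitary; 800 cells -> form biofilm

Real quorum-sensing circuits add positive feedback – the signal boosts its own production – which makes the switch far sharper, but the underlying idea is just this kind of density-triggered threshold.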
Cancer cells are very different from bacteria: they are rogue human cells, so-called eukaryotic cells which have a separate compartment for the genetic material and are generally deemed a more advanced type of cell than ‘primitive’ bacteria, in which the chromosomes are just mixed up with everything else. Yet it’s been suggested that, when our cells turn cancerous and the normal processes regulating their growth break down, more primitive ‘single-celled’ styles of behaviour are unleashed.
Primitive perhaps – but still terrifyingly smart. Tumours can trick the body into making new blood vessels to nourish them. They can enslave healthy cells and turn them into decoys to evade the immune system. They seem even able to fool the immune system into helping the cancer to develop. It’s still not clear exactly how they do some of these things. The anthropomorphism that makes cancer cells evil enemies to be ‘fought’ risks distorting the challenge, but it’s not hard to see why researchers succumb to it.
Cancer cells resistant to drugs can and do emerge at random by natural selection in a population. But they may also have tricks that speed up mutation and boost the chances of resistant strains appearing. And they seem able to generate dormant, spore-like forms, as Dictyostelium discoideum and some bacteria do, that produce ‘time-bomb’ relapses even after cancer traces have disappeared in scans and blood tests.
So what’s to be done? Ben-Jacob and colleagues say that if we can crack the code of how cancer cells communicate, we might be able to subvert it. These cells seem to exchange chemical signals, including short strands of the nucleic acid RNA which is known to control genes. They can even genetically modify and reprogramme healthy cells by dispatching segments of DNA. The researchers think that it might be possible to turn this crosstalk of tumour cells against them, inducing the cells to die or split apart spontaneously.
Meanwhile, if we can figure out what triggers the ‘awakening’ of dormant cancer cells, they might be tricked into revealing themselves at the wrong time, after the immune system has been boosted to destroy them in their vulnerable, newly aroused state. Ben-Jacob and colleagues suggest experiments that could probe how this switch from dormant to active cells comes about. Beyond this, perhaps we might commandeer harmless or even indigenous bacteria to act as spies and agent provocateurs, using their proven smartness to outwit and undermine that of cancer cells.
The ‘warfare’ analogy in cancer treatment is widely overplayed and potentially misleading, but in this case it has some value. It is often said that the nature of war has changed over the past several decades: it’s no longer about armies, superior firepower, and battlefield strategy, but about grappling with a more diffuse foe – indeed one loosely organized into ‘cells’ – by identifying and undermining channels of recruitment, communication and interaction. If it means anything to talk of a ‘war on cancer’, then perhaps here too we need to think about warfare in this new way.
Reference: E. Ben-Jacob, D. S. Coffey & H. Levine, Trends in Microbiology 20, 403-410 (2012).
Tuesday, October 16, 2012
Sweets in Boots
Here’s a piece I just wrote for the Guardian’s Comment is Free. Except in this case it isn’t, because comments have been prematurely terminated. That may be rectified soon, if you want to join the rush.
________________________________________
In the 13th century, £164 was an awful lot of money. But that’s how much the ailing Edward I spent on making over two thousand pounds in weight of medicinal syrups. Sugar was rare, and its very sweetness was taken as evidence of its medicinal value. Our word ‘treacle’ comes from theriac, a medieval cure-all made from roasted vipers, which could prevent swellings, clear intestinal blockages, remove skin blemishes and sores, cure fevers, heart trouble, dropsy, epilepsy and palsy, induce sleep, improve digestion, restore lost speech, convey strength and heal wounds. No wonder town authorities monitored the apothecaries who made it, to make sure they didn’t palm people off with substandard stuff.
We like a good laugh at medieval medicine, don’t we? Then we walk into the sweetie shops for grown-ups known as Boots to buy lozenges, pastilles and syrups (hmm, suspiciously olde words, now that I think about it) for our aches, coughs and sneezes. Of course, some of us consider this sugaring of the pill to be prima facie evidence of duping by the drug companies, and we go instead for the bitter natural cures, the Bach remedies and alcoholic tinctures which, like the medieval syphilis cure called guaiac, are made from twigs and wood, cost the earth, and taste vile.
Each to his own. I quite like the sugar rush. And I’m not surprised that Edward I did – on a medieval diet, a spoonful of sugar would probably work wonders for your metabolism: you’d feel like a new person for a few hours until your dropsy kicked in again. This, I surmise, must be why there is Benylin in my medicine cabinet. Because surely I didn’t – did I? – buy it because I thought it would make my cough any better?
An ‘expert panel’ convened by Which? magazine has just announced that “We spend billions on over-the-counter pharmacy products each year but we’ve found evidence of popular products making claims that our experts judged just aren’t backed by sufficient evidence.” Cough syrups are among the worst offenders. They sell like crazy in winter, are mostly sugar (including treacle), and probably do sod all, despite the surreally euphemistic claims of brands such as Benylin that they will make your cough “more productive”.
Let’s be fair – Boots, at least, never claimed otherwise. Its “WebMD” site admits that “The NHS says there’s not much scientific evidence that cough medicines work… The NHS says there are no shortcuts with coughs caused by viral infections. It just takes time for your body to fight off the infection.” Sure, if the syrup contains paracetamol, it might ease your aching head; if there’s any antihistamine in there, your streaming nose and eyes might dry up a bit. But if you want to soothe your throat, honey and lemon is at least as good – the Guardian’s told you that already.
The Which? report also questioned evidence that Seven Seas Jointcare tablets, Adios Herbal Slimming Tablets and Bach Rescue Remedy spray (to “restore inner calm”) do any good. Are you shocked yet?
Consumers deserve protection against charlatans, for sure. But as far as the over-the-counter pharmacy shelf is concerned, you might as well be expecting scientific evidence for palm reading. Can we, in this post-Ben Goldacre age, now ditch the simplistic view that medicine is about the evidence-based products of the pharmaceutical industry versus the crystal healers? That modern conceit ignores the entire history of medicine, in which folk belief, our wish for magical remedies, placebos, diet, fraud, abuse of authority, and the pressures of commerce have always played at least as big a role as anything resembling science. Modern drugs have made life longer and more bearable, but drug companies are no more above fixing the ‘evidence’ than some alternative cures are above ignoring it.
We’re right to be outraged at Big Pharma misbehaving, especially when their evasions and elisions concern drugs with potentially serious side-effects. But the sniffles and coughs that send us grazing in Boots are the little slings and arrows of life, and all we’re doing there is indulging in some pharmacological comfort eating. I’m a fan of analgesics, and my summers are made bearable by antihistamines, but a lot of the rest is merely lifestyle-targeted placebo. There’s no harm in that, but if we are going to be affronted when we find that those saccharine pills and potions won’t cure us, we’ve misunderstood the nature of the transaction.
The nobel art of matchmaking
I have a Nature news story on the economics Nobel prize. Here’s the pre-edited version.
________________________________________________
Two economists are rewarded for the theory and application of how to design markets for money-free transactions
The theory and practice of matching resources to those who need them, in cases where conventional market forces cannot determine the outcome, has won the Nobel prize in economics for Lloyd Shapley of the University of California at Los Angeles and Alvin Roth of Harvard University.
Their work on matching “has applications everywhere”, says economist Atila Abdulkadiroglu of Duke University in Durham, North Carolina. “Shapley's work laid the groundwork, and Roth's work brought the theory to life.”
“This is a terrific prize to a pair of very deserving scholars”, says economist Paul Milgrom of Stanford University in California.
The work of Shapley and Roth shows how to find optimal matches between people or institutions ‘trading’ in commodities that money can’t buy: how to allocate students to schools or universities, say, or to match organ donors to patients.
Universities can’t determine which students enrol simply by setting their fees arbitrarily high, since these are capped. And payments for organ donation are generally prohibited on ethical grounds. In such situations, how can one find matches that are stable, in the sense that no one considers they can do better by seeking a different match?
In the 1960s Shapley and his coworker David Gale analysed the most familiar match-making problem: marriage. They asked how ten men and ten women could be matched such that none would see any benefit in breaking the partnership to make a better match.
The answer was to let one group (men or women) choose their preferred partner, and then let those who were rejected by their first choice make their second-best selection. This process continues until none of the choosers wishes to make another proposal, whereupon the group holding the proposals finally accepts them.
Shapley and Gale (the latter of whom died in 2008) proved that this process always leads to a stable matching [1]. They also found, however, that it works to the advantage of the choosers – that is, those who make the proposals do better than those receiving them.
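For the curious, the deferred-acceptance procedure described above is compact enough to sketch in a few lines of Python. This is only an illustrative rendering with made-up preference lists, not anything taken from the Gale-Shapley paper: proposers work down their preference lists, while receivers provisionally hold the best offer made so far and trade up when a better one arrives.

    # Deferred acceptance (Gale-Shapley), illustrative sketch with invented preferences.
    def deferred_acceptance(proposer_prefs, receiver_prefs):
        """Return a stable matching as a {proposer: receiver} dictionary."""
        # rank[r][p]: how highly receiver r ranks proposer p (0 = most preferred)
        rank = {r: {p: i for i, p in enumerate(prefs)}
                for r, prefs in receiver_prefs.items()}
        next_choice = {p: 0 for p in proposer_prefs}  # next receiver each proposer will try
        held = {}                                     # receiver -> proposer whose offer is held
        free = list(proposer_prefs)

        while free:
            p = free.pop()
            r = proposer_prefs[p][next_choice[p]]     # best receiver p has not yet proposed to
            next_choice[p] += 1
            if r not in held:
                held[r] = p                           # r provisionally holds its first offer
            elif rank[r][p] < rank[r][held[r]]:
                free.append(held[r])                  # r trades up; the jilted proposer re-enters
                held[r] = p
            else:
                free.append(p)                        # r keeps its current offer; p tries again

        return {p: r for r, p in held.items()}

    # Invented example with two proposers ('men') and two receivers ('women')
    men = {'A': ['X', 'Y'], 'B': ['Y', 'X']}
    women = {'X': ['B', 'A'], 'Y': ['A', 'B']}
    print(deferred_acceptance(men, women))   # A is matched with X, B with Y

In this little example each proposer ends up with his first choice and each receiver with her second – exactly the choosers’ advantage noted above.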
“Without the framework Shapley and Gale introduced, we would not be able to think about these problems in sound theoretical terms”, says Abdulkadiroglu.
However, their work was considered little more than a neat academic result until, about 20 years later, Roth saw that it could be applied to situations in the real world. He found that the US National Resident Matching Program, a clearing house for allocating medical graduates to hospitals, used an algorithm similar to Shapley and Gale’s, which avoided the problems caused by hospitals needing to offer internships to students before the students even knew which area they were going to specialize in [2].
But he discovered that the same problem in the UK was addressed with quite different matching algorithms in different regions, some of which were stable and some not [3]. His work persuaded local health authorities to abandon inefficient, unstable practices.
Roth also helped to tailor such matching strategies to specific market conditions – for example, to adapt the allocation of students to hospitals to the constraint that, as more women graduated, students might often be looking for places as a couple. And he showed how to make these matching schemes immune to manipulation by either party in the transaction.
Roth and his coworkers also applied the Gale-Shapley algorithm to the allocation of pupils among schools. “He directly employs the theory in real-life problems”, says Abdulkadiroglu. “This is not a trivial task. Life brings in complications and institutional constraints that are difficult to imagine or study within the abstract world of theory.”
Shapley extended his analysis to cases where one of the parties in the transaction is passive, expressing no preferences – for example, in the allocation of student rooms. David Gale devised a scheme for finding a stable allocation called ‘top trading’, in which agents are given one object each but can swap them for their preferred choice. Satisfied swappers leave the market, and the others continue the swapping until everything has been allocated. In 1974 Shapley and Herbert Scarf showed that this process always leads to a stable allocation [4]. Roth has subsequently used this approach to match patients with organ donors.
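The mechanics of that swapping scheme also fit in a short sketch. The version below is a rough, generic rendering of the ‘top trading’ cycle idea with invented room preferences (the function and variable names are mine, not from the Shapley-Scarf paper): each remaining agent points at the current owner of their favourite remaining object, any cycle of such pointers trades and leaves the market, and the process repeats until everyone is gone.

    # 'Top trading' cycles, illustrative sketch with invented endowments and preferences.
    def top_trading_cycles(endowment, prefs):
        """endowment: {agent: object}; prefs: {agent: [objects, best first]}.
        Return the resulting allocation {agent: object}."""
        owner = {obj: agent for agent, obj in endowment.items()}
        remaining = set(endowment)
        allocation = {}

        while remaining:
            # Each remaining agent points at the owner of their favourite remaining object.
            points_to = {}
            for a in remaining:
                best = next(o for o in prefs[a] if owner[o] in remaining)
                points_to[a] = owner[best]
            # Follow the pointers until someone is revisited: that closes a cycle.
            path, a = [], next(iter(remaining))
            while a not in path:
                path.append(a)
                a = points_to[a]
            cycle = path[path.index(a):]
            # Everyone in the cycle takes the object they were pointing at and leaves.
            for a in cycle:
                allocation[a] = endowment[points_to[a]]
            remaining -= set(cycle)

        return allocation

    rooms = {'Ann': 'room1', 'Bob': 'room2', 'Cat': 'room3'}
    wishes = {'Ann': ['room2', 'room1', 'room3'],
              'Bob': ['room1', 'room3', 'room2'],
              'Cat': ['room3', 'room1', 'room2']}
    print(top_trading_cycles(rooms, wishes))   # Ann and Bob swap; Cat keeps room3

In the toy example Ann and Bob trade rooms while Cat, who already holds her favourite, keeps hers; no group of agents could do better by trading only among themselves, which is the sense in which the outcome is stable.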
All of these situations are examples of so-called cooperative game theory, in which agents seek to align their choices, forming matches and coalitions – as opposed to the more familiar non-cooperative game theory that won Nobels for John Nash (1994), Thomas Schelling (2005) and others, in which agents act independently. “In my view, Shapley has made more than one prize-worthy contribution to game theory”, says Milgrom, “but his work on matching has the greatest economic significance.”
With economic theory signally failing to bring order and stability to the world’s financial markets, it’s notable that the Nobel committee has chosen to reward work that offers practical solutions in ‘markets’ in which money is of little consequence. The work of Shapley and Roth shows that there is room for economic theory outside the ruthless cut-and-thrust of money markets – and perhaps, indeed, that in a more cooperative world it can be more effective.
References
1. Gale, D. & Shapley, L. S. American Mathematical Monthly 69, 9-15 (1962).
2. Roth, A. E. Journal of Political Economy 92, 991-1016 (1984).
3. Roth, A. E. American Economic Review 81, 415-440 (1991).
4. Shapley, L.S. & Scarf, H. Journal of Mathematical Economics 1, 23-37 (1974).
Monday, October 15, 2012
A little help for my friends
It’s sometimes said in defence of J. K. Rowling that even indifferent writing can take children towards better fare. I have no idea, from very limited contact with Rowling, whether that is likely to apply there, but the principle worked for me in the case of Michael Moorcock – except that even when he was working at his fastest and pulpiest in the early 1970s, with Elric doing his angst-ridden thing to keep the wolf from Moorcock’s door, his writing was never actually indifferent but bursting with bonkers energy, always managing to imply (as with Ballard, in a very different way) that there was a hefty mind behind what the garish book covers tried to sell as science-fantasy schlock. And so it was: Jerry Cornelius pointed to Burroughs (William, not Edgar Rice) and modernist experimentation, Behold the Man headed towards Jung, the Dancers at the End of Time brought up Peake and Goethe, thence to Dickens and Dostoevsky, and after that you were on your own. Which kind of means that when Moorcock started writing literary novels like Mother London, that was no more than his fans expected.
Which is perhaps a verbose way of saying that, when my friend Henry Gee garners praise from Moorcock (whom he’d managed to persuade to write a Futures piece for Nature) for his sci-fi trilogy The Sigil, I get a thrill of vicarious pleasure. It’s grand enough already that Henry has conceived of a blockbusting space-opera trilogy, now graced with E. E. ‘Doc’ Smith-style covers and with what seems to be the sort of outrageously cosmic plotline that could only have been hatched by a Lovecraft fan ensconced in the wilds of Cromer (I’ve seen only the first volume, so I don’t know where the story ends up – only that this is a Grand Concept indeed). But to see it praised by Moorcock, Kim Stanley Robinson and Ian Watson is a great pleasure. And so here, because my blog is among other things unashamedly a vehicle for puffing my friends, is an advert for Henry’s deliciously retro literary triple album.
And while I am singing praises, I have been long overdue in giving a plug for the album Raga Saga, which features string quartet arrangements of South Indian classical music by V. S. Narasimhan and V. R. Sekar. The CD’s title is perhaps dodgy; the rest is certainly not. This is a fascinating blend of Indian classical tradition and Western orchestration. I’m nervous that my unfamiliarity with this tradition – I know a little about the theory behind some of this music, but have very little exposure to it – leaves me vulnerable to that common Western trait of revelling in a vague “exoticism” without any deep appreciation of what is actually happening in the music. I’ve no doubt my enjoyment has an element of this. But it does seem to me that this particular example of “east meets west” brings something interesting and valuable to both. Narasimhan’s brother Vasantha Iyengar told me about this recording, and he says that:
“My brother lives in Chennai, India and is a professional violinist and composer. He works for the film industry to make a living but is passionate about and has been trained in Western and Indian music. Because of this combination, he always heard the beautiful Indian melodies with harmony in his head and started trying out this idea. He has been working on this kind of style since the year 2000. In 2005, to his utter pleasant surprise, he got email from world class musicians, Yo Yo Ma, Zubin Mehta and his violinist hero, Vengerov, appreciating his quartet work. He has been very encouraged about continuing with this pioneering work. It is still difficult to spread the message that great music can be made with this kind of blend and of course to get attention from companies like Sony for a recording. So my son, just out of business school has taken it upon himself to help the uncle out to bring out his vision: he has built the website stringtemple and is doing his best.”
I hope this effort is still working out: it deserves to.
Sunday, October 14, 2012
Quantum optics strikes again
Here’s a piece I wrote for the Prospect blog on the physics Nobel. For my Prospect article on the renaissance of interest in the foundations of quantum theory, see here.
____________________________________________________________
There’s never been a better time to be a quantum physicist. The foundations of quantum theory were laid about a hundred years ago, but the subject is currently enjoying a renaissance as modern experimental techniques make it possible to probe fundamental questions that were left hanging by the subject’s originators, such as Albert Einstein, Niels Bohr, Erwin Schrödinger and Werner Heisenberg. We are now not only getting to grapple with the alleged weirdness of the quantum world, but also putting its paradoxical principles to practical use.
This is reflected in the fact that three physics Nobel prizes have been awarded since 1997 in the field of quantum optics, the most recent going this year to Serge Haroche of the Collège de France in Paris and David Wineland of the National Institute of Standards and Technology in Boulder, Colorado. It’s ‘quantum’ because the work of these two scientists is concerned with examining the way atoms and other small particles are governed by quantum rules. And it’s ‘optics’ because they use light to do it. Indeed, light is itself described by quantum physics, being composed (as Einstein’s Nobel-winning work of 1905 showed) of packets of energy called photons. The word ‘quantum’ was coined by Max Planck in 1900 to describe this discrete ‘graininess’ of the world at the scale of atoms.
The basic principle of a quantum particle is that its energy is constrained to certain discrete amounts, rather than being changeable gradually. Whereas a bicycle wheel can spin at any speed (faster speeds corresponding to more energy), a quantum wheel may rotate only at several distinct speeds. And it may jump between them only if supplied with the right amount of energy. Atoms make these ‘quantum jumps’ between energy states when they absorb photons with the right energy – this in turn being determined by the photon’s wavelength (light of different colours has different wavelengths).
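To put rough numbers on ‘the right amount of energy’: a photon’s energy is Planck’s constant times the speed of light divided by its wavelength, E = hc/λ. Here is a quick back-of-the-envelope check in Python – the constants are rounded and the wavelengths simply picked to span the visible range.

    # Photon energy from wavelength: E = h*c / wavelength (illustrative numbers only)
    h = 6.626e-34   # Planck's constant, joule-seconds
    c = 2.998e8     # speed of light, metres per second
    eV = 1.602e-19  # one electronvolt in joules

    for wavelength_nm in (400, 550, 700):          # violet, green and red light
        E = h * c / (wavelength_nm * 1e-9)         # energy in joules
        print(f"{wavelength_nm} nm -> {E/eV:.2f} eV")
    # Roughly 1.8 to 3.1 eV across the visible range - about the spacing of the outer
    # electronic energy levels of atoms, which is why so many atomic transitions sit
    # in or near the visible.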
Scientists since Planck’s time have been using light to study these quantum states of atoms. The trouble is that this entails changing the state in order to observe it. Haroche and Wineland have pioneered methods of probing quantum states without destroying them. That’s important not just to examine the fundamentals of quantum theory but for some applications of quantum behaviour, such as high-precision atomic clocks (central to GPS systems) and superfast quantum computers.
Wineland uses ‘atom traps’ to capture individual electrically charged atoms (ions) in electric fields. One counter-intuitive conclusion of quantum theory is that atoms can exist in two or more different quantum states simultaneously, called superpositions. These are generally very delicate, and are destroyed when we try to look at them. But Wineland has mastered ways to probe superpositions of trapped ions with laser light without unravelling them. Haroche does the opposite: he traps individual photons of light between two mirrors, and fires atoms through the trap to detect the photon’s quantum state without disturbing it.
‘Reading out’ quantum states non-destructively is a trick needed in quantum computers, in which information is encoded in quantum superpositions so that many different states can be examined at once – a property that would allow some problems to be solved extremely fast. Such a ‘quantum information technology’ is steadily becoming reality, and it is doubtless this combination of fundamental insight and practical application that has made quantum optics so popular with Stockholm. Quantum physics might still seem other-worldly, but we’ll all be making ever more use of it.
Friday, October 12, 2012
Don't take it too hard
This one appeared yesterday on Nature news.
__________________________________________________
A study of scientific papers’ histories from submission to publication unearths some unexpected patterns
Just had your paper rejected? Don’t worry – that might boost its eventual citation tally. An excavation of the usually hidden trajectories of scientific papers from journal to journal before publication has found that papers published in a journal after having first been submitted and rejected elsewhere receive significantly more citations on average than ones submitted only to that journal.
This is one of the unexpected insights offered by the study, conducted by Vincent Calcagno of the French Institute for Agricultural Research in Sophia-Antipolis and his colleagues [1]. They have tracked the submission histories of 80,748 scientific articles published in 923 journals between 2006 and 2008, based on the information provided by the papers’ authors.
Using this information, the researchers constructed a network of manuscript flows: a link exists between two journals if a manuscript initially submitted to one of them was rejected and subsequently submitted to the other. The links therefore have a directional character, like flows in a river network.
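In data terms that construction is straightforward. The sketch below uses invented submission histories (the real study worked from author-reported histories for roughly 80,000 papers, and its exact bookkeeping may differ): each consecutive pair of journals in a paper’s history becomes a directed edge, and the counts give the flow along each link.

    # Building a directed journal-to-journal 'manuscript flow' network.
    # The histories here are invented, purely to show the bookkeeping.
    from collections import Counter

    histories = [
        ["Nature", "PLoS Biology", "J. Theor. Biol."],   # rejected twice before publication
        ["Science", "PLoS Biology"],                     # rejected once
        ["J. Theor. Biol."],                             # published where first submitted
    ]

    edges = Counter()
    for journals in histories:
        for src, dst in zip(journals, journals[1:]):     # consecutive submissions give an edge
            edges[(src, dst)] += 1

    print(edges)
    # e.g. Counter({('Nature', 'PLoS Biology'): 1, ('PLoS Biology', 'J. Theor. Biol.'): 1,
    #               ('Science', 'PLoS Biology'): 1})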
“The authors should be commended for assembling this previously hidden data”, says physicist Sidney Redner of Boston University, a specialist on networks of scientific citation.
Some of what Calcagno and colleagues found was unsurprising. On the whole, the network was modular, composed of distinct clusters that corresponded to subject categories, such as plant sciences, genetics and developmental biology, and with rather little movement of manuscripts between journals in different categories.
It’s no surprise, either, that the highest-impact journals, such as Nature and Science, are central to the network. What was less expected is that these journals publish a higher proportion of papers previously submitted elsewhere, relative to more specialized and lower-impact publications.
“We expected the opposite trend, and the result is at first sight paradoxical”, says Calcagno. But Michael Schreiber, an expert in bibliometrics at the Technical University of Chemnitz in Germany, argues that this “is not surprising if you turn it around: it means that lower-impact journals get fewer resubmissions.” For one thing, he says, there are more low-impact journals, so resubmissions are more widely spread. And second, low-impact journals will have a lower threshold for acceptance and so will accept more first-time submissions.
On the whole, however, there are surprisingly few resubmissions. Three-quarters of all published papers appear in the journal to which they are first submitted. This suggests that scientists are rather efficient at figuring out where their papers are best suited. Calcagno says he found this surprising: “I expected more resubmissions, in view of the journal acceptance rates I was familiar with.”
Although the papers in this study were all in the biological sciences, the findings show some agreement with a previous study of papers submitted to the leading chemistry journal Angewandte Chemie, which found that most of those rejected ended up being published in journals with a lower impact factor [2].
Whether the same trends will be found for other disciplines remains to be seen, however. “There are clear differences in publication practices of, say, mathematics or economics”, says Calcagno, and he thinks these might alter the proportions of resubmissions.
Perhaps the most surprising finding of the work is that papers published after having been previously submitted to another journal are more highly cited on average than papers in the same journal that haven’t been – regardless of whether the resubmissions moved to journals with higher or lower impact.
Calcagno and colleagues think that this reflects the improving influence of peer review: the input from referees and editors makes papers better, even if they get rejected initially.
It’s a heartening idea. “Given the headaches encountered during refereeing by all parties involved, it is gratifying that there is some benefit, at least by citation counts”, says Redner.
But that interpretation has yet to be verified, and contrasts with previous studies of publication histories which found that very few manuscripts change substantially between initial submission and eventual publication [2].
Nonetheless, there is apparently some reason to be patient with your paper’s critics – they’ll do you good in the end. “These results should help authors endure the frustration associated with long resubmission processes”, say the researchers.
On the other hand, the conclusions that Schreiber draws for journal editors might please authors less: “Reject more, because more rejections improve quality.”
References
1. Calcagno, V. et al., Science Express doi: 10.1126/science.1227833 (2012).
2. Bornmann, L. & Daniel, H.-D. Angew. Chem. Int. Ed. 47, 7173-7178 (2008).
The lightning seeds
Here’s my previous piece for BBC Future. A new one just went up – will add that soon. This Center for Lightning Research in Florida looks fairly awesome, as this picture shows – that’s what I call an experiment!
________________________________________________________________
It seems hard to believe that we still don’t understand what causes lightning during thunderstorms – but that’s a fact. One idea is that strikes are triggered by particles streaming into the atmosphere from space, which release showers of electrons that seed the discharge. A new study interrogates that notion and finds that, if there’s anything in it, it’s probably not quite in the way we thought.
Famously, Benjamin Franklin was one of the first people to investigate how lightning is triggered. He was right enough to conclude that lightning is a natural electrical discharge – those were the early days of harnessing electricity – but it’s not clear that his celebrated kite-and-key experiment ever went beyond a mere idea, not least because the kite was depicted, in Franklin’s account, as being flown – impossibly – out of a window.
In some ways we’ve not got much further since Franklin. It’s not yet agreed, for example, how a thundercloud gets charged up in the first place. Somehow the motions of air, cloud droplets, and precipitation (at that altitude, ice particles) conspire to separate positive from negative charge at the scale of individual molecules. It seems that ice particles acquire electrical charge as they collide, rather as rubbing can induce static electricity, and that somehow smaller ice particles tend to become positively charged while larger ones become negatively charged. As the small particles are carried upwards by convection currents, the larger ones sink under gravity, and so their opposite charges get separated, creating an electrical field. A lightning strike discharges this field – it is basically a gigantic spark jumping between the ‘live wire’ and the ‘earth’ of an electrical circuit, in which the former is the charged cloud and the latter is literally the earth.
While many details of this process aren’t at all clear, one of the biggest mysteries is how the spark gets triggered. For the electrical fields measured in thunderclouds don’t seem nearly big enough to induce the so-called ‘electrical breakdown’ needed for a lightning strike, in which air along the lightning path becomes ionized (its molecules losing or gaining electrons to become electrically charged). It’s rather as if a spark were to leap spontaneously out of a plug socket and hit you – the electric field just isn’t sufficient for that to happen.
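For orientation, the commonly quoted order-of-magnitude figures (textbook values, not numbers taken from the new paper) make the mismatch plain:

$$E_{\mathrm{breakdown}} \approx 3\times10^{6}\ \mathrm{V\,m^{-1}}\ \text{(dry air at sea level)} \quad\text{vs.}\quad E_{\mathrm{measured\ in\ clouds}} \approx (1\text{–}4)\times10^{5}\ \mathrm{V\,m^{-1}},$$

roughly an order of magnitude short, even allowing for the lower breakdown threshold of the thinner air at cloud altitude.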
Something is therefore needed to ‘seed’ the lightning discharge. In 1997 Russian scientist Alexander Gurevich and his coworkers in Moscow suggested that perhaps the seed is a cosmic ray: a particle streaming into the atmosphere from outer space at high energy. These particles – mostly protons and electrons – pervade the universe, being produced in awesomely energetic astrophysical processes, and they are constantly raining down on Earth. If a cosmic ray collides with an air molecule, this can kick out a spray of fundamental particles and fragments of nuclei. Those in turn interact with other molecules, ionizing them and generating a shower of electrons.
In the electric field of a thundercloud, these electrons are accelerated, much as particles are in a particle accelerator, creating yet more energetic collisions in a ‘runaway’ process that builds into a lightning strike. This process is also expected to produce X-rays and gamma-rays, which are spawned by ‘relativistic’ electrons that have speeds approaching the speed of light. Since bursts of these rays have been detected by satellites during thunderstorms, Gurevich’s idea of cosmic-ray-induced lightning seemed plausible.
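Schematically (this is the generic form of such an avalanche, not a result quoted in the paper), the number of energetic electrons grows exponentially with the distance travelled along the field,

$$N(z) \approx N_{0}\,e^{z/\lambda},$$

where the avalanche length $\lambda$ shortens as the electric field strengthens; the question is whether realistic fields and path lengths allow enough e-foldings to tip the air into breakdown.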
If it’s right, the avalanche of electrons should also generate radio waves, which would be detectable from the ground. Three years ago Joseph Dwyer of the Florida Institute of Technology began trying to detect such radio signals from thunderstorms, as well as using arrays of particle detectors to look for the showers of particles predicted from cosmic-ray collisions. These and other studies by Dwyer and other groups are still being conducted (literally) at the International Center for Lightning Research and Testing at the US Army base of Camp Blanding in Florida.
But meanwhile, Dwyer has teamed up with Leonid Babich and his colleagues at the Russian Federal Nuclear Center in Sarov to delve further into the theory of Gurevich’s idea. (The Russian pre-eminence in this field of the electrical physics of the atmosphere dates from the cold-war Soviet era.) They have asked whether the flux of high-energy cosmic rays, with their accompanying runaway electron avalanches, is sufficient to boost the conductivity of air and cause a lightning strike.
To do that, the researchers have worked through the equations describing the chances of cosmic-ray collisions, the rate of electron production and the electric fields this induces. The equations are too complicated to be solved by hand, but a computer can crunch through the numbers. And the results don’t look good for Gurevich’s hypothesis: runaway electron avalanches produced by cosmic-ray showers just don’t seem capable of producing electrical breakdown of air and lightning discharge.
However, all is not lost. As well as the particle cascades caused by collisions of high-energy cosmic rays, the atmosphere can also be electrified by the effects of cosmic rays with lower energy, which are more plentiful. When these collide with air molecules, the result is nothing like as catastrophic: they simply ionize the molecules. But a gradual build-up of such ionized particles within a thundercloud could, according to these calculations, eventually produce a strong enough electrical field to permit a lightning discharge. That possibility has yet to be investigated in detail, but Dwyer and colleagues think that it leaves an avenue still open for cosmic rays to lie at the origin of thunderbolts.
Paper: L. P. Babich, E. I. Bochkov, J. R. Dwyer & I. M. Kutsyk, Journal of Geophysical Research 117, A09316 (2012).
Monday, October 08, 2012
Chemists get the blues
Just got back from judging the Chemistry World science writing competition. Makes me feel old, or perhaps just reminds me that I am. Anyway, many congratulations to the winner Chris Sinclair, whose article I believe will appear soon in Chemistry World. Meanwhile, here is my last Crucible column.
__________________________________________________
“Ultramarine blue is a colour illustrious, beautiful, and most perfect, beyond all other colours”, wrote the Italian artist Cennino Cennini in the late fourteenth century. He and his contemporaries adored this mineral pigment for its rich, deep lustre. But they didn’t use it much, at least not unless they had a particularly rich client, because it was so costly. As the name implies, it came from ‘over the seas’ – all the way from what is now Afghanistan, where mines in the remote region of Badakhshan were the only known source of the parent mineral, lapis lazuli, for centuries. Not only was ultramarine expensive to import, but it was laborious to make from the raw material, in a process of grinding and repeated washing that separated the blue colorant from impurities. So ultramarine could cost more than its weight in gold, and painters reserved it for the most precious parts of their altarpieces, especially the robes of the Virgin Mary.
Blue has always been a problem for artists. One of the first synthetic pigments, Egyptian blue (calcium copper silicate), was pale. The best mineral alternative to ultramarine, called azurite (hydrous copper carbonate), was more readily accessible but greenish rather than having ultramarine’s glorious purple-reddish tinge. Around 1704 a misconceived alchemical experiment yielded Prussian blue (iron ferrocyanide), which is blackish, prone to discolour, and decomposes to hydrogen cyanide under mildly acidic conditions. The discovery of cobalt blue (cobalt aluminate) in 1802, followed by a synthetic route to ultramarine in 1826, seemed to solve these problems of hue, stability and cost, but even these ‘artificial’ blues have drawbacks: cobalt is rather toxic, and ultramarine is sensitive to heat, light and acid, which limits its use in some commercial applications.
This is why the identification of a new inorganic blue pigment in 2009 looked so promising. Mas Subramanian and coworkers at Oregon State University found that trivalent manganese ions produce an intense blue colour, with the prized ‘reddish’ shade of ultramarine, when they occupy a trigonal bipyramidal site in metal oxides [1]. The researchers substituted Mn3+ for some indium ions in yttrium indium oxide (YInO3), forming a solid solution of YInO3 and YMnO3, which has a blue colour even though the two oxides themselves are white and black respectively. The depth of the colour varies from pale to virtually black as the manganese content is increased, although it is significantly blue even for only about 2 percent substitution. The researchers found that inserting manganese into other metal oxides with the same coordination geometry also offers strong blues. Meanwhile, similar substitutions of iron (III) and copper (II) generate bright orange and green pigments [2,3]. Those are traditionally less problematic, however, and while the materials may prove to have useful magnetic properties, it’s the blue that has attracted colour manufacturers.
Producing a commercially viable pigment is much more than a matter of finding a strongly coloured substance. It must be durable, for example. Although ultramarine, made industrially from cheap ingredients, is now available in quantities that would have staggered Titian and Michelangelo, it fades in direct sunlight because the sodalite framework is degraded and the sulphur chromophores are released and decompose – a process only recently understood [4]. This rules out many uses for exterior coatings. In contrast, the manganese compound has good thermal, chemical and light stability.
One of the key advantages of the YIn1-xMnxO3 compounds over traditional blues, however, is their strong reflectivity in the near-infrared region. Many other pigments, including cobalt blue and carbon black, have strong absorption bands here. This means that surfaces coated with these pigments heat up when exposed to strong sunlight. Building roofs coloured with such materials become extremely hot and can increase the demand for air conditioning in hot climates; instrument panels and steering wheels of cars may become almost too hot to touch. That’s why there is a big industrial demand for so-called ‘cool’ pigments, which retain their absorbance in the visible region but have low absorbance in the infrared. These can feel noticeably cooler when exposed to sunlight.
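A crude radiation-only estimate (illustrative numbers of my own, not from the pigment studies) shows why that reflectivity matters:

```python
# Equilibrium temperature of a sunlit surface that sheds heat only by
# re-radiating it, for a conventional dark coating versus a coating that
# reflects most of the near-infrared. Real surfaces also lose heat by
# convection, so actual temperatures are lower, but the gap is the point.
SIGMA = 5.67e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
SOLAR = 1000.0         # typical peak solar irradiance, W m^-2
EMISSIVITY = 0.9       # long-wave emissivity, assumed equal for both coatings

for label, solar_absorptance in [("conventional dark pigment", 0.90),
                                 ("NIR-reflective 'cool' pigment", 0.55)]:
    # radiative balance: absorbed sunlight = re-radiated power
    t_kelvin = (solar_absorptance * SOLAR / (EMISSIVITY * SIGMA)) ** 0.25
    print(f"{label}: about {t_kelvin - 273.15:.0f} C in full sun (radiation-only estimate)")
```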
This aspect in particular has motivated the Ohio-based pigment company Shepherd Color to start exploring the commercial potential of the new blue pigment. One significant obstacle is the price of the indium oxide (In2O3) used as a starting material. This is high, because it is produced (mostly in China) primarily for the manufacture of the transparent conductive oxide indium tin oxide for electronic displays and other optoelectronic applications. Those uses demand that the material be made with extremely high purity (around 99.999 percent), which drives up the cost. In principle, the low-purity In2O3 that would suffice for making YIn1-xMnxO3 could be considerably cheaper, but is not currently made at all as there is no market demand.
That’s why Subramanian and colleagues are now trying to find a way of eliminating the indium from their manganese compounds – to find a cheaper host that can place the metal atoms in the same coordination environment. If they succeed, it’s possible that we’ll see yet another revolution in the chemistry of the blues.
1. A. E. Smith et al., J. Am. Chem. Soc. 131, 17084-17086 (2009).
2. A. E. Smith, A. W. Sleight & M. A. Subramanian, Mater. Res. Bull. 46, 1-5 (2011).
3. P. Jiang, J. Li, A. W. Sleight & M. A. Subramanian, Inorg. Chem. 50, 5858-5860 (2011).
4. E. Del Federico et al., Inorg. Chem. 45, 1270-1276 (2006).
Thursday, October 04, 2012
The cost of useless information
This was a damned difficult story to write for Nature news, and the published version is a fair bit different to this original text. I can’t say which works best – perhaps it’s just one of those stories for which it’s helpful to have more than one telling. Part of the difficulty is that, to be honest, the real interest is fundamental, not in terms of what this idea can do in any applied sense. Anyway, I’m going to append to this some comments from coauthor David Sivak of the Lawrence Berkeley National Laboratory, which help to explain the slightly counter-intuitive notion of proteins being predictive machines with memories.
__________________________________________________
Machines are efficient only if they collect information that helps them predict the future
The most efficient machines remember what’s happened to them, and use that memory to predict what the future holds. This conclusion of a new study by Susanne Still of the University of Hawaii at Manoa and her coworkers [1] should apply equally to ‘machines’ ranging from molecular enzymes to computers and even scientific models. It not only offers a new way to think about processes in molecular biology but might ultimately lead to improved computer model-building.
“[This idea] that predictive capacity can be quantitatively connected to thermodynamic efficiency is particularly striking”, says chemist Christopher Jarzynski of the University of Maryland.
The notion of constructing a model of the environment and using it for prediction might feel perfectly familiar for a scientific model – a computer model of weather, say. But it seems peculiar to think of a biomolecule such as a motor protein doing this too.
Yet that’s just what it does, the researchers say. A molecular motor does its job by undergoing changes in the conformation of the proteins that comprise it.
“Which conformation it is in now is correlated with what states the environment passed through previously”, says Still’s coworker Gavin Crooks of the Lawrence Berkeley National Laboratory in California. So the state of the molecule at any instant embodies a memory of its past.
But the environment of a biomolecule is full of random noise, and there’s no gain in the machine ‘remembering’ the fine details of that buffeting. “Some information just isn't useful for making predictions”, says Crooks. “Knowing that the last coin toss came up heads is useless information, since it tells you nothing about the next coin toss.”
If a machine does store such useless information, eventually it has to erase it, since its memory is finite – for a biomolecule, very much so. But according to the theory of computation, erasing information costs energy: it results in heat being dissipated, which makes the machine inefficient.
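The result being invoked here is Landauer’s principle: erasing one bit of information at temperature T dissipates at least $k_B T \ln 2$ of heat. A quick worked number for room temperature, for orientation only:

$$E_{\min} = k_B T \ln 2 \approx (1.38\times10^{-23}\ \mathrm{J\,K^{-1}})\times(300\ \mathrm{K})\times 0.693 \approx 2.9\times10^{-21}\ \mathrm{J\ per\ bit}.$$

Tiny, but not zero, and it adds up for a machine that keeps writing and wiping useless records.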
On the other hand, information that has predictive value is valuable, since it enables the machine to ‘prepare’ – to adapt to future circumstances, and thus to work optimally. “My thinking is inspired by dance, and sports in general, where if I want to move more efficiently then I need to predict well”, says Still.
Alternatively, think of a vehicle fitted with a smart driver-assistance system that uses sensors to anticipate its imminent environment and react accordingly – to brake in an optimal manner, and so maximize fuel efficiency.
That sort of predictive function costs only a tiny amount of processing energy compared with the total energy consumption of a car. But for a biomolecule it can be very costly to store information, so there’s a finely balanced tradeoff between the energetic cost of information processing and the inefficiencies caused by poor anticipation.
“If biochemical motors and pumps are efficient, they must be doing something clever”, says Still. “Something in fact tied to the cognitive ability we pride ourselves with: the capacity to construct concise representations of the world we have encountered, which allow us to say something about things yet to come.”
This balance, and the search for concision, is precisely what scientific models have to negotiate too. Suppose you are trying to devise a computer model of a complex system, such as how people vote. It might need to take into account the demographics of the population concerned, and networks of friendship and contact by which people influence each other. Might it also need a representation of mass media influences? Of individuals’ socioeconomic status? Their neural circuitry?
In principle, there’s no end to the information the model might incorporate. But then you have an almost one-to-one mapping of the real world onto the model: it’s not really a model at all, but just a mass of data, much of which might end up being irrelevant to prediction.
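A familiar statistical caricature of the problem (a generic overfitting sketch, not the information-theoretic machinery of the paper) makes the point: a model with too much memory ends up fitting the noise in the data it has seen, and typically predicts unseen data worse than a simpler one.

```python
# Toy illustration: a low-degree and a high-degree polynomial are fitted to
# the same noisy samples of a sine curve; the flexible model "remembers" the
# noise and usually predicts fresh points less well (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 2.0 * np.pi, 20)
y_train = np.sin(x_train) + 0.3 * rng.normal(size=x_train.size)

x_test = np.linspace(0.0, 2.0 * np.pi, 200)
y_true = np.sin(x_test)                      # what we actually want to predict

for degree in (3, 15):
    model = np.polynomial.Polynomial.fit(x_train, y_train, degree)
    rms_error = np.sqrt(np.mean((model(x_test) - y_true) ** 2))
    print(f"degree {degree:2d} polynomial: prediction RMS error = {rms_error:.3f}")
```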
So again the challenge is to achieve good predictive power without remembering everything. “This is the same as saying that a model should not be overly complicated – that is, Occam's Razor”, says Still. She hopes this new connection between prediction and memory might guide intuition in improving algorithms that minimize the complexity of a model for a specific desired predictive power, used for example to study phenomena such as climate change.
References
1. Still, S., Sivak, D. A., Bell, A. J. & Crooks, G. E. Phys. Rev. Lett. 109, 120604 (2012).
David Sivak’s comments:
On the level of a single biomolecule, the basic idea is that a given protein under given environmental conditions (temperature, pH, ionic concentrations, bound/unbound small molecules, conformation of protein binding partners, etc.) will have a particular equilibrium probability distribution over different conformations. Different protein sequences will have different equilibrium distributions for given environmental conditions. For example, an evolved protein sequence is more likely to adopt a folded globular structure at ambient temperature, as compared to a random polypeptide. If you look over the distribution of possible environmental conditions, different protein sequences will differ in the correlations between their conformational state and particular environmental variables, i.e., the information their conformational state stores about the particular environmental variables.
When the environmental conditions change, that equilibrium distribution changes, but the actual distribution of the protein shifts to the new equilibrium distribution gradually. In particular, the dynamics of interconversion between different protein conformations dictates how long it takes for particular correlations with past environmental variables to die out, i.e., for memory of particular aspects of the environment to persist. Thus the conformational preferences (as a function of environmental conditions) and the interconversion dynamics determine the memory of particular protein sequences for various aspects of their environmental history.
One complication is that this memory, this correlation with past environmental states, may be a subtle phenomenon, distributed over many detailed aspects of the protein conformation, rather than something relatively simple like the binding of a specific ion. So, we like to stress that the model is implicit. But it certainly is the case that an enzyme mutated at its active site could differ from the wild-type protein in its binding affinity for a metal ion, and could also have a different rate of ion dissociation. Since the presence or absence of this bound metal ion embodies a memory of past ion concentrations, the mutant and wild-type enzymes would differ in their memory.
For a molecular motor, there are lots of fluctuating quantities in the environment, but only some of these fluctuations will be predictive of things the motor needs for its function. An efficient motor should not, for example, retain memory of every water molecule that collides with it, even if it could, because that will provide negligible information of use in predicting future fluctuations of those quantities that are relevant for the motor's functioning.
In vivo, the rotary F0F1-ATP synthase is driven by protonmotive flow across the mitochondrial membrane. The motor could retain conformational correlations with many aspects of its past history, but this analysis says that the motor will behave efficiently if it remembers molecular events predictive of when the next proton will flow down its channel, and loses memory of other molecular events irrelevant to its function. In order to efficiently couple that flow to the functional role of the motor, synthesizing ATP, the motor should retain information about the past that is predictive of such protonmotive flow, but lose any correlation with irrelevant molecular events, such as incidental collisions by water molecules.
But we are hesitant to commit to any particular example. We are all thinking about good concrete instantiations of these concepts for future extensions of this work. Right now, the danger of very specific examples like the F0F1 motor is that people who know much more about the particular system than we do might get bogged down in arguing the details, such as what exactly drives the motor, whether that driving involves conformational selection or induced fit, how concerted the mechanism is, etc., when the main point is that this framework applies regardless of the exact manner in which the system and environment are instantiated. Not to mention the fact that some subtle solvent rearrangements at the mouth of the channel may in fact be very predictive of future proton flow.