Wednesday, May 25, 2011
Steve Jones gets unnatural
I’ve just discovered a review of Unnatural in the Lancet by Steve Jones. As one might expect, he has an interesting and quite particular take on it. It’s one with which, happily, I agree.
Monday, May 23, 2011
Belated Prospect
I realise that I meant to put up earlier my May column from Prospect. Almost time for the June column now, but here goes.
________________________________________________________
The notion that God has an inordinate fondness for beetles, credited to the biologist J. B. S. Haldane, retains a whiff of solipsism. For beetles are not so unlike us: multicellular, big enough to see, and legged. But God surely favours single-celled organisms far more. Beetles and humans occupy two nearby tips on the tree of life, while single-celled life forms have two of the three fundamental branches all to themselves: bacteria and archaea, so alike that it was only in the 1970s that the latter were awarded their own branch. Archaea have a different biochemistry to bacteria – the metabolism of some, for instance, produces methane – and they are found everywhere, including the human gut.
Our place on the ‘tree of life’ now looks like it may be even more insignificant, for a team at the University of California, working with genomics pioneer Craig Venter, claims to have found hints of a fourth major branch in the tree, again populated only by single-celled organisms. These branches, called domains, are the most basic divisions in the Linnaean system of biological classification. We share our domain, the eukaryotes (distinguished by the way their cells are structured), with plants, fungi and yet more monocellular species.
Like most things Venter is involved in, the work is controversial. But perhaps not half so controversial as Venter’s belief, expressed in a panel debate titled ‘What is life?’ in Arizona in February, that all life on Earth might not even have a common origin. “I think the tree of life is an artefact of some early scientific studies, which are not really holding up”, he said, to the alarm of fellow panellist Richard Dawkins. His suggestion that there may be merely a “bush of life” only made matters worse.
Drop in the ocean
Despite the glee of creationists, there was nothing in Venter’s speculative remark that need undermine the case for Darwinian evolution. The claim of a fourth domain is backed by a little more evidence, but remains highly tentative. The data were gathered on a now famous round-the-world cruise that Venter undertook between 2003 and 2007 on his yacht to gather genomic information about the host of unknown microorganisms in the oceans. The gene-analysing techniques that he helped to develop allow the genes of different organisms to be rapidly compared in order to identify evolutionary relationships between them. By looking at the same group of genes in two different organisms, one can deduce where in the tree of life they shared a common ancestor.
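For anyone curious about what that comparison looks like in practice, here is a bare-bones sketch – purely illustrative, with toy sequences, a crude distance measure and emphatically not the methods used on the expedition data: count the sites at which aligned copies of the same gene differ, then group organisms by similarity.

```python
from itertools import combinations

# Toy aligned sequences of the 'same' gene in four hypothetical organisms.
genes = {
    "org_A": "ATGGCTCAAGTT",
    "org_B": "ATGGCTCGAGTT",
    "org_C": "ATGACTCGAGTA",
    "org_D": "TTGACGCGAGTA",
}

def p_distance(seq1, seq2):
    """Fraction of aligned sites that differ - a crude evolutionary distance."""
    return sum(a != b for a, b in zip(seq1, seq2)) / len(seq1)

# Pairwise distances: smaller roughly means a more recent common ancestor.
dist = {frozenset(pair): p_distance(genes[pair[0]], genes[pair[1]])
        for pair in combinations(genes, 2)}

# Naive single-linkage clustering: repeatedly join the two closest groups.
clusters = [{name} for name in genes]
while len(clusters) > 1:
    i, j = min(combinations(range(len(clusters)), 2),
               key=lambda ij: min(dist[frozenset((a, b))]
                                  for a in clusters[ij[0]] for b in clusters[ij[1]]))
    print("join:", sorted(clusters[i]), "+", sorted(clusters[j]))
    clusters[i] |= clusters[j]
    del clusters[j]
```

Real phylogenetics uses far more sophisticated evolutionary models, but the principle – fewer differences implies a more recent common ancestor – is the same.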
Using Venter’s data, Jonathan Eisen in California discovered that two families of genes in these marine microbes each seem to show a branch that doesn’t fit on the conventional tree of life. It’s possible that these genes might have been acquired from some unknown forms of virus (viruses are excluded from the tree altogether). The more exciting alternative is that they flag up a new domain. If so, its inhabitants would seem so far to be quite rare – a minor anomaly, like the Basque language, that has persisted quietly for billions of years. But since we are ignorant about perhaps 99 per cent of species on the planet, who knows?
Thinking big
The European Union is looking for big ideas. Really big ones. Its Flagship programme offers to fund two scientific projects to the tune of €1 bn over the next ten years. These must be “ambitious large-scale, science-driven, visionary research initiatives that aim to achieve a scientific breakthrough, provid[ing] a strong and broad basis for future technological innovation and economic exploitation in a variety of areas, as well as novel benefits for society.” In other words, they’ve got to achieve a heck of a lot, and will have truckloads of money to do so.
Six of the applications – all of them highly collaborative, international and interdisciplinary – have now been selected for a year of pilot funding, starting in May. They range from the highly technical to the borders of science fiction.
One promises to develop graphene, the carbon material that won last year’s physics Nobel prize, into a practical fabric for information technologies. Another proposes to truly figure out how the brain works; a third will integrate information technology with medicine to realise the much-advertised ‘personalized medicine’. But these things will all be pursued regardless of the Flagship scheme. More extraordinary, and therefore both more enticing and more risky, are two proposals to develop intelligent, sensitive artificial agents – characterized here as Guardian Angels or Robot Companions – that will help us individually throughout our lives. The sixth proposal (which received the highest rating) is to develop massive computer-simulation systems to model the entire ‘living Earth’, offering a ‘crisis observatory’ that will forecast global problems ranging from wars to economic meltdowns to natural disasters – the latter now all too vivid. The two initiatives to receive full funding will be selected in mid-2012 for launch in 2013.
Friday, May 20, 2011
The chief designer
I have a review of the RSC’s play Little Eagles in Nature this week. Here it is. Too late now to catch the play, I fear, but I thought it was impressive – even though Andrew Billen has some fair criticisms in the New Statesman.
____________________________________________________________________________
Little Eagles
A play by Rona Munro, directed by Roxana Silbert
Hampstead Theatre, London, until 7 May
It is a curious year of anniversaries for the former Soviet military-industrial complex. Fifty years ago the cosmonaut Yuri Gagarin became the first person in space, orbiting the world for 108 minutes in the Vostok spacecraft. And 25 years ago, Reactor 4 of the Chernobyl nuclear plant exploded and sent a cloud of radioactive debris across northern Europe.
One triumph, one failure; each has been marked independently. But while Little Eagles, Rona Munro’s play commissioned by the Royal Shakespeare Company for the Gagarin anniversary, understandably makes no mention of the disaster in Ukraine a quarter of a century later, the connections assert themselves throughout. Most obviously, both events were the fruits of the Cold War nuclear age. The rockets made by Sergei Korolyov, the chief architect of the Soviet space programme and the play’s central character, armed Nikita Khrushchev with intercontinental ballistic missiles before they took Gagarin to the stars.
But more strikingly, we see the space programme degenerate along the same lines that have now made an exclusion zone of Chernobyl. Impossible demands from technically clueless officials and terror at the consequences of neglecting them eventually compromise the technologies fatally – most notably here in the crash of Soyuz 1 in 1967, killing cosmonaut Vladimir Komarov. Gagarin was the backup pilot for that mission, but it was clear that he was by then too valuable a trophy ever to be risked in another spaceflight. All the same, he died a year later during the routine training flight of a jet fighter.
Callous disregard for life marks Munro’s play from beginning to end. We first see Korolyov in the Siberian labour camp where he was sent during Stalin’s purge of the officer class just before the Second World War. As the Soviets developed their military rocket programme, the stupidity of sending someone so brilliant to a virtual death sentence dawned on the regime, and he was freed to resume work several years later. During the 1950s Korolyov wrested control of the whole enterprise, becoming known as the Chief Designer.
Munro’s Korolyov seems to offer an accurate portrait of the man, if the testimony of one of his chief scientists is anything to go by: “He was a king, a strong-willed purposeful person who knew exactly what he wanted… he swore at you, but he never insulted you. The truth is, everybody loved him.” As magnetically played by Darrell D’Silva, you can see why: he is a swaggering, cunning, charming force of nature, playing the system only to realise his dream of reaching the stars. He clearly reciprocates the love of his ‘little eagles’, the cosmonauts chosen with an eye on the Vostok capsule’s height restrictions.
But for his leaders, rocketry was merely weaponry, or a way of demonstrating superiority over their foes in the West. Korolyov becomes a hero for beating the Americans with Sputnik, and then with Vostok. But when the thuggish, foul-mouthed Khrushchev (a terrifying Brian Doherty) is retired in 1964 in favour of the icily efficient Leonid Brezhnev, the game changes. The new leader sees no virtue in Korolyov’s dream of a Mars mission, and is worried instead that the Americans will beat them to the moon. The rushed and bungled Soyuz 1, launched after Korolyov’s death in 1966, was the result.
Out of this fascinating but chewy material, Munro has worked wonders to weave a tale that is intensely human and, aided by the impressive staging, often beautiful and moving. Gagarin’s own story is here a subplot, and not fully worked through – we start to see his sad descent into the vodka bottle, grounded as a toy of the Politburo, but not his ignominious end. There is just a little too much material here for Munro to shoehorn in. But that is the only small complaint in this satisfying and wise production.
What it becomes in the end is a grotesque inversion of The Right Stuff, Tom Wolfe’s account of the US space programme made into an exhilarating movie in 1983. Wolfe’s celebration was a fitting tribute to the courage and ingenuity that ultimately took humans to the moon, but an exposure of the other side of the coin was long overdue. There is something not just awful but also grand and awesome in the grinding resolve of the Soviets to win the space race relying on just the Chief Designer “and convicts and some university students”, as Korolyov’s doctor puts it.
Little Eagles shows us the mix of both noble and ignoble impulses in the space race that the US programme, with its Columbus rhetoric, still cannot afford to acknowledge. It recognizes the eye-watering glory of seeing the stars and the earth from beyond the atmosphere, but at the same time reveals the human spaceflight programmes as utterly a product of their tense, chest-beating times, a nationalistic black hole for dollars and roubles (and now yuan too). Crucially, it leaves the final judgement to us. “They say you changed the whole sky and everything under it”, Korolyov’s doctor (and conscience) says to him at the end. “What does that mean?”
Wednesday, May 18, 2011
The Achilles' heel of biological complexity
Here’s the pre-edited version of my latest news story for Nature. This is such an interesting issue that I plan to write a more detailed piece on it for Chemistry World soon.
_____________________________________________________________________________
The complex web of protein interactions in our cells may be masking an ever-worsening problem.
Why are we so complicated? You might imagine that we’ve evolved that way because it conveys adaptive benefits. But a new study in Nature [1] suggests that the complexity in the molecular ‘wiring’ of our genome – the way our proteins talk to each other – may be simply a side effect of a desperate attempt to stave off problematic random mutations in the proteins’ structure.
Ariel Fernández, formerly at the University of Chicago and now at the Mathematics Institute of Argentina in Buenos Aires, and Michael Lynch of Indiana University in Bloomington argue that complexity in the network of our protein interactions arises because our relatively small population size, compared with single-celled organisms, makes us especially vulnerable to ‘genetic drift’: changes in the gene pool due to the reproductive success of certain individuals by chance rather than by superior fitness.
Whereas natural selection tends to weed out harmful mutations in genes and their related proteins, genetic drift does not. Fernández and Lynch argue that the large number of physical interactions between our proteins – now a crucial component of how information is transmitted in our cells – compensates for the reduction in protein stability wrought by drift. But this response comes at a cost.
It might mask the accumulation of structural weaknesses in proteins to a point where the problem can no longer be contained. Then, say Fernández and Lynch, proteins might be liable to misfold spontaneously – as they do in so-called misfolding diseases such as Alzheimer’s, Parkinson’s and prion diseases, which are caused by misfolded proteins in the brain.
If so, this means we may be fighting a losing race. Genetic drift may eat away at the stability of our proteins until they are overwhelmed, leaving us a sickly species.
This would imply that Darwinian evolution isn’t necessarily benign in the long run. By finding a short-term solution to drift, it might merely be creating a time-bomb. “Species with low population are ultimately doomed by nature’s strategy of evolving complexity”, says Fernández.
The work provides “interesting and important news”, according to William Martin, a specialist in molecular evolution at the University of Düsseldorf in Germany. Martin says it shows that evolution of eukaryotes – relatively complex organisms like us, with a cellular ‘nucleus’ that houses the chromosomes – “can be substantially affected by drift.”
Drift is a bigger problem for small populations – those of multicelled eukaryotic organisms – than for large ones, because survival by chance rather than by fitness is statistically more likely for small numbers. Many random mutations in a gene, and thus in the protein made from it, will harm the protein’s resistance to unfolding: the protein’s folded-up shape becomes more apt to loosen as water molecules intrude into it. This loss of shape weakens the protein’s ability to function.
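To make the population-size point concrete, here is a toy Wright–Fisher simulation – my own illustration, not anything from the paper, with arbitrary population sizes and selection coefficient – showing that a mildly harmful mutation drifts all the way to fixation far more often in a small population than in a large one, where selection has time to weed it out.

```python
import numpy as np

rng = np.random.default_rng(0)

def fixation_probability(pop_size, s=-0.01, trials=5000):
    """Estimate how often a single, mildly deleterious mutation (relative
    fitness 1+s) takes over a haploid Wright-Fisher population."""
    fixed = 0
    for _ in range(trials):
        mutants = 1
        while 0 < mutants < pop_size:
            # Selection biases the sampling probability slightly against mutants...
            p = mutants * (1 + s) / (mutants * (1 + s) + (pop_size - mutants))
            # ...but the next generation is still a random (binomial) draw: drift.
            mutants = rng.binomial(pop_size, p)
        fixed += (mutants == pop_size)
    return fixed / trials

for n in (20, 200, 2000):
    print(f"N = {n:4d}: a mildly harmful mutation fixes in "
          f"{fixation_probability(n):.4f} of runs")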
Such problems can be avoided if proteins stick loosely to one another so as to shelter the regions vulnerable to water. Fernández and Lynch say that these associations between proteins – a key feature of the cell biology of eukaryotes – may have therefore initially been a passive response to genetic drift. Over time, certain protein-protein interactions may be selected by evolution for useful functions, such as sending molecular signals across cell membranes.
Using protein structures reported in the Protein Data Bank, the two researchers verified that disruption of the interface between proteins and water, caused mostly by exposure of ‘sticky’ parts of the folded peptide chain [full disclosure: these are actually parts of the chain that hydrogen-bond to one another; exposure to water enables the water molecules to compete for the hydrogen bonding. Ariel Fernández has previously explored how such regions may be ‘wrapped’ in hydrophobic chain segments to keep water away], leads to a greater propensity for a protein to associate with others. They also showed that drift could account for this ‘poor wrapping’ of proteins.
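Since ‘wrapping’ is the key idea here, the sketch below gives a rough feel for the kind of geometric bookkeeping involved – a toy with invented coordinates and arbitrary cut-offs, not Fernández’s actual published procedure: count the nonpolar carbon atoms close enough to a hydrogen bond to shield it from water, and call the bond poorly wrapped if there are too few.

```python
import numpy as np

# Toy coordinates (in angstroms) for one backbone hydrogen bond and a few
# nearby carbon atoms. All positions are invented purely for illustration.
donor_N    = np.array([0.0, 0.0, 0.0])
acceptor_O = np.array([2.9, 0.0, 0.0])
carbons = np.array([
    [1.5,  3.0,  0.5],    # nonpolar side-chain carbon shielding the bond
    [1.2, -2.8,  1.0],    # another shielding carbon
    [10.0, 8.0, -6.0],    # too far away to keep water out
])

def wrapping_count(donor, acceptor, carbon_coords, cutoff=6.5):
    """Count carbon atoms within `cutoff` of the hydrogen bond's midpoint -
    a crude proxy for how well the bond is shielded ('wrapped') from water.
    Both the cutoff and any 'poorly wrapped' threshold are illustrative only."""
    midpoint = (donor + acceptor) / 2.0
    distances = np.linalg.norm(carbon_coords - midpoint, axis=1)
    return int(np.sum(distances < cutoff))

n = wrapping_count(donor_N, acceptor_O, carbons)
print(f"{n} shielding carbons near this hydrogen bond")
print("poorly wrapped" if n < 3 else "adequately wrapped")   # arbitrary threshold
```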
On this view, genome complexity doesn’t offer intrinsic evolutionary advantages, but is a kind of knee-jerk response to the chance appearance of ‘needy proteins’ – which ends up exposing us to serious risks.
“I believe prions are indicators of this gambit gone too far”, says Fernandez. “The proteins with the largest accumulation of structural defects are the prions, soluble proteins so poorly wrapped that they relinquish their functional fold and aggregate”. Prions cause disease by triggering the misfolding of other proteins.
“If genetic variability resulting from random drift keeps increasing, we as a species may end up facing more and more fitness catastrophes of the type that prions represent”, Fernandez adds. “Perhaps the evolutionary cost of our complexity is too high a price to pay in the long run.”
However, Martin doubts that drift alone can account for the difference in complexity between prokaryotes (single-celled organisms without a cell nucleus) and eukaryotes. His previous work has indicated that bioenergetics also plays a strong role [2]. For example, says Martin, prokaryotes with small population sizes, such as symbionts, tend to degenerate rather than to become complex. “Population genetics is just one aspect of the complexity issue”, he says.
References
1. Fernandez, A. & Lynch, M. Nature doi:10.1038/nature09992 (2011).
2. Lane, N. & Martin, W. Nature 467, 929-934 (2010).
Monday, May 09, 2011
Unnatural happenings
There is a smart review of Unnatural in The Age by Damon Young. I don’t just say it is smart because it is positive – he engages intelligently with the issues. This bit made me smile: “Because he's neither a religious nor scientific fundamentalist, Ball's ideas may draw flak from both.” Well, indeed.
And I recently spoke to David Lemberg about the book for a podcast on the very nice Alden Bioethics blog run out of Albany Medical Center in New York. It’s available here.
Sunday, May 08, 2011
Are scientific reputations boosted artificially?
Here’s my latest Muse for Nature News.
_________________________________________________________
Scientific reputations emerge in a collective manner. But does this guarantee that fame rests on merit?
Does everyone in science get the recognition they deserve? Well obviously, your work hasn’t been sufficiently appreciated by your peers, but what about everyone else? Yes, I know he is vastly over-rated, and it’s a mystery why she gets invited to give so many keynote lectures, but that aside – is science a meritocracy?
How would you judge? Reputation is often a word-of-mouth affair; grants, awards and prizes offer a rather more concrete measure of success. But increasingly, scientific excellence is measured by citation statistics, not least by the ubiquitous h-index [1], which seeks to quantify the impact of your total oeuvre. Do all or any of these things truly reflect the worth of one’s scientific output?
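(For the uninitiated: your h-index is the largest number h such that h of your papers have been cited at least h times each. It takes only a few lines to compute – here is a minimal sketch, with made-up citation counts.)

```python
def h_index(citation_counts):
    """h is the largest number such that h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Example: ten papers with these citation counts give an h-index of 4.
print(h_index([50, 18, 6, 5, 3, 2, 2, 1, 0, 0]))  # -> 4
```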
Many would probably say: sort of. Most good work gets recognized eventually, and most Nobel prizes are applauded and deemed long overdue, rather than denounced as undeserved. But not always. Sometimes important work doesn’t get noticed in the author’s lifetime, and it’s a fair bet that some never comes to light at all. There’s surely an element of chance and luck in the establishment of reputations.
A new paper in PLoS ONE by Santo Fortunato of the Institute for Scientific Interchange in Turin, Italy, Dirk Helbing of ETH in Zurich, Switzerland, and coworkers aims to shed some light on the mechanism by which citations are accrued [2]. They have found that some landmark papers of Nobel laureates quite quickly give their authors a sudden boost in citation rate – and that this boost extends to the author’s earlier papers too, even if they were in unrelated areas.
For example, citations to a pivotal 1989 paper by chemistry Nobel laureate John Fenn on electrospray ionization mass spectrometry [3] took off exponentially, but also raised the citation profile of at least six of Fenn’s older papers. These peaks in citation rate stand out remarkably clearly for several laureates (some of whom have more than one peak), and might be a useful indicator both of important breakthroughs and of scientific performance.
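To give a feel for what such a peak looks like in the raw numbers, here is a deliberately crude burst detector – my own toy, run on an invented citation history, and not the statistical analysis used in the paper: flag any year whose citation count jumps well above the running average of the years before it.

```python
def citation_bursts(yearly_counts, factor=3.0, min_count=10):
    """Flag years whose citation count is at least `min_count` and more than
    `factor` times the mean of all preceding years. A crude burst detector,
    not the statistics used in the PLoS ONE paper."""
    bursts = []
    for year in range(1, len(yearly_counts)):
        baseline = sum(yearly_counts[:year]) / year
        if yearly_counts[year] >= min_count and yearly_counts[year] > factor * baseline:
            bursts.append(year)
    return bursts

# Invented citation history: a quiet paper that suddenly takes off in year 6.
history = [2, 3, 2, 4, 3, 5, 40, 85, 120, 150]
print(citation_bursts(history))   # -> [6, 7, 8, 9]: the take-off and the years after
```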
This behaviour could seem reassuring or disturbing, depending on your inclination. On the one hand, some of these researchers were not particularly well known before they published their landmark papers – and yet the value of the work does seem to have been recognized, overcoming the rich-get-richer effect by which those already famous tend more easily to accrue more fame [4]. This boost could help innovative new ideas to take root. On the other hand, such a rise to prominence brings a new rich-get-richer effect, for it awards ‘unearned’ citations to the researcher’s other papers.
And the findings seem to imply that citations are sometimes selected not because they are necessarily the best or most appropriate but to capitalize on the prestige and presumed authority of the person cited. This further distorts a picture that already contains a rich-get-richer element among citations themselves. An earlier analysis suggested that some citations become common largely by chance, benefitting from a feedback effect in which they are chosen simply because others have chosen them before [5].
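The rich-get-richer mechanism among citations is easy to caricature in a few lines. In the toy model below – an illustration of preferential attachment, not a model of real bibliometric data – each new paper hands out one citation, chosen in proportion to how often existing papers have already been cited, and a handful of early papers ends up hoarding a striking share of the total.

```python
import random

random.seed(1)

# Five seed papers with one citation each; every new paper gives out a single
# citation, choosing its target in proportion to existing citation counts.
citations = [1] * 5
for _ in range(5000):
    target = random.choices(range(len(citations)), weights=citations)[0]
    citations[target] += 1
    citations.append(1)   # the newcomer starts with one citation of its own

top_five = sorted(citations, reverse=True)[:5]
print("top five citation counts:", top_five)
print(f"their share of all {len(citations)} papers' citations: "
      f"{sum(top_five) / sum(citations):.1%}")
```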
But at root, what this finding underscores is that science is a social enterprise, with all the consequent quirks and nonlinearities. That has potential advantages, but also drawbacks. In an ideal world, every researcher would reach an independent judgement about the value of a paper or a body of work, and the sum of these judgements should then reflect something fundamental about its worth.
That, however, is no longer an option, not least because there is simply too much to read – no one can hope to keep up with all that happens in their field, let alone in related ones. As a result, the scientific community must act as a collective search engine that hopefully alights on the most promising material. The question is whether this social network is harnessed efficiently, avoiding blind alleys while not overlooking gems.
No one really knows the answer to that. But some social-science studies highlight the possible consequences. For example, it seems that selections made ostensibly on merit are somewhat capricious when others’ choices are taken into account: objectively ‘good’ and ‘bad’ material still tends on average to be seen as such, but feedbacks can create a degree of randomness in what succeeds and fails [6]. Doubtless the same effects operate in the political sphere – so that democracy is a somewhat compromised meritocracy – and also in economics, which is why prices frequently deviate from their ‘fundamental’ value.
But Helbing suggests that there is probably an optimal balance between independence and group-think. A computer model of people exiting a crowded room in an emergency shows that it empties most efficiently when there is just the right amount of follow-the-crowd herding [7]. Are scientific reputations forged in this optimal regime? And if not, what would it take to engineer more wisdom into this particular crowd?
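Helbing’s evacuation model is beyond the scope of a blog post, but the independence-versus-imitation trade-off is easy to caricature. In the toy below – my own sketch, which shows only the group-think half of the story, not the optimum – agents judge an option one after another; each either trusts a noisy private signal or copies the majority so far, and the more copying there is, the more often the whole crowd locks onto the wrong answer.

```python
import random

def crowd_wrong_rate(herding, n_agents=200, signal_accuracy=0.6, trials=2000):
    """Fraction of runs in which the final majority backs the *wrong* option.
    Each agent copies the majority-so-far with probability `herding`, and
    otherwise follows a private signal that is right with prob `signal_accuracy`."""
    wrong = 0
    for _ in range(trials):
        votes_correct = 0
        for i in range(n_agents):
            if i > 0 and random.random() < herding:
                choice = votes_correct * 2 > i               # follow the crowd so far
            else:
                choice = random.random() < signal_accuracy   # independent judgement
            votes_correct += choice
        wrong += votes_correct * 2 < n_agents
    return wrong / trials

for h in (0.0, 0.3, 0.6, 0.9):
    print(f"herding = {h:.1f}: crowd picks the wrong option in "
          f"{crowd_wrong_rate(h):.2%} of runs")
```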
References
1. Hirsch, J. E. Proc. Natl Acad. Sci. USA 102, 16569-16572 (2005).
2. Mazloumian, A., Eom, Y.-H., Helbing, D., Lozano, S. & Fortunato, S. PLoS ONE 6(5), e18975 (2011).
3. Fenn, J. B., Mann, M., Meng, C. K., Wong, S. F. & Whitehouse, C. M., Science 246, 64-71 (1989).
4. Merton, R. K. Science 159, 56-63 (1968).
5. Simkin, M. V. & Roychowdhury, V. P. Ann. Improb. Res. 11, 24-27 (2005).
6. Salganik, M. J., Dodds, P. S. & Watts, D. J. Science 311, 854-856 (2006).
7. Helbing, D., Farkas, I. & Vicsek, T. Nature 407, 487-490 (2000).
Friday, May 06, 2011
A discourse on method
Actually (to pick up from the previous post), I’d meant to put my last Crucible column up here too. So here it is now.
__________________________________________________
What’s wrong with this claim? “Replication of results is a crucial part of the scientific method. Experimental errors come rapidly to light when researchers prove unable to reproduce the claims of others. In this way, science has a built-in mechanism for self-correction.”
The insistence on replication – as the motto of the Royal Society puts it, ‘take no one’s word for it’ (Nullius in verba) – has indeed long been one of science’s great strengths. It explains why pathological science such as cold fusion and polywater was rather quickly consigned to the dustbin while equally striking claims such as high-temperature superconductivity have entered the textbooks.
But too often this view of the ‘scientific method’ – itself a slippery concept – is regarded as a regular aspect of science in action, rather than an expression of the ideal. Rather few experiments are replicated verbatim, as it were, not least because scientists are too competitive and busy to spend their time doing what someone else has already done. Important claims are bound to get checked as others rush to follow up on the work, but mundane stuff will probably never be tested – it will simply sink unheeded into the literature.
No one should be surprised or unduly alarmed at that – if work isn’t important enough to warrant replication, it matters little if it is flawed. And although the difficulty of publishing negative results probably hinders the correction process and favours exaggerated claims, information technologies might now offer solutions.1 What matters more is that replication isn’t just a problem in practice; it’s a problem in theory.
The concept emerged along with experimental science itself in the late sixteenth century. Before that, experiments – when they were done at all – were typically considered not a test of your hypothesis but a demonstration that it was right. If ‘experience’ didn’t fit with theory, no one felt a compelling urge to modify the theory, not least because the world was not considered law-bound in quite the same way it is today. Even though the early experimentalists, often working outside the academic mainstream, decided they needed to filter recipes and reports by attempting to verify them before recording them as fact, the tradition of experiment-as-demonstration persisted for a long time. Many of the celebrated trials shown to the Fellows of the Royal Society were like that.
But in any case, it would be wrong to suppose that the failure of an experiment to verify a hypothesis or to replicate a prior claim should be grounds for their rejection. Robert Boyle appreciated this in his ‘Two Essays, concerning the Unsuccessfulness of Experiments’ (1661). There are many reasons, he wrote, why an experiment might not work as anticipated: the equipment might be faulty, or the reagents not fresh, for example. That was amply borne out (albeit in reverse) by the recent discovery that a crucial step (first reported in 1918) in the alleged total synthesis of quinine by Robert Woodward and William Doering in 1944 depended on a catalyst being aged.2 The very fact that it took 90 years to test that step is itself a comment on how replication really functions in science.
The problem of replication was highlighted by Boyle’s own famous experiments with the air pump. By raising the possibility of a vacuum, these studies posed a serious challenge to the prevailing Aristotelian philosophy. So the stakes were very high. But because of the imperfections of the apparatus, it was no easy matter even for Boyle to reproduce some of his findings. And because the air pump was a hugely sophisticated piece of scientific kit – it has been dubbed the cyclotron of its age – it was very expensive, so very few others were in a position to try the experiments. Even if they did, the designs differed, so one couldn’t be sure that the same procedures were being followed.3 That essentially no replications could be attempted without first-hand experience of Boyle’s instrument reflects today’s situation, in which hardly any complicated experimental procedure can be replicated reliably without direct contact between the labs involved. Even then, the only way to calibrate your apparatus may be against that whose results you’re trying to test.
Which raises the question: if your attempted replication ‘fails’, where is the error? Have you neglected something? Or was the original claim wrong? Or was it right for the wrong reasons? The possibilities are endless. Indeed, the philosophers Pierre Duhem and Willard Van Orman Quine have independently pointed out that, from a strictly logical perspective, no hypothesis can ever be conclusively tested, nor an experimental replication definitively assessed, because the problem is under-determined: discrepancies can never be logically localized to a particular cause. Science makes progress regardless, and what is perhaps surprising is that the ‘scientific method’ remains so effective when it is in truth ramshackle, makeshift and logically shaky.
These issues seem more pertinent than ever. Who, for example, is going to check the findings from the Large Hadron Collider?
References
1. J. Schooler, Nature 470, 437 (2011).
2. A. C. Smith & R. M. Williams, Angew. Chem. Int. Edn 47, 1736–1740 (2008).
3. S. Shapin & S. Schaffer, Leviathan and the Air-Pump (Princeton University Press, Princeton, 1985).
Thursday, May 05, 2011
Science and religion - even chemists aren't immune
Oh, it’s risky, I know. But I offer the following mild observations about the recent Templeton Prize in my Crucible column in Chemistry World. When I wrote it, on the day of the announcement, I didn’t realise quite what a lot of shrieking the award would elicit. There is, by the way, a sentence in the final para, omitted in the published version, that makes the meaning of my final sentence a little more apparent. Herein lies a tale.
_______________________________________________________________________
The astronomer Martin Rees, until recently President of the Royal Society, seems nonchalant, even bemused, about receiving this year’s Templeton Prize for work at the interface of science and religion. Not only has he seemingly little idea of what to do with the £1m prize money, but he confesses to knowing little about the Templeton Foundation beyond what appeared in a recent Nature article [1], and wasn’t sure why he had been selected.
According to the Pennsylvania-based Templeton Foundation, set up by the late billionaire John Templeton to develop links between science and spirituality, the prize is awarded to people who have “expanded our vision of human purpose and ultimate reality”. In giving it to Rees, the foundation says that his “profound insights on the cosmos have provoked vital questions that speak to humanity’s highest hopes and worst fears”.
One thing Rees must have known, however, is that his award would be controversial. Some scientists see it as an attempt to buy respectability for the Foundation through the names of illustrious scientists. In its early days the award went to religious figures such as Billy Graham and Mother Teresa. But Rees joins a list of winners that now includes cosmologists George Ellis and John Barrow, physicists Paul Davies, Freeman Dyson and Charles Townes, and biologist Francisco Ayala. This reflects the Foundation’s energetic determination over the past two decades to focus on interactions between science and religion – topics that some sceptics say have no shared ground. Chemistry Nobel laureate Harry Kroto, one of those who has condemned Rees’ acceptance of the prize, suggests that to qualify you just have to be an eminent scientist prepared to be nice – or at least not rude – about religion.
Rees is no stranger to this disputed territory. He presided over the sacking of the Royal Society’s director of education Michael Reiss, an ordained Church of England minister, after remarks that were construed as defending the teaching of creationism in schools. Rees also drew fire for the inclusion of a service at St Paul’s Cathedral, led by the Archbishop of Canterbury, in the Royal Society’s 350th anniversary celebrations last year. Rees has said publicly that he has no religious beliefs but occasionally attends church services and recognizes their social role. He takes the pragmatic view that, in battling the anti-scientific extremes of religious fundamentalism, he’d rather have the Archbishop and other moderates on his side. For others, the distance between evidence-based science and faith-based religion is too great to make common cause.
Chemistry might seem too remote from the Templeton Foundation’s goals for the issue of whether to accept its ‘tainted’ money ever to arise. Historically, of course, many chemists were profoundly religious. For Robert Boyle, investigating all aspects of nature was a holy duty that deepens our reverence for God’s works. Michael Faraday had to juggle his science and his profound non-conformist Christian beliefs.
Yet surely chemical research can’t directly speak to religious questions today? Don’t be so sure. In 2005 I took part in a Templeton-funded symposium called “Water of Life: Counterfactual Chemistry and Fine-Tuning in Biochemistry”. While I won’t pretend to have been indifferent to the venue on the shore of Lake Como, I would have declined were it not for the stellar list of other delegates. The meeting was motivated by Harvard biologist Lawrence Henderson’s 1913 book The Fitness of the Environment, in which he suggested that water is ‘biophilic’, with physical and chemical properties remarkably fine-tuned to support life. The question put to the gathering was: are they really?
Among the many contributions, Ruth Lynden-Bell and Pablo Debenedetti described computer simulations of ‘counterfactual water’ in which the properties of the molecule were slightly altered to see if it retained its unique liquid properties [2]. For example, the tetrahedral hydrogen-bonded motif remains, in distorted form, if the H-O-H bond angle is changed from 109.5 degrees to 90 degrees, but the structure becomes more like that of a ‘normal’ liquid as the hydrogen-bond strength is decreased. This notion of a ‘modified chemistry’ thus may probe how far the chemical world is contingent and how far it is inevitable. Of course, one could say that there is no contingence: things are as they are and not otherwise. But fine-tuning arguments in cosmology confront the mystery of why the laws of nature seem geared to enable our existence. If there’s plenty of slack, there’s no mystery to explain. Counterfactual scenarios can also explore the supposed uniqueness of water as life’s solvent, irrespective of any metaphysical implications.
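To give a flavour of what ‘counterfactual water’ means at the very simplest level – this is toy arithmetic on a rigid three-site water model, with SPC/E-like bond length and partial charges taken as assumptions, not the molecular dynamics reported in ref. 2 – one can ask how the molecular dipole moment shifts as the H-O-H angle is squeezed:

```python
import math

E_ANGSTROM_TO_DEBYE = 4.803   # 1 elementary charge x 1 angstrom, in debye

def water_dipole(hoh_angle_deg, r_oh=1.0, q_h=0.4238):
    """Dipole moment (debye) of a rigid three-site water model with the given
    H-O-H angle. Bond length and charges are SPC/E-like values, used here
    purely as an illustrative starting point."""
    half = math.radians(hoh_angle_deg) / 2.0
    # The two O-H bond dipoles add along the bisector of the H-O-H angle.
    dipole_e_angstrom = 2.0 * q_h * r_oh * math.cos(half)
    return dipole_e_angstrom * E_ANGSTROM_TO_DEBYE

for angle in (109.5, 104.5, 90.0):
    print(f"H-O-H = {angle:5.1f} deg  ->  dipole = {water_dipole(angle):.2f} D")
```

Even this crude sum hints at why the angle matters: a larger dipole changes how strongly the molecules attract one another, and hence the character of the hydrogen-bonded network that the full simulations explore.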
If you want to know what the meeting concluded, you’ll have to read the book [3]. It has only recently been published, in part because some university presses seemed nervous of the association with the Templeton Foundation. Wary at the outset of an underlying agenda, I saw no evidence of it at the meeting: it was good science all the way. Sceptics are right to ask questions about the Foundation’s motives, but they need to be open-minded about the answers. When such scepticism stands in the way of solid science, we are all the losers.
1. M. M. Waldrop, Nature 470, 323 (2011).
2. R. M. Lynden-Bell & P. G. Debenedetti, J. Phys. Chem. B 109, 6527 (2005).
3. R. M. Lynden-Bell, S. Conway Morris, J. D. Barrow, J. L. Finney & C. L. Harper (eds). Water and Life: the Unique Properties of H2O. CRC Press, Boca Raton, 2010.