Thursday, December 22, 2011
400 years of snowflakes
Here is the pre-edited version of my In Retrospect piece for Nature celebrating the 400th anniversary of Kepler’s seminal little treatise on snowflakes.
_________________________________________________________________
Did anyone ever receive a more exquisite New Year’s gift than the German scholar Johannes Matthäus Wackher von Wackenfels, four hundred years ago? It was a booklet of just 24 pages, written by his friend Johannes Kepler, court mathematician to the Holy Roman Emperor Rudolf II in Prague. The title was De nive sexangula (On the Six-Cornered Snowflake), and herein Kepler attempted to explain why snowflakes have this striking hexagonal symmetry. Not only is the booklet charming and witty, but it seeded the notion from which all of crystallography blossomed: that the geometric shapes of crystals can be explained in terms of the packing of their constituent particles.
Like Kepler, Wackher was a self-made man of humble origins whose brilliance earned him a position in the imperial court. By 1611 he had risen to the position of privy councillor, and was a man of sufficient means to act as Kepler’s sometime patron. Sharing an interest in science, he was also godfather to Kepler’s son and in fact a distant relative of Kepler himself. It is sometimes said that Kepler’s booklet was in lieu of a regular gift which the straitened author, who frequently had to petition Rudolf’s treasury for his salary, could not afford. In his introduction, Kepler says he had recently noticed a snowflake on the lapel of his coat as he crossed the Charles Bridge in Prague, and had been moved to ponder on its remarkable geometry.
Kepler came to the imperial court in 1600 as an assistant to the Danish astronomer Tycho Brahe. When Tycho died the following year, Kepler became his successor, eagerly seizing the opportunity to use Tycho’s incomparable observational data to deduce the laws of planetary motion that Isaac Newton’s gravitational theory later explained.
Kepler’s analysis of the snowflake comes at an interesting juncture. It unites the older, Neoplatonic idea of a geometrically ordered universe that reflects God’s wisdom and design with the emerging mechanistic philosophy, in which natural phenomena are explained by proximate causes that, while they may be hidden or ‘occult’ (like gravity), are not mystical. In Mysterium Cosmographicum (1596) Kepler famously concocted a model of the cosmos with the planetary orbits arranged on the surfaces of nested polyhedra, which looks now like sheer numerology. But unlike Tycho, he was a Copernican and came close to formulating the mechanistic gravitational model that Newton later developed.
Kepler was not by any means the first to notice that the snowflake is six-sided. This is recorded in Chinese documents dating back to the second century BCE, and in the Western world the snowflake’s ‘star-like’ forms were noted by Albertus Magnus in the thirteenth century. René Descartes included drawings of sixfold stars and ice ‘flowers’ in his meteorological book Les Météores (1637), while Robert Hooke’s microscopic studies recorded in Micrographia (1665) revealed the elaborate, hierarchical branching patterns.
“There must be a cause why snow has the shape of a six-cornered starlet”, Kepler wrote. “It cannot be chance. Why always six? The cause is not to be looked for in the material, for vapour is formless and flows, but in an agent.” This ‘agent’, he suspected, might be mechanical, namely the orderly stacking of frozen ‘globules’ that represent “the smallest natural unit of a liquid like water” – not explicitly atoms, but as good as. Here he was indebted to the English mathematician Thomas Harriot, who acted as navigator for Walter Raleigh’s voyages to the New World in 1584-5. Raleigh sought Harriot’s expert advice on the most efficient way to stack cannonballs on the ship’s deck, prompting the ingenious Harriot to theorize about the close-packing of spheres. Around 1606-8 he communicated his thoughts to Kepler, who returned to the issue in De nive sexangula. Kepler asserted that hexagonal packing “will be the tightest possible, so that in no other arrangement could more pellets be stuffed into the same container.” This assertion about maximal close-packing became known as Kepler’s conjecture, which was proved using computational methods only in 1998 (published in 2005) [1].
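Kepler’s claim can be made quantitative: the density of the ‘cannonball’ (face-centred cubic) packing that Harriot and Kepler discussed is π/(3√2) ≈ 0.74, against π/6 ≈ 0.52 for a naive simple cubic stacking. A back-of-envelope check in Python (my illustration, needless to say, not anything in Kepler):

```python
import math

# Packing density = (volume of spheres per unit cell) / (cell volume).

# Simple cubic: one sphere of radius r in a cube of side 2r.
simple_cubic = (4 / 3) * math.pi / 8          # pi/6, about 0.52

# Face-centred cubic, the cannonball stacking: 4 spheres per cubic cell
# of side a, with spheres touching along the face diagonal, so a = 2*sqrt(2)*r.
fcc = 4 * (4 / 3) * math.pi / (2 * math.sqrt(2)) ** 3   # pi/(3*sqrt(2)), about 0.74

print(f"simple cubic packing density: {simple_cubic:.4f}")
print(f"fcc (Kepler's conjecture):    {fcc:.4f}")
```

It is this π/(3√2) figure that Hales’s 1998 proof finally confirmed cannot be beaten by any arrangement of equal spheres.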
Less commonly acknowledged as a source of inspiration is the seventeenth-century enthusiasm for cabinets of curiosities (Wunderkammern), collections of rare and marvellous objects from nature and art that were presented as microcosms of the entire universe. Rudolf II had one of the most extensive cabinets, to which Kepler would have had privileged access. The forerunners of museum collections, the cabinets have rarely been recognized as having any real influence on the nascent experimental science of the age. But Kepler mentions in his booklet having seen in the palace of the Elector of Saxony in Dresden “a panel inlaid with silver ore, from which a dodecahedron, like a small hazelnut in size, projected to half its depth, as if in flower” – a showy example of the metalsmith’s craft which may have stimulated his thinking about how an emergent order gives crystals their facets.
Yet despite his innovative ideas, in the end Kepler is defeated by the snowflake’s ornate form and its flat, plate-like shape. He realizes that although the packing of spheres creates regular patterns, they are not necessarily hexagonal, let alone as ramified and ornamented as that of the snowflake. He is forced to fall back on Neoplatonic occult forces: God, he suggests, has imbued the water vapour with a “formative faculty” that guides its form. There is no apparent purpose to the flake’s shape, he observes: the “formative reason” must be purely aesthetic or frivolous, nature being “in the habit of playing with the passing moment.” That delightful image, which touches on the late Renaissance debate about nature’s autonomy, remains resonant today in questions about the adaptive value (or not) of some complex patterns and forms in biological growth [2]. Towards the end of his inconclusive tract Kepler offers an incomparably beautiful variant of ‘more research is needed’: “As I write it has again begun to snow, and more thickly than a moment ago. I have been busily examining the little flakes.”
Kepler’s failure to explain the baroque regularity of the snowflake is no disgrace, for not until the 1980s was this understood as a consequence of branching growth instabilities biased by the hexagonal crystal symmetry of ice [3]. In the meantime, Kepler’s vision of crystals as stackings of particles informed the eighteenth-century mineralogical theory of René Just Haüy, the basis of all crystallographic understanding today.
But the influence of Kepler’s booklet goes further. It was in homage that crystallographer Alan Mackay called his seminal 1981 paper on quasicrystals ‘De nive quinquangula’ [4]. Here, three years before the experimental work that won Dan Shechtman this year’s Nobel prize in chemistry, Mackay showed that a Penrose tiling could, if considered the basis of an atomic ‘quasi-lattice’, produce fivefold diffraction patterns. Quasicrystals showed up in metal alloys, not snow. But Mackay has indicated privately that it might indeed be possible to induce water molecules to pack this way, and quasicrystalline ice was recently reported in computer simulations of water confined between plates [5]. Whether it can furnish five-cornered snowflakes remains to be seen.
References
1. Hales, T. C. Ann. Math. 2nd ser. 162, 1065-1185 (2005).
2. Rothenberg, D. Survival of the Beautiful (Bloomsbury, New York, 2011).
3. Ben-Jacob, E., Goldenfeld, N., Langer, J. S. & Schön, G. Phys. Rev. Lett. 51, 1930-1932 (1983).
4. Mackay, A. L. Kristallografiya 26, 910-919 (1981); in English, Sov. Phys. Crystallogr. 26, 517-522 (1981).
5. Johnston, J. C., Kastelowitz, N. & Molinero, V. J. Chem. Phys. 133, 154516 (2010).
Reputations matter
Rather a lot of posts all at once, I fear. Here is the first, which I meant to put up earlier – last Saturday’s column in the Guardian.
_______________________________________________________________
Johannes Stark was a German physicist whose Nobel prize-winning discovery in 1913, the Stark effect (don’t ask), is still useful today. Just the sort of person, then, who you might expect to have scientific institutes or awards named after him.
The fact that there aren’t any is probably because Stark was a Nazi – a bitter and twisted anti-Semite who rejected relativity because Einstein was Jewish.
Scientists concur that, while your discovery should bear your name no matter how despicable (or just plain crazy) you are, you need a little virtue to be commemorated in other ways.
But how little? Everyone knows Isaac Newton was a grumpy and vindictive old sod, but that hardly seems reason to begrudge the naming of the Isaac Newton Institute for Mathematical Sciences in Cambridge. Yet when the Dutch Nobel laureate Peter Debye was accused in a 2006 book of collusion with the Nazis during his career in pre-war Germany, the Dutch government insisted that the Debye Institute at the University of Utrecht be renamed, and an annual Debye Prize awarded in his hometown of Maastricht was suspended.
Reputations matter, then. Two researchers have claimed this week to lay to rest the suggestion that Charles Darwin stole some of his ideas on natural selection from Alfred Russel Wallace, who sent Darwin a letter explaining his own theory in 1858. Darwin passed it on to other scientific authorities as Wallace requested, but it has been suggested that he first sat on it for weeks and revised his theory in the light of it.
No proper Darwin historian ever took that accusation seriously, not least because everything we know about Darwin’s character makes it highly implausible. But Wallace has admirers on the fringe who identify with his image of the wronged outsider and will stop at nothing to see him given priority. And knocking Darwin’s character is a favourite tactic of creationists for discrediting his science.
This isn’t the last word on that matter, not least because the dates of Wallace’s letter still aren’t airtight. Evolutionary geneticist Steve Jones has rightly said that “The real issue is the science and not who did it.” Oh, but we do care who did it. We do care if Einstein nicked his ideas from his first wife Mileva Maric (another silly notion), or if Gottfried Leibniz pilfered the calculus from Newton.
Partly we like the whiff of scandal. Partly we love seeing giants knocked off their pedestals. But in cases like Debye’s there are more profound questions. Debye finally left his physics institute in Berlin and moved to the US in 1940 because he refused to give up his Dutch citizenship and become German, as the Nazis demanded when they commandeered his institute for war research. Into the breach stepped Werner Heisenberg, among others, whose work on the nuclear programme still excites debate about whether or not he tried to make an atom bomb for Hitler.
After the war, Heisenberg encouraged the myth that he and his colleagues purposely delayed their research to deny Hitler such power. It’s more likely that they never in fact had to make the choice, since they weren’t given the resources of the Manhattan Project. In any event, Heisenberg began the war patriotically anticipating a quick victory. Yet he was never a Nazi, and today we have the Werner Heisenberg Institute and Prize.
Unlike Stark, Heisenberg and Debye weren’t terrible people – they behaved in the compromised, perhaps naïve way that most of us would in such circumstances. But engraving their names in stone and bronze creates difficulties. It forces us to make them unblemished icons, or conversely tempts us to demonize them. This rush to beatify brings down a weight of moral expectation that few of us could shoulder – even the deeply humane Einstein was no saint towards Maric. Why not give time more chance to weather and blur the images of great scientists, to produce enough distance for us to celebrate their achievements while overlooking their all-too-human foibles?
Wednesday, December 21, 2011
Happy Christmas to the Godless
This week I had the pleasure of taking part in one of Robin Ince’s Nine Lessons and Carols for Godless People at the Bloomsbury Theatre in London. Fending off the “I am not worthy” feeling amidst the likes of Simon Singh, Alexei Sayle and Mark Thomas, and knowing what a terrible idea it would be to try to make people laugh, I plucked a few things from my forthcoming book on curiosity, in particular Kepler’s treatise on snowflakes (on which, more shortly). But I couldn’t resist poking some fun at a few of the scientifically illiterate snowflakes we always get at Christmas, including the one above from dear Ed Miliband. I wanted to offer Ed a little get-out clause for his pentagonal snowflakes on the basis of quasicrystalline ice, but time did not permit.
Anyway, it’s a great show if you still have time to catch the last ones. I did a little interview for a podcast by New Humanist, which I mention mostly so that you can get a flavour of the other folk in the show.
Friday, December 16, 2011
Unweaving tangled relationships
Here’s the original text of my latest news story for Nature.
___________________________________________
A new statistical method discovers hidden correlations in complex data.
The American humorist Evan Esar once called statistics the science of producing unreliable facts from reliable figures. A new technique now promises to make those facts a whole lot more dependable.
Brothers David Reshef of the Broad Institute of MIT and Harvard in Cambridge, Massachusetts, Yakir Reshef of the Weizmann Institute of Science in Rehovot, Israel, and their coworkers have devised a method to extract from complex sets of data relationships and trends that are invisible to other types of statistical analysis. They describe their approach in a paper in Science today [1].
“This appears to be an outstanding achievement”, says statistician Douglas Simpson of the University of Illinois at Urbana-Champaign. “It opens up whole new avenues of inquiry.”
Here’s the basic problem. You’ve collected lots of data on some property of a system that could depend on many governing factors. To figure out what depends on what, you plot them on a graph.
If you’re lucky, you might find that this property changes in some simple way as a function of some other factor: for example, people’s health gets steadily better as their wealth increases. There are well known statistical methods for assessing how reliable such correlations are.
But what if there are many simultaneous dependencies in the data? If, say, people are also healthier if they drive less, which might not bear any obvious relation to their wealth (or might even be more prevalent among the less wealthy)? The conflict might leave both relationships hidden from traditional searches for correlations.
The problems can be far worse. Suppose you’re looking at how genes interact in an organism. The activity of one gene could be correlated with that of another, but there could be hundreds of such relationships all mixed together. To a cursory ‘eyeball’ inspection, the data might then just look like random noise.
“If you have a data set with 22 million relationships, the 500 relationships in there that you care about are effectively invisible to a human”, says Yakir Reshef.
And the relationships are all the harder to tease out if you don’t know what you’re looking for in the first place – if you have no a priori reason to suspect that this depends on that.
The new statistical method that Reshef and his colleagues have devised aims to crack precisely those problems. It can spot many superimposed correlations between variables and measure exactly how tight each relationship is, according to a quantity they call the maximal information coefficient (MIC).
A MIC of 1 implies that two variables are perfectly correlated, but possibly according to two or more simultaneous and perhaps opposing relationships: a straight line and a parabola, say. A MIC of zero indicates that there is no relationship between the variables.
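The flavour of the approach can be conveyed with a toy calculation: bin two variables onto a grid, compute their mutual information, and normalise it. The Python sketch below is my own drastic simplification, not the Reshefs’ published algorithm, which searches over many grid resolutions and takes the maximum score:

```python
import math
from collections import Counter

def grid_mi(x, y, bins=4):
    """Mutual information of x and y after binning each into equal-width bins,
    normalised by log(bins). A toy cousin of MIC: the real method optimises
    over many different grids rather than using one fixed resolution."""
    def binned(v):
        lo, hi = min(v), max(v)
        w = (hi - lo) / bins or 1.0
        return [min(int((u - lo) / w), bins - 1) for u in v]
    bx, by = binned(x), binned(y)
    n = len(x)
    pxy = Counter(zip(bx, by))          # joint bin counts
    px, py = Counter(bx), Counter(by)   # marginal bin counts
    mi = sum((c / n) * math.log((c / n) / (px[i] / n * py[j] / n))
             for (i, j), c in pxy.items())
    return mi / math.log(bins)

xs = [i / 100 for i in range(100)]
quadratic = [u * u for u in xs]                        # a clean functional relationship
scrambled = [(i * 37) % 101 / 101 for i in range(100)] # a pseudo-random control
print(grid_mi(xs, quadratic))   # scores well above the control
print(grid_mi(xs, scrambled))
```

Even this crude version ranks the quadratic relationship far above the scrambled control; the grid search in the real MIC is what lets it score curved, periodic and superimposed relationships fairly against one another.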
To demonstrate the power of their technique, the researchers applied it to a diverse range of problems. In one case they looked at factors that influence people’s health globally in data collected by the World Health Organization. Here they were able to tease out superimposed trends – for example, how female obesity increases with income in the Pacific Islands, where it is considered a sign of status, while in the rest of the world there is no such link.
In another example, the researchers identified genes that were expressed periodically, but with differing cycle times, during the cell cycle of yeast. And they uncovered groups of human gut bacteria that proliferate or decline when diet is altered, finding that some bacteria are abundant precisely when others are not. Finally, they identified which performance factors for baseball players are most strongly correlated to their salaries.
Reshef cautions that finding statistical correlations is only the start of understanding. “At the end of the day you'll need an expert to tell you what your data mean”, he says. “But filtering out the junk in a data set in order to allow someone to explore it is often a task that doesn't require much context or specialized knowledge.”
He adds that “our hope is that this tool will be useful in just about any field that is amassing large amounts of data.” He points to genomics, proteomics, epidemiology, particle physics, sociology, neuroscience, earth and atmospheric science as just some of the scientific fields that are “saturated with data”.
Beyond this, the method should be valuable for ‘data mining’ in sports statistics, social media and economics. “I could imagine financial companies using tools like this to mine the vast amounts of data that they surely keep, or their being used to track patterns in news, societal memes, or cultural trends”, says Reshef.
One of the big remaining questions is about what causes what: the familiar mantra of statisticians is that “correlation does not imply causality”. People who floss their teeth live longer, but that doesn’t mean that flossing increases your lifespan.
“We see the issue of causality as a potential follow-up”, says Reshef. “Inferring causality is an immensely complicated problem, but has been well studied previously.”
Biostatistician Raya Khanin of the Memorial Sloan-Kettering Cancer Center in New York acknowledges the need for a technique like this but reserves judgement about whether we yet have the measure of MIC. “I’m not sure whether its performance is as good as and different from other measures”, she says.
For example, she questions the findings about the mutual exclusivity of some gut bacteria. “Having worked with this type of data, and judging from the figures, I'm quite certain that some basic correlation measures would have uncovered the same type of non-coexistence behavior,” she says.
Another bioinformatics specialist, Simon Rogers of the University of Glasgow in Scotland, also welcomes the method but cautions that the illustrative examples are preliminary at this stage. Of the yeast gene linkages, he says “one would have to do more evaluation to see if they are biologically significant.”
References
1. Reshef, D. N. et al. Science 334, 1518–1524 (2011).
Monday, December 12, 2011
Darwin not guilty: shock verdict
Here’s the pre-edited version of my latest news story for Nature. There’s somewhat more to it than can all be fitted in here, or indeed that I am at liberty to say. It seems that some may still find the authors’ reconstruction of the shipping route of Wallace’s letter open to question, even if they accept (as it seems all serious historians do) that the ‘conspiracy theory’ is bunk.
There was also more to Wallace’s letter to Hooker in September 1858 than I’ve quoted here. He said:
“I cannot but consider myself a favoured party in this matter, because it has hitherto been too much the practice in cases of this sort to impute all the merit to the first discoverer of a new fact or a new theory, & little or none to any other party who may, quite independently, have arrived at the same result a few years or a few hours later.
I also look upon it as a most fortunate circumstance that I had a short time ago commenced a correspondence with Mr. Darwin on the subject of “Varieties,” since it has led to the earlier publication of a portion of his researches & has secured to him a claim of priority which an independent publication either by myself or some other party might have injuriously affected, — for it is evident that the time has now arrived when these & similar views will be promulgated & must be fairly discussed.”
So whatever one thinks of the evidence put forward here, the notion that Darwin pilfered from Wallace really is a non-starter. Not that its advocates will take the slightest notice.
_____________________________________________________
Charles Darwin was not a plagiarist, according to two researchers who claim to have refuted the idea that he revised his own theory of evolution to fit in with that proposed in a letter Darwin received from the naturalist Alfred Russel Wallace.
This accusation has received little support from serious historians of Darwin’s life and work, who concur that Darwin and Wallace came up with the theory of evolution by natural selection independently at more or less the same time. But it has proved hard to dispel, thanks to some vociferous advocates of Wallace’s claim to priority for the theory of evolution by natural selection.
The charge rests largely on a suggestion that in 1858 Darwin sat on a letter sent from Indonesia by Wallace, including an essay in which he described his ideas, for about two weeks before passing it on to the geologist Charles Lyell as Wallace requested.
After inspecting historical shipping records, John van Wyhe and Kees Rookmaaker, curators of the archives Darwin Online and Wallace Online and historians of science at the National University of Singapore, claim that Wallace’s letter and essay could not in fact have arrived sooner than 18 June, the very day that Darwin told Lyell he had received it [1].
Darwin had begun work on the text that became On the Origin of Species, published in 1859, as early as the 1840s, but had dallied over it. In his letter to Lyell he admitted rueing his own dilatoriness. “I never saw a more striking coincidence”, he said. “If Wallace has my M.S. sketch written out in 1842 he could not have made a better abstract.”
In the event – but not without misgivings about whether it was the honourable thing – Darwin followed the suggestion of Lyell and his friend Joseph Hooker that he write up his own views on evolution so that the papers could be presented side by side to the Linnean Society in London. This took place on 1 July, but Darwin wasn’t present, for he was still devastated by the death of his youngest son from scarlet fever three days earlier.
The controversy about attribution would probably have mystified both Darwin and Wallace, who remained mutually respectful throughout their lives. Darwin was even ready to relinquish all priority to the idea of natural selection after seeing Wallace’s essay, until Lyell and Hooker persuaded him otherwise. And in September 1858 Wallace wrote to Hooker that “It would have caused me such pain & regret had Mr. Darwin’s excess of generosity led him to make public my paper unaccompanied by his own much earlier & I doubt not much more complete views on the same subject.”
Although most historians have accepted that Darwin’s account of the events was honest, others have argued that Wallace’s letter, sent from the island of Ternate in the Moluccas, arrived at Darwin’s house at Down in southern England several weeks earlier than 18 June. They suggest that Darwin lied about the date of receipt because he used the intervening time to revise his own ideas in the light of Wallace’s.
The most extreme accusation came in a 2008 book The Darwin Conspiracy: Origins of a Scientific Crime by the former BBC documentary-maker Roy Davies. “Ideas contained in Wallace’s Ternate paper were plagiarised by Charles Darwin”, wrote Davies, who called this “a deliberate and iniquitous case of intellectual theft, deceit and lies.” Others have claimed that Darwin wrote to Hooker on 8 June saying that he had found a ‘missing keystone’ to his theory, and allege that he took this from Wallace’s essay.
“Many conspiracy theorists have made hay because of this unexplained date mystery”, says van Wyhe. He and Rookmaaker have now painstakingly retraced the tracks of the letter. They have discovered the sailing schedules of mail boats operated by Dutch firms in what was then the Dutch East Indies, and claim that these indicate the letter could not have left Ternate sooner than about 5 April. It was carried via Jakarta, Singapore and Sri Lanka, and then overland from Suez to Alexandria. “We found that Wallace’s essay travelled across Egypt on camels”, says van Wyhe. “That was not known before, and it’s a rather charming image to think of this essay that will change the world swaying on the back of a camel for two days.”
The researchers say that the letter was then carried by boat to Gibraltar and Southampton in England, arriving on 16 June. It was taken by train to London and then on to Down to arrive on the morning of the 18th.
“I'm not sure there really ever has been a controversy over this within the history of science community”, says evolutionary biologist John Lynch of Arizona State University, who has written extensively on cultural responses to evolutionary theory. He says that the claims of plagiarism “have had marginal, if any, influence – the evidence has failed to convince most readers.”
The story “has always seemed unlikely to me given what we know about Darwin’s generally kind and tolerant personality”, agrees geneticist Steve Jones of University College London, whose 1999 book Almost Like a Whale was an updated version of the Origin of Species.
But van Wyhe says that “these conspiracy stories are very widely believed. Thousands of people have heard that something fishy happened between Darwin and Wallace. I hear these stories very often when I give popular lectures.”
Historian of science James Lennox of the University of Pittsburgh says that “this is an important piece of evidence for Davies’ claim of deceit on Darwin’s part. I think that claim has been undermined.”
But Lennox adds that he doesn’t think it will close the ‘controversy’. “For a variety of different motives, there will, I fear, always be people who see it as their mission to attack Darwin's character as a way of undermining his remarkable scientific achievements.”
References
1. Van Wyhe, J. & Rookmaaker, K. Biol. J. Linn. Soc. 105, 249-252 (2012).
Saturday, December 10, 2011
Creativ thinking
Here’s my latest Critical Scientist column in the Guardian, published today. It now seems that this back page of the Saturday issue is going to be reshuffled for various reasons, so it isn’t clear what the column’s fate will be in the New Year. Enjoy/criticize/excoriate it while you can.
_______________________________________________________________________
The kind of idle pastime that might amuse physicists is to imagine drafting Einstein’s grant applications in 1905. “I propose to investigate the idea that light travels in little bits”, one might say. “I will explore the possibility that time slows down as things speed up” goes another. Imagine what comments those would have elicited from reviewers for the German Science Funding Agency, had such a thing existed. Instead, Einstein just did the work anyway while drawing his wages as a Technical Expert Third Class at the Bern Patent Office. And that’s how he invented quantum physics and relativity.
The moral seems to be that really innovative ideas don’t get funded – indeed, that the system is set up to exclude them. To wring research money from government agencies, you have to write a proposal that gets assessed by anonymous experts (“peer reviewers”). If its ambitions are too grand or its ideas too unconventional, there’s a strong chance it’ll be trashed. So does the money go only to ‘safe’ proposals that plod down well-trodden avenues, timidly advancing the frontiers of knowledge a few nanometres?
There’s some truth in the accusation that grant mechanisms favour mediocrity. After all, your proposal has to specify exactly what you’re going to achieve. But how can you know the results before you’ve done the experiments, unless you’re aiming to prove the bleeding obvious?
To address this complaint, the US National Science Foundation has recently announced a new scheme for awarding grants. From next year – if Congress approves – the Creative Research Awards for Transformative Interdisciplinary Ventures (CREATIV – oh, I get it) will have $24 million to give to “unusually creative high-risk/high-reward interdisciplinary proposals.” In other words, it’s looking for really new ideas that might not work, but which would be massive if they do.
As science funding goes, $24m is peanuts – the total NSF pot is $5.5 bn. And each application is limited to $1m. But this is just a pilot project; more might follow. The real point is that CREATIV has been created at all, for it could be interpreted as an admission that the NSF has previously failed to support innovation. Needless to say, that’s not how NSF would see it. They would argue that the usual funding mechanisms have blind spots, especially when it comes to supporting research that crosses disciplinary boundaries.
This is a notorious problem. Talking up the importance of “interdisciplinarity” is all the rage, but most funds are still marshalled into conventional boundaries – medicine, say, or particle physics – so that if you have an idea for how to apply particle physics to medicine, each agency directs your grant request to the other one.
The difficulty is all the worse if you want to tackle a really big problem. To make a new drug you need chemists; to tackle Africa’s AIDS epidemic you will require not only drugs but the expertise of epidemiologists, sociologists, virologists and much else. The buzzword for really big solutions and technologies is “transformative” – the Internet is transformative, Viagra is not. This big-picture thinking is in vogue; the European Commission’s Future Emerging Technologies programme is promising to award €1 bn (now you’re talking) next year for transformational projects under the so-called Flagship Initiative.
Are schemes like CREATIV the way forward? Because the funding will be allocated by individual project managers rather than risking the conservatism of review panels, it could fall prey to cronyism. And who’s to say that those project managers will be any more broad-minded or perceptive? In the end, it’s a Gordian knot: only experts can properly assess proposals, but by definition their vision tends to be narrow. It’s good that CREATIV acknowledges the problem, but it remains to be seen if it’s a solution. Like movie-making or publishing, it’ll need to accept that there will be some duds. It’s a shame there aren’t more scientific problems that can be solved with pen, paper, and a patent clerk’s pay packet.
Saturday, December 03, 2011
Science criticism
My first of an undisclosed number of columns in the Saturday Guardian has appeared today. And got a shedload of online feedback.
I’m grateful for all these comments, good and bad (and indifferent), for giving me some sense of how the aims of this column are being perceived. It would be as premature for me to tell you what it is going to do at this point as it is for anyone else to judge it. This is an experiment. We don’t know yet quite where it will go (that’s how it is with experiments, right?). No doubt feedback will have an influence on that. But I think I’d better make a few things more clear than I could in the piece itself:
1. This isn’t going to be a science-knocking column. Wouldn’t that be bizarre? Like appointing a theatre critic who hates theatre. (Someone, I am sure, will now come up with a few candidates for that description.) Theatre, art and literary critics almost inevitably think that theatre, art and literature are the most wonderful things: essential, inspiring, and deeply life-affirming. It is precisely caring strongly about their subject that constitutes a necessary (if not sufficient) qualification for the job. Well, ditto here.
2. I’m not going to be peer-reviewing anyone’s work. It’s interesting that some of the comments still seem to evince a notion that this is the full extent of the meaningful evaluation of a piece of scientific work. Look at what Dorothy Nelkin brought to the discussion about DNA and genetics – in my view, important questions that were pretty much off the radar screen of most scientists working on those things. Sadly, the Guardian hasn’t got Dorothy Nelkin, though – it’s got me. She would never have done it for this kind of money.
3. But it’s not necessarily about bringing scientists to task for what they do or don’t do or say – at least, not uniquely. I like the three definitions of “critic” in the Free Dictionary:
i. One who forms and expresses judgments of the merits, faults, value, or truth of a matter. [Mostly what peer reviewers are supposed to do, yes?]
ii. One who specializes especially professionally in the evaluation and appreciation of literary or artistic works: a film critic; a dance critic.
iii. One who tends to make harsh or carping judgments; a faultfinder. [Mostly bores and climate sceptics, yes?]
So (ii) then: I don’t see why it’s just ‘literary or artistic works’ that deserve ‘evaluation and appreciation’. Remember that critics praise as well as pillory (and in my view, the best ones always make an effort to find what is valuable in a work). The critic is also there to offer context, draw analogies and comparisons, point to predecessors. (The sceptic might here scoff “Oh yeah, very valuable in science – the predecessors of E=mc2?” To which my answer is here). I also feel that the best critics don’t try to tell you what to think, but just suggest things it might be worth thinking about.
4. Some of these folks will be disappointed – in particular, those who seem to think that the column is going to be concerned mainly with highlighting why science has lost its way, or ignores deep philosophical conundrums, or fails in its social duty. I really hope to be able to touch on some of those issues (that is, to consider whether they’re really true), and I have much sympathy with some of what Nicholas Maxwell has written. But my themes will generally be considerably less grand and more specific, perhaps even parochial. Weekly critics tend to review what’s just opened at the Royal Court, not the state of British theatre, right? Besides, it’s important that I’m realistic about what can be attempted (let alone achieved) in this format. Remember that this is a weekly column in a newspaper, not an academic thesis. I have 600 words, and then you get Lucy Mangan.
All we want to try for, really, is a somewhat different way of writing about science: not merely explaining who did what and why it will transform our lives (which of course it mostly doesn’t), but writing about science as something with its own internal social dynamics, methodological dilemmas, cultural pressures and drivers, and as something that reflects and is reflected by the broader culture. That’s what I have generally attempted to do in my books already. And I want to make it very clear that I don’t claim any great originality in taking this perspective. Many writers have done it before, and doubtless better. It’s just that there is rarely a chance to discuss science in this way in newspapers, where it is all too often given its own little geeks’ ghetto. Indeed, Ben Goldacre’s Bad Science was one of the first efforts that successfully broke that mould. What’s new(ish) is not the idea but the opportunity.
Friday, December 02, 2011
Diamond vibrations neither here nor there
Here’s the pre-edited version of my latest news story for Nature online.
_________________________________________________
Two objects big enough for the eye to see have been placed in a weirdly connected quantum state.
A pair of diamond crystals has been linked by quantum entanglement – Einstein’s ‘spooky action at a distance’ – by researchers working in Oxford, Canada and Singapore.
This means that vibrations detected in the crystals could not be meaningfully assigned to one or other of them: both crystals were simultaneously vibrating and not vibrating.
Quantum entanglement is well established between quantum particles such as atoms at ultra-cold temperatures. But like most quantum effects, it doesn’t usually survive either at room temperature or in objects large enough to see with the naked eye.
The team, led by Ian Walmsley of Oxford University, found a way to overcome both those limitations – demonstrating that the weird consequences of quantum theory don’t just apply at very small scales.
The result is “clever and convincing” according to Andrew Cleland, a specialist in the quantum behaviour of nanometre-scale objects at the University of California at Santa Barbara.
Entanglement was first mooted by Albert Einstein and two of his coworkers in 1935, ironically as an illustration of why quantum theory could not tell the whole story about the microscopic world.
Einstein considered two quantum particles that interact with each other so that their quantum states become interdependent. If the first particle is in state A, say, then the other must be in state B, and vice versa. The particles are then said to be entangled.
Until a measurement is made on one of the particles, its state is undetermined: it can be regarded as being in both states A and B simultaneously, known as a superposition. But a measurement ‘collapses’ this superposition into just one state or the other.
The trouble is, Einstein said, that if the particles are entangled then this measurement determines which state the other particle is in too – even if they have become separated by a vast distance. The effect of the measurement is transmitted instantaneously to the other particle, via what Einstein called ‘spooky action at a distance’. That can’t be right, he argued.
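The perfect correlation that troubled Einstein can be mimicked at the level of measurement statistics with a toy simulation. The sketch below is entirely hypothetical (nothing to do with the experiment’s own analysis), and it reproduces only the outcome statistics in one fixed basis – the genuinely quantum part, the superposition that exists before measurement, has no classical counterpart:

```python
import random

def measure_entangled_pair():
    # The entangled state guarantees opposite outcomes: the joint result
    # is (A, B) or (B, A), each with probability 1/2. Before measurement,
    # neither particle's state is determined.
    return random.choice([("A", "B"), ("B", "A")])

counts = {"AB": 0, "BA": 0}
for _ in range(10_000):
    first, second = measure_entangled_pair()
    # Measuring the first particle immediately fixes the second,
    # however far away it is -- Einstein's 'spooky action at a distance'.
    counts[first + second] += 1

print(counts)  # both joint outcomes occur, roughly 50/50; never AA or BB
```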
But it is, as countless experiments have since shown. Quantum entanglement is not only real but could be useful. Entangled photons of light have been used to transmit information in a way that cannot be intercepted and read without that being detectable – a technique called quantum cryptography.
And entangled quantum states of atoms or light can be used in quantum computing, where the superposition states allow much more information to be encoded in them than in conventional two-state bits.
But superpositions and entanglement are usually seen as delicate states, easily disrupted by random atomic jostling in a warm environment. This scrambling also tends to happen very quickly if the quantum states contain many interacting particles – in other words, for larger objects.
Walmsley and colleagues got round this by entangling synchronized atomic vibrations called phonons in diamond. Phonons – wavelike motions of many atoms, rather like sound waves in air – occur in all solids. But in diamond, the stiffness of the atomic lattice means that the phonons have very high frequencies and energy, and are therefore not usually active even at room temperature.
The researchers used a laser pulse to stimulate phonon vibrations in two crystals 3 mm across and 15 cm apart. They say that each phonon involves the coherent vibration of about 10^16 atoms, corresponding to a region of the crystal about 0.05 mm wide and 0.25 mm long – large enough to see with the naked eye.
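The quoted atom count can be sanity-checked with a back-of-envelope calculation, using assumed textbook values rather than figures from the paper, and guessing the unstated third dimension of the vibrating region:

```python
# Assumed values (not from the paper): diamond density ~3.5 g/cm^3,
# carbon molar mass ~12 g/mol.
AVOGADRO = 6.022e23                          # atoms per mole

atoms_per_cm3 = 3.5 / 12.0 * AVOGADRO        # ~1.8e23 atoms per cm^3

# Treat the region as roughly 0.05 mm x 0.05 mm x 0.25 mm, taking the
# unstated depth to match the quoted width.
volume_cm3 = 0.005 * 0.005 * 0.025           # ~6e-7 cm^3

atoms = atoms_per_cm3 * volume_cm3
print(f"{atoms:.1e}")  # ~1e17, within an order of magnitude of 10^16
```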
There are three crucial conditions for getting entangled phonons in the two diamonds. First, a phonon must be excited with just one photon from the laser’s stream of photons. Second, this photon must be sent through a ‘beam splitter’ which directs it into one crystal or the other. If the path isn’t detected, then the photon can be considered to go both ways at once: to be in a superposition of trajectories. The resulting phonon is then in an entangled superposition too.
“If we can’t tell from which diamond the photon came, then we can’t determine in which diamond the phonon resides”, Walmsley explains. “Hence the phonon is ‘shared’ between the two diamonds.”
The third condition is that the photon must not only excite a phonon but also give up part of its energy as a lower-energy photon, called a Stokes photon, which signals the presence of the phonon.
“When we detect the Stokes photon we know we have created a phonon, but we can’t know even in principle in which diamond it now resides”, says Walmsley. “This is the entangled state, for which neither the statement ‘this diamond is vibrating’ nor ‘this diamond is not vibrating’ is true.”
To verify that the entangled state has been made, the researchers fire a second laser pulse into the two crystals to ‘read out’ the phonon, from which the pulse draws extra energy. All the necessary conditions are satisfied only very rarely during the experiment. “They have to perform an astronomical number of attempts to get a very finite number of desired outcomes”, says Cleland.
He doubts that there will be any immediate applications, partly because the entanglement is so short-lived. “I am not sure where this particular work will go from here”, he says. “I can’t think of a particular use for entanglement that lasts for only a few picoseconds [10^-12 s].”
But Walmsley is more optimistic. “Diamond could form the basis of a powerful technology for practical quantum information processing”, he says. “The optical properties of diamond make it ideal for producing tiny optical circuits on chips.”
1. K. C. Lee et al., Science 334, 1253-1256 (2011).
Thursday, December 01, 2011
Beautiful labs
Here is my latest Crucible column for the December issue of Chemistry World.
_________________________________________________________________
Fresh from visiting some science departments in China, I figure that, in appearance, these places don’t vary much the world over. They have the same pale corridors lined with boxy offices or neutral-hued, cluttered lab spaces; the same wood-clad lecture theatres with their raked seating and projection screens (few sliding blackboards now survive); the same posters of recent research lining the walls. They are unambiguously work places: functional, undemonstrative, bland.
Yet people spend their working lives here, day after day and sometimes night after night. Doesn’t all this functionalist severity and gloom stifle creativity? Clearly it needn’t, but increasingly we seem to suspect that conducive surroundings can offer stimulus to the advancement of knowledge. When the Wellcome Wing of the biochemistry department at Cambridge was designed and built in the early 1960s, its rectilinear modernist simplicity realised in concrete and glass was merely the order of the day, and celebrated by some (notably the influential architectural critic Nikolaus Pevsner) for its precision [1]. Today, stained and weathered, it fares less well, engendering that feeling I get from my old copy of Cotton & Wilkinson that learning chemistry is a dour affair.
Yet no longer are labs and scientific institutions built just to place walls around the benches and fume cupboards. Increasingly, for example, their design takes account of how best to encourage researchers to engage in informal discussions over coffee: comfy seating, daylight and blackboards are supplied to lubricate the exchanges. The notion that all serious work has to take place out of sight behind closed doors has yielded to the advent of open atria and glass walls, exemplified by the new biochemistry laboratory at Oxford, designed by Hawkins/Brown, which opened three years ago at a cost of nearly £50m. Not only does this space take its cue from the open-plan office, but it also follows the corporate habit of adorning the interior with expensive artworks, such as the flock of resin birds that hang suspended in the atrium. Some might grumble that the likes of Hans Krebs and Dorothy Hodgkin did not seem to need art around them to think big thoughts – but the department’s Mark Sansom has eloquently defended the value of the project’s artistic component thus: “if you have a greater degree of visual literacy, you reflect more on both the way you represent things, and also the way that may limit the way you think about them” [2]. Besides, where would you rather work?
The watchword for this new approach to laboratory design is accessibility: physically, visually, intellectually. Jonathan Hodgkin in Oxford’s biochemistry department explains that, in making art a part of the new building’s design, “part of our aim is to humanize the image of science for the public” [2]. Similarly, Terry Farrell, who was behind the dramatic (and controversial) redesign of the Royal Institution in London, says that his aim was to reconfigure the place “not as a museum but as a living, working, lively and engaging institution, which will inspire an enthusiasm for science in future generations” [3]. Even someone like me who loved the dusty, crammed warren that was the old RI has to admire the result, although the compromises to the research lab space contributed to the internal tensions of the project.
Or take the striking glass facades of the new Frick Chemistry Laboratory at Princeton, whose chief architect Michael Hopkins says that "We wanted to inspire the younger students by letting them see the workings of the department.” A common theme is to use the design to echo the science, as for example in the double-helical staircase of the European Molecular Biology Laboratory’s Advanced Training Centre in Heidelberg.
However, not all beautiful labs are new ones, a point illustrated in a recent list of “the 10 most beautiful college science labs” compiled by the US-based OnlineColleges.net. While some of these have been selected for their sleek contemporary feel – the Frick building is one, and the stunning new Physical Sciences Building that abuts Cornell’s neoclassical Baker Laboratory is another – others are more venerable. Who could quibble, for example, with the inclusion of Harvard’s Mallinckrodt Laboratory, an imposing neoclassical edifice built in the 1920s and home to the chemistry department? And then there is the Victorian gothic of the ‘Abbot’s kitchen’ in the Oxford inorganic chemistry labs, a delightful feature that I shamefully overlooked on many a tramp along South Parks Road to coax more crystals out of solution. Or the ivy-coated mock-gothic of Chicago’s Ryerson Physical Laboratory, where Robert Millikan deduced the electron’s charge.
Among these ‘beautiful labs’, chemistry seems to be represented disproportionately. Have we chemists perhaps a stronger aesthetic sensibility?
References
1. M. Kemp, Nature 395, 849 (1998).
2. G. Ferry, Nature 457, 541 (2009).
3. T. Farrell, Interiors and the Legacy of Postmodernism (Laurence King, London, 2011).
Monday, November 28, 2011
Building a better foam
I have a story on Nature’s blog about a nice new paper on ‘minimal foams’, which finally reports experimental evidence for the Weaire-Phelan foam, proposed in 1994 as a more energetically favourable structure than Kelvin’s, postulated in 1887. Denis Weaire has written a nice (but goodness me, pricey) account of this so-called Kelvin Problem. And I get to show last year’s holiday snaps in Beijing…
_________________________________________________________________
Physicists working at Trinity College in Dublin, Ireland, have finally made the perfect foam. Whereas most Dubliners might consider that to be the head of a pint of Guinness, Denis Weaire and his coworkers have a more sophisticated answer.
‘Perfect’ here means the lowest-energy configuration of packed bubbles of equal size. This is a compromise. Making a soap film costs energy proportional to the film’s surface area. But the many interlocking faces of an array of polyhedral bubbles in a foam also have to be mechanically stable. The Belgian scientist J. A. P. Plateau calculated in the nineteenth century that three soap films are mechanically stable when they meet at angles of 120°, whereas four film edges meet at the tetrahedral angle of about 109.5°.
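Both of Plateau’s angles follow from the symmetry of equal surface tensions, and are quick to verify numerically (a trivial check of the stated values, not part of the original post):

```python
from math import acos, degrees

# With equal surface tensions, force balance where three films meet
# along an edge gives arccos(-1/2); symmetry of four edges meeting at
# a point gives the tetrahedral angle arccos(-1/3).
three_films = degrees(acos(-1 / 2))
four_edges = degrees(acos(-1 / 3))
print(three_films)  # 120.0
print(four_edges)   # ~109.47
```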
So what bubble shape minimizes the total surface area while (more or less) satisfying Plateau’s rules? That’s essentially the same as asking what shape balloons, or any squashy spheres, will adopt when squeezed together. Scientists, among them the French naturalist Georges Buffon, have pondered that for centuries, using lead shot and garden peas. The Irish scientist Lord Kelvin thought he had the answer in 1887: the ‘perfect foam’ is one in which the cells are truncated octahedra, with eight hexagonal faces and six square ones – provided that the faces are a little curved to better fit Plateau’s rules.
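To get a feel for why Kelvin’s cell is good, one can compare dimensionless surface costs S/V^(2/3) for the flat-faced truncated octahedron and for a sphere, the unbeatable limit for a single isolated bubble. This is only a rough sketch using standard geometric formulas – the real Kelvin foam has slightly curved faces, which lowers its cost a little, and evaluating the Weaire-Phelan structure needs numerical software such as Surface Evolver:

```python
from math import sqrt, pi

def surface_cost(surface, volume):
    # Dimensionless surface-to-volume cost: lower means less film area
    # enclosing the same volume.
    return surface / volume ** (2 / 3)

# Truncated octahedron (Kelvin's cell) with unit edge length:
# surface = 6 squares + 8 regular hexagons = 6 + 12*sqrt(3),
# volume = 8*sqrt(2).
kelvin = surface_cost(6 + 12 * sqrt(3), 8 * sqrt(2))

# Sphere, for comparison:
sphere = surface_cost(4 * pi, 4 * pi / 3)

print(round(kelvin, 3), round(sphere, 3))  # ~5.315 vs ~4.836
```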
Kelvin’s solution was thought to be optimal for a long time, but there was no formal proof. Then in 1994 Weaire and his colleague Robert Phelan found a better one. It wasn’t so elegant – the structure had a repeating unit of eight polyhedra, six of them with 14 faces and two with 12, all with hexagons and imperfect pentagons and again slightly curved faces (see first pic above). This has 0.3 percent less surface area than Kelvin’s foam.
But does it really exist? The duo found no definitive evidence of their ideal foam in experiments (conducted with washing-up liquid). Now there is. The key was to find the right container. Normal containers have flat walls, which the Weaire-Phelan (WP) foam won’t sit comfortably against. But physicist Ruggero Gabbrielli from the University of Trento figured that a container with walls shaped to fit the WP foam might encourage it to form. He has collaborated with Weaire and his colleagues, along with mathematician Kenneth Brakke at Susquehanna University in Pennsylvania, to design and make one out of plastic.
When the researchers filled this container with equal-sized bubbles, they found that the six layers of about 1500 bubbles were ordered into the WP structure (see second pic above). They describe their results in a paper to be published in Philosophical Magazine Letters.
This isn’t actually the first time that the WP foam has been made. But the previous example was built by hand, one cell at a time, from girders and plastic sheets, to form the walls of the iconic ‘Water Cube’ aquatics centre built for the Beijing Olympics (see third pic above).
Christmas reading
John Whitfield’s new book about reputation, People Will Talk, is out, and I am, to be honest, envious – and I haven’t even read it yet. First, John has picked such a great and timely topic. And second, I know that he will have covered it brilliantly. Yes, this is a shameless plug for my pals, but I really want John’s book to get the attention it deserves, and not get lost among all the pre-Christmas celebrity memoirs.
And while I’m plugging, look out for the debut novel Random Walk by Alexandra Claire, published by Gomer. I’m only part of the way through, but enjoying it for much more profound reasons than the fact that it quotes from my Critical Mass at the beginning (and not just because I’m a Cymruphile either).
Thursday, November 24, 2011
The sun and moon in Italy
Now that it’s happened, I can say that there’s definitely something uniquely challenging about having your fiction translated. The Sun and Moon Corrupted has become, in Italian, La Città del sole e della Luna (The City of the Sun and the Moon). I can live with that change, not least because I like the resonance with Tommaso Campanella’s visionary early seventeenth-century work The City of the Sun, which becomes somewhat apt. But how have the voices translated? In particular (this Italian illiterate wonders), how have they dealt with the eccentric English of Karl Neder and his fellow Eastern Europeans? How does one translate the Brixton riots and the Wapping news era – the whole oppressive gloom of the middle Thatcher years – to Italians?
Whatever the case, Edizioni Dedalo have done a nice job on the superficial level to which I am constrained: I like the faux-naif cover. I just hope there’s still enough disposable income in Italy for people to read it.
Friday, November 18, 2011
Surprise prize
The Royal Society Winton Prize for science books was awarded last night. I have written a piece on it for Prospect’s blog. Here it is for convenience.
_________________________________________________________________
Part of the pleasure of the presentation of the Royal Society Winton Prize for Science Books on Thursday night was that it was happening at all. Having lost its corporate sponsor (Rhône-Poulenc, which later merged into Aventis) after 2006, the prize was nobly supported by the Royal Society alone for the past four years but looked increasingly in danger of folding. Now it has been rescued by the British investment firm Winton Capital Management, which has agreed to back it for five years. So popular science still has its Man Booker.
The winning title, Gavin Pretor-Pinney’s The Wavewatcher’s Companion (Bloomsbury), was a surprise. In both cover and content, it looks like a sequel to Pretor-Pinney’s previously successful The Cloudspotter’s Guide, but it won over the judges with what Richard Holmes, chair of the judging panel, called “old-fashioned charm and wit”. Like many of the best science books, it doesn’t at first seem to be about science at all, but is a celebration of the ubiquity of waves of all sorts, from sonar to football crowds.
‘Wit’ seems to have been a valuable feature. Holmes commented on how often humour was employed in the submitted books. That’s encouraging – not because science books have previously been dour, but because they have often had a tendency towards leaden adolescent humour of the “imagine finding that in your sandwich!” variety. This sort of thing wouldn’t have passed muster with the erudite Holmes, whose The Age of Wonder (2009 winner of the prize) was, among many other praiseworthy things, a model of the wry footnote.
But another issue bothered some of the attendees. As the six white male shortlisted authors sat on the stage, broadcaster Vivienne Parry asked “Where are all the girls?” (Tucked up in bed, one was tempted to reply, but you could see her point.) The (typically gender-balanced) judges confessed that this had been a serious concern, but one that they could do nothing about. It’s even worse when you look at the prize’s history: only one woman has ever won it (anthropologist Pat Shipman in 1997), and then as a co-author. Parry is herself one of the very few women to have been shortlisted.
A glib answer is that this just reflects the lack of women in science. But that isn’t the case for science journalism and publishing. It is mercifully free of the male-domination still evident in the lab: at least half of the editorial staff of Nature are women, and this is fairly representative. Plenty of female science writers and scientists have authored books. And the imbalance is all the more troubling when compared to the strong female showing in other non-fiction literary awards such as the Samuel Johnson. So “what’s that about?”, asked science journalist Ian Sample, also on the science book prize shortlist, in response to Parry’s question. No one seemed to know.
Saturday, November 05, 2011
Identity crisis
This is not me, though I kind of wish it was.
Everyone (except, I am willing to bet, my daughters) has namesakes, but there’s genuine scope for confusion here, as I’ve just discovered. There’s also a young medical writer called Philip Ball. I think someone should book us all to talk on the same platform.
PS Funnily enough, I've just discovered that it goes further. This nice discussion of my appearances at the Edinburgh Book Festival claims to direct the reader to my "surprising" web site - which is probably even more of a surprise when, with a double "l" in my name, it in fact takes you here.
Tuesday, November 01, 2011
Who is human?
Back from two weeks in China, with some things to catch up on. I have a feature article on graphene in the latest issue of the BBC's science magazine Focus - not available online, sadly. And an article on pattern and shape in the glossy lifestyle & sports magazine POC - also not (yet) available on the web, but I'll put the piece on my website shortly. And here is a book review I did for the Observer on 7 October.
__________________________________________________________
What It Means to be Human:
Reflections from 1791 to the Present
Joanna Bourke
Virago, 2011
ISBN 978-1-84408-644-3
“Are women animals?”, asked a correspondent to the Times in 1872 who described herself only as “An Earnest Englishwoman.” Her point was not that women should be regarded as less than fully human, but that they already were – to such a degree that they would have more rights if they could at least be granted the same status as cats, dogs and horses. The law could be more punitive to a man who ill-treated his horse than to one who murdered his wife.
Inmates at Guantánamo Bay made precisely the same case. Noticing a dog in an air-conditioned kennel next to Camp X-Ray, a British detainee said to the guards “I want his rights” – only to be told “That dog is a member of the US army.” Clive Stafford Smith, representing the inmates, declared that “it would be a huge step for mankind if the judges gave our clients the same rights as the animals.”
As these cases illustrate, historian Joanna Bourke’s survey is not so much about the boundaries of humankind as about the way in which some humans have systematically denied full personhood to others, particularly women, children and other (generally non-European) races and cultures. She would have helped her argument by keeping that distinction more clear. When for example she remarks apropos of slavery that it questions “who is truly human and who is merely ‘property’”, only to follow with the suggestion that “the claim that some humans are property rather than true ‘persons’ is still rampant”, the confusion muddies the point.
Although the forms of denigration that Bourke considers are certainly ‘dehumanizing’, they don’t usually challenge biological or species identity. Rather, they erect hierarchies of human worth, development, and supposed intellectual and spiritual capacity. All the same, her well-made thesis is that this tendency has commonly pushed the oppressed group towards the realm of beasts, whether via the bird-like ‘twittering’ of women or the ‘simian’ countenance of African slaves.
__________________________________________________________
What It Means to be Human:
Reflections from 1791 to the Present
Joanna Bourke
Virago, 2011
ISBN 978-1-84408-644-3
“Are women animals?”, asked a correspondent to the Times in 1872 who described herself only as “An Earnest Englishwoman.” Her point was not that women should be regarded as less than fully human, but that they already were – to such a degree that they would have more rights if they could at least be granted the same status as cats, dogs and horses. The law could be more punitive to a man who ill-treated his horse than to one who murdered his wife.
Inmates at Guantánamo Bay made precisely the same case. Noticing a dog in an air-conditioned kennel next to Camp X-Ray, a British detainee said to the guards “I want his rights” – only to be told “That dog is a member of the US army.” Clive Stafford Smith, representing the inmates, declared that “it would be a huge step for mankind if the judges gave our clients the same rights as the animals.”
As these cases illustrate, historian Joanna Bourke’s survey is not so much about the boundaries of humankind as about the way in which some humans have systematically denied full personhood to others, particularly women, children and other (generally non-European) races and cultures. She would have helped her argument by keeping that distinction clearer. When, for example, she remarks apropos of slavery that it questions “who is truly human and who is merely ‘property’”, only to follow with the suggestion that “the claim that some humans are property rather than true ‘persons’ is still rampant”, the confusion muddies the point.
Although the forms of denigration that Bourke considers are certainly ‘dehumanizing’, they don’t usually challenge biological or species identity. Rather, they erect hierarchies of human worth, development, and supposed intellectual and spiritual capacity. All the same, her well-made thesis is that this tendency has commonly pushed the oppressed group towards the realm of beasts, whether via the bird-like ‘twittering’ of women or the ‘simian’ countenance of African slaves.
It is an ugly spectacle to see with what insufferable smugness and pseudoscientific justification these judgements have been repeatedly made by white Western males. And of course it would be nonsense to pretend that we all know better now. Yet there is something a little paralysing about this detailed exposé of the obviously pernicious. It is not to belittle the evils of slavery, racism, female oppression and the Holocaust to say that they are, in themselves, scarcely news.
There is also a strong risk of presentism in all this: judging the past as if it were the present. While it is no response to protest that no one knew any better in those days (not least because plenty of women and slaves certainly did), one is left wondering how to contextualize Darwin’s references to “savages… on [a] par with Monkeys” and his chauvinistic hierarchy of races relative to, say, Thackeray’s or Carlyle’s hysterical aversion to African-Americans. It is surely an oversight that nothing is made of Darwin’s anti-slavery motivation in showing that humankind is truly one species, given how thoroughly this was recently documented by Adrian Desmond and James Moore.
The kind of exclusivity that Bourke explores is at least as old as slavery itself, which occasionally means that one feels the absence of the long view. The nastiness and bigotry on display here would be found in spades in the Middle Ages or ancient Greece, albeit differently nuanced. Bourke shows how fears of animalization in the use of animal tissue in medicine have remained more or less unchanged from Jenner’s cowpox vaccinations in 1796 to xenografts of animal organs today. But it seems a shame not to consider the same themes in Thomas Shadwell’s play The Virtuoso (1676), where he satirized the animal-to-human transfusion experiments of the Royal Society. And when one critic of vaccination worried that it might induce ladies to “receive the embraces of the bull”, there are significant echoes of the legendary coupling of Pasiphae and Minos’s beautiful bull to produce the monstrous Minotaur.
But within the scope that Bourke has set herself, she has found some extraordinary material, such as the rejuvenation experiments of Serge Voronoff in the 1920s. These involved placing slices of simian testicle inside a man’s scrotum under local anaesthetic. An analogous anti-aging procedure for women was harder to arrange, but in any event deemed less important (not everything stays the same, then). Women did, however, worry about receiving the advances of septuagenarians whose renewed sexual vigour was said to be “abnormal both in degree and character”.
No wonder it is an embarrassment to endocrinologists that this is how their field began, although I didn’t need to be told that twice in the same chapter. Such repetition is not the only evidence of loose editing. Lapses into the gnomic wink-wink traits of literary theory are mercifully rare, but to describe the molecular biologist James Watson as a “leading Darwin scholar” is eccentric at best. Perhaps that is of a piece with the neglect of modern genomics, the most egregious omission in the book.
Yet if the narrative is patchy, it is more than a collection of historical curiosities. Bourke’s critique of the concept of human rights opens an important debate on a complacent ideal, while her cross-examination of animal welfare should give all parties pause for thought. And she is quite right to say that modern biomedical science genuinely does now complicate the definition of humanity in ways that we are ill equipped, ethically and philosophically, to confront.
Wednesday, October 05, 2011
Quantum renaissance
Here’s a piece I wrote for the latest issue of Prospect, where it is published with a few small changes. (At least, it started out along these lines - there were numerous iterations, and I somewhat lost track.)
__________________________________________
Quantum mechanics is more than a hundred years old, but we still don’t understand it. Recent years have, however, seen a fresh enthusiasm for exploring the questions about what quantum theory means that were swept under the rug by its founders. Advances in experimental methods make it possible to test ideas about weird and counter-intuitive quantum effects and how they give rise to an apparently different set of physical laws at the everyday scale—that is, to examine in what sense things exist.
In 1900 the German physicist Max Planck suggested that light – a form of electromagnetic wave – consists of tiny, indivisible packets of energy. These particles, called photons, are the “quanta” of light. Five years later Albert Einstein showed how this quantum hypothesis explained the way light kicks electrons out of metals – the photoelectric effect (it was for this, not the theory of relativity, that he won his Nobel). The early pioneers of quantum theory quickly discovered that the seemingly innocuous idea that energy is grainy has bizarre implications. Objects can be in many places at once. Particles behave like waves and vice versa. The mere act of witnessing an event alters what is witnessed. Perhaps the quantum world is constantly branching into multiple universes.
As long as you just accept these paradoxes, quantum theory works fine, and so scientists routinely adopt the approach memorably described by Cornell physicist David Mermin as “shut up and calculate.” They use quantum mechanics to calculate everything from the strength of metal alloys to the shapes of molecules. Routine application of the theory underpins the miniaturization of electronics, medical MRI scanning and the development of solar cells, to name just a few burgeoning technologies. Quantum mechanics is one of the most reliable theories in all of science: its predictions of how light and matter interact match experimental measurements to the eighth decimal place.
But the question of how to interpret the theory — what it tells us about the physical universe—was never resolved by founders such as Niels Bohr, Werner Heisenberg and Erwin Schrödinger. Famously, Einstein himself was unhappy about how quantum theory leaves so much to chance: it pronounces only on the relative probabilities of how the world is arranged, not on how things fundamentally are. Most physicists now accept something like Bohr and Heisenberg’s Copenhagen interpretation: there is no essential reality beyond the quantum description, nothing more fundamental and definite than probabilities. Bohr coined the notion of “complementarity” to express this need to relinquish the expectation of a deeper reality beneath the equations. If you measure a quantum object, you might find it in a particular state. But it makes no sense to ask if it was in that state before you looked. All that can be said is that it had a particular probability of being so. It’s not that you don’t “know,” but rather that the question has no physical meaning.
Einstein attacked this idea in a thought experiment in which two quantum particles were arranged to have interdependent states, whereby if one were aligned in one direction, say, then the other had to be aligned in the opposite direction. Suppose these particles are allowed to move many light years apart, and then you measure the state of one of them. Quantum theory insists that this instantly determines the state of the other. Again, it’s not that you simply don’t know until you measure. It is that the state of the particles is literally undecided until then. But this implies that the effect of the measurement is transmitted instantly, and therefore faster than light, across cosmic distances to the other particle. Surely that’s absurd, Einstein argued. But it isn’t. Experiments have now established beyond doubt that this instantaneous action at a distance, called entanglement, is real—that’s just how quantum mechanics is.
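To make the strangeness concrete, here is a toy numerical sketch (my illustration, not anything in the original article): it builds the standard two-particle “singlet” state of Einstein’s thought experiment and computes the measurement statistics quantum theory predicts. The function names and angle choices are my own. The punchline is that same-axis outcomes are perfectly anti-correlated, and that at well-chosen angle combinations the correlations are stronger than any locally pre-arranged “instruction set” carried by the particles could produce – Bell’s theorem, which is what the experiments establishing entanglement actually test.

```python
import numpy as np

# Singlet state |psi> = (|01> - |10>)/sqrt(2): if one spin is "up", the
# other must be "down", whichever axis both are measured along.
psi = np.array([0, 1, -1, 0]) / np.sqrt(2)

def spin_projector(theta, outcome):
    """Projector onto the +1 or -1 eigenstate of a spin measured at
    angle theta in the x-z plane (outcome is +1 or -1)."""
    if outcome == +1:
        v = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    else:
        v = np.array([-np.sin(theta / 2), np.cos(theta / 2)])
    return np.outer(v, v)

def joint_probability(theta_a, theta_b, a, b):
    """Probability that detector A (angle theta_a) reads a and detector B
    (angle theta_b) reads b, for the singlet state."""
    P = np.kron(spin_projector(theta_a, a), spin_projector(theta_b, b))
    return float(psi @ P @ psi)

# Same axis: perfect anti-correlation, however far apart the particles are.
print(joint_probability(0, 0, +1, +1))  # 0.0: never both "up"
print(joint_probability(0, 0, +1, -1))  # ~0.5

def correlation(theta_a, theta_b):
    """Average of the product of the two outcomes; equals -cos(theta_a - theta_b)."""
    return sum(a * b * joint_probability(theta_a, theta_b, a, b)
               for a in (+1, -1) for b in (+1, -1))

# Bell (CHSH) combination at the optimal angles: any local "pre-agreed
# answers" model is bounded by 2, but the singlet state exceeds it.
a1, a2, b1, b2 = 0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = (correlation(a1, b1) - correlation(a1, b2)
     + correlation(a2, b1) + correlation(a2, b2))
print(abs(S))  # ~2.83 > 2
```

The design point is that nothing here transmits a signal: each detector alone sees 50:50 outcomes, and only the *correlations*, compared after the fact, betray the entanglement.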
This is not an abstruse oddity. Quantum entanglement is being exploited in quantum cryptography, where a message is encoded in entangled quantum particles so that it is impossible to intercept and read the message without the tampering being detected. Entanglement is also being used in quantum computing, where the ability of quantum particles to exist in many states at once allows huge numbers of calculations to be conducted simultaneously, greatly accelerating the solution of certain mathematical problems. Although these technologies are still in early development, already there are signs of commercial interest. Earlier this year the Canadian company D-Wave Systems announced the first sale of a quantum computer to Lockheed Martin, while fibre-optic-based quantum cryptography was used (admittedly more for publicity than for extra security) to transmit ballot information in the 2007 Swiss parliamentary elections. “Discussions of relations between information and physical reality are now of interest not just because of foundational motivation but because such questions can have practical implications,” says Wojciech Zurek, a quantum theorist at the Los Alamos National Laboratory in New Mexico, US.
The quantum renaissance hinges mostly on experimental innovations. Until the 1970s, experiments on quantum fundamentals relied mostly on indirect inference. But now it’s possible to make and probe individual quantum objects with great precision. Many technological advances have contributed to this, among them the advent of laser light composed of photons of identical, precise energy, the ability to make measurements with immense precision in time, space and mass, methods to hold individual atoms in electrical and magnetic traps (the subject of the 1997 Nobel prize in physics), and the manipulation of light with fibre optics (motivated by developments in optical telecommunications). These same techniques have made quantum information technology, such as quantum cryptography and computing, viable.
Even if you accept the paradoxical aspects of quantum theory and just use the maths, the fundamental questions won’t go away. For example, if the act of measurement turns probabilities into certainties, how exactly does it do that? Physicists have long spoken of measurements “collapsing the wavefunction,” which expresses how the smeared-out, wave-like mathematical entity encoding all possible quantum states (the wavefunction) becomes focused into a particular place or state. But this was seen largely as metaphor. The collapse had to be imposed by fiat, since it didn’t feature in the mathematical theory. Many physicists, such as Roger Penrose of Oxford University, now believe that this collapse is a real physical event, rather like the radioactive decay of an atom. If so, it requires an ingredient that lies outside current quantum theory. Penrose argues that the missing element is gravity, and that we’d understand wavefunction collapse if only we could marry quantum theory to general relativity, one of the major lacunae in contemporary physics.
Physicist Dirk Bouwmeester of the University of California at Santa Barbara and his coworkers hope to test that idea by placing tiny mirrors in quantum ‘superposition’ states in which they are in several places at once, and then watching their wavefunctions collapse into a single location, triggered by a ‘measurement’ in which photons are reflected from them. Ignacio Cirac and Oriol Romero-Isart at the Max Planck Institute for Quantum Optics in Garching, Germany, recently outlined an experimental method for placing nanoscale objects containing thousands or millions of atoms into superposition states using light to trap and probe them, which would allow tests of such wavefunction-collapse theories.
Wavefunction collapse is part of the reason why the world doesn’t follow quantum rules all the way up. If it did, they wouldn’t seem counter-intuitive at all. It’s only because we’re used to our coffee cups being on our desk or in the dishwasher, but not both at once, that the behaviour of photons and electrons seems so unreasonable. At some scale, the quantum-ness of the microscopic world gives way to classical, Newtonian physics. Why? The generally accepted answer is a process called decoherence: crudely speaking, interactions of a quantum entity with its teeming environment act rather like a measurement, collapsing superpositions into a well-defined state. In this view, large objects obey classical physics not because of their size as such but because they contain more particles and thus experience more interactions, and so decohere almost instantly.
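The arithmetic behind that last sentence can be sketched in a few lines (a toy model of my own, following the standard textbook picture, not a calculation from the article): each environment particle that interacts with a superposed qubit carries away a partial record of its state, and the interference terms of the qubit’s density matrix shrink by the overlap of those records, so coherence decays exponentially with the number of interactions.

```python
import numpy as np

# Per-particle overlap |<E_0|E_1>| of the two environment "records";
# values near 1 mean each single interaction reveals very little.
overlap = 0.9

rho = np.array([[0.5, 0.5],
                [0.5, 0.5]])  # qubit in the superposition (|0>+|1>)/sqrt(2)

def decohere(rho, n_particles, overlap):
    """After n environment particles have interacted, the off-diagonal
    (interference) terms shrink by overlap**n; the populations on the
    diagonal are untouched."""
    r = rho.copy()
    r[0, 1] *= overlap ** n_particles
    r[1, 0] *= overlap ** n_particles
    return r

for n in (0, 10, 100):
    print(n, decohere(rho, n, overlap)[0, 1])
# 0 -> 0.5, 10 -> ~0.17, 100 -> ~1e-5: a handful of interactions barely
# matters, but a few hundred all but erase the superposition, which is
# why large, much-jostled objects look classical almost immediately.
```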
But that doesn’t fully resolve the issue—as shown by Schrödinger’s famous cat. In his thought experiment, Erwin Schrödinger imagined a cat that is poisoned, or not, depending on the outcome of a single quantum event, all of which is concealed inside a box. Since the outcome of the event is undetermined until observation collapses the wavefunction, quantum theory seemed to insist that, until the box is opened, the cat would be both alive and dead. Physicists used to evade that absurdity by insisting that somehow the bigness of the cat would bring about decoherence even without any observation, so that it would be either alive or dead but not both.
Yet one can imagine suppressing decoherence by creating a Schrödinger cat experiment that is well isolated from its surroundings. Then what? Ask old-school “shut up and calculate” physicists if the cat can be simultaneously alive and dead, and they are likely to assert that this will still be censored somehow or other. But younger physicists may well answer “why not?”
Perhaps we can simply do the experiment. The size of a cat still makes it nigh-on impossible to suppress decoherence, but a microscopic “cat” is more amenable to isolation. Cirac and Romero-Isart have proposed an experiment in which the cat is replaced by a virus, held in a light trap and coaxed by laser light into a quantum superposition of states. They say it might even work for tiny aquatic animals called tardigrades or water bears, which, unlike viruses, are unambiguously living or dead. It’s not obvious how to set up an experiment like Schrödinger’s, but simply placing a living creature in two places at once would be mind-boggling enough.
For whatever reason, the fact is that everyday objects are always in a single state and we can make measurements on them without altering that state: we have never sighted a Schrödinger cat. Physicists Anthony Leggett, a Nobel laureate at the University of Illinois, and Anupam Garg of Northwestern University, also in Illinois, call these conditions macrorealism. But is our classical world truly macrorealistic, or does it just look that way? Leggett and Garg showed in theory how to distinguish a macrorealistic world from one that isn’t. Such tests are even tougher to conduct than those on wavefunction collapse, says Romero-Isart, but he thinks that his proposed experiment on nano-objects could make a start.
Zurek, meanwhile, has developed a theory of how a fundamentally quantum world can look classical without really being macrorealistic. Whereas measuring a quantum system will alter it, classical systems can be probed without changing them: fifty people can read this text without thereby spontaneously altering the words. But in Zurek’s scheme, this may be true of quantum systems too if they can leave many imprints on their environment (which is actually what we observe). Each observer comes along and sees an imprint, and because each is the same, they all agree on what properties the system has. But only certain quantum states have this ability to create many identical imprints – in a sense these robust states are thus “selected” in a quasi-Darwinian way, and so out of all the possible quantum attributes of the system, these are the ones we ascribe to the object. It’s as though a ripe apple can create lots of redness imprints, which enable us to agree that it is red, while also possessing other quantum attributes that can’t be assigned a definite value in this way.
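Zurek’s picture of redundant imprints can be illustrated with a three-qubit toy model (my sketch, not his calculation): copy a qubit’s “robust” basis states into two environment qubits and then look at what each fragment knows. Each environment qubit ends up holding an identical record of the system, while the system’s own superposition is destroyed, the two faces of the same process.

```python
import numpy as np

a, b = 0.6, 0.8  # system qubit a|0> + b|1>, with a^2 + b^2 = 1

# State after the robust (pointer) basis has been copied into two
# environment qubits: a|000> + b|111>. Only |0> and |1> can be copied
# like this; a superposition cannot (no-cloning), which is what singles
# out the robust states in the first place.
psi = np.zeros(8)
psi[0], psi[7] = a, b
t = np.outer(psi, psi).reshape(2, 2, 2, 2, 2, 2)  # axes: s,e1,e2,s',e1',e2'

rho_sys  = np.einsum('abcdbc->ad', t)  # trace out both environment qubits
rho_env1 = np.einsum('abcadc->bd', t)  # trace out system and e2
rho_env2 = np.einsum('abcabd->cd', t)  # trace out system and e1

print(rho_sys)                          # diag(0.36, 0.64): coherence gone
print(np.allclose(rho_env1, rho_env2))  # True: two identical imprints
```

Any observer who reads one fragment gets the same answer as an observer who reads the other, which is the sense in which the property becomes “objective” and classical-looking.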
Zurek says this means that the environment of an object is not an “innocent bystander who simply reports what has happened”, but rather, “an accomplice in the ‘crime’, selecting and transforming some of the fragile quantum states into robust, objectively existing classical states.” Ideas like this, however strange they might sound at first, can be made consistent with current quantum theory precisely because that theory leaves so much unanswered. But perhaps not for much longer.
Wednesday, September 28, 2011
Two for the diary
I wrote a couple of items for the Diary section of the October issue of Prospect. One was used in truncated form; the other wasn’t. Here are both of them.
____________________________________________________
Turkey’s prime minister Recep Tayyip Erdoğan recently outlined his vision for ‘Islamist-led democratic capitalism’: “Management of people, management of science and management of money.” It is becoming clear what ‘management’ means here. Erdoğan’s government has been steadily bringing various public bodies under direct state control, of which the latest are the Turkish Academy of Sciences (TÜBA) and the scientific funding agency TÜBITAK. The move has appalled many Turkish scientists, who consider independent scientific research a basic democratic freedom. The government has claimed that TÜBA was functioning poorly. But the absence of any prior consultation adds to the impression that this is essentially a political move, perhaps to muzzle an organization seen as too secular and left-leaning. “Academics will be increasingly careful about what they say, and what topics they teach and research”, says Erol Gelenbe, an electronic engineer and TÜBA member working at Imperial College in London. One obvious concern is whether ‘Islamist-led’ science will suppress Darwinism. Turkey already has the lowest public acceptance of the theory of evolution in all of Europe, and TÜBA drew criticism on this issue during the 2009 Darwin Year celebrations. Stem-cell research is also unlikely to find governmental favour. But Gelenbe believes that religious considerations will “affect all areas of the sciences, especially the human and social sciences”. He suspects it is only a matter of time before TÜBA begins appointing theologians.
******
Images of Americans boarding up in preparation for Hurricane Katia reminded Europeans of how little they need to fear extreme weather. The worst Katia could do was to rouse a blustery day in Scotland with a flick of her tail. But don’t count on it staying this way. Hurricane-like events similar to those that appear in the tropical Atlantic and Pacific have been occasionally seen in the Mediterranean. These so-called Medicanes have been predicted to multiply and intensify, possibly reaching full hurricane force, as global temperatures rise, since high sea surface temperatures are the engine of hurricanes. You might want to think twice before booking for Majorca in 2050.
____________________________________________________
Wednesday, September 21, 2011
Chemistry's Grand Challenges
I have an article in the latest (October) issue of Scientific American that looks at ten big challenges for chemistry in the coming decades. It’s presented by the Sci Am editors as “big mysteries”, though I’m not too sure quite how well that fits: these are not issues about which we’re totally in the dark, but rather, ones that seem to present either challenges to our fundamental understanding or our technological capability. The topics were decided in collaboration with the editors – I’m happy that all justify inclusion, though left to my own devices I’d probably have a slightly different list. The article grew to huge proportions in preparation, before being trimmed severely. So here is the full original text – or rather, an unholy hybrid of that and some of the changes made during the editing process. It's a big post for a blog, but hopefully of some value. And it includes an intro which was snipped out in toto.
______________________________________
Introduction
There aren’t many novels with chemistry in them, but one of the most famous has a Professor Waldman of the University of Ingolstadt say this: “Chemistry is that branch of natural philosophy in which the greatest improvements have been and may be made.” Waldman is the tutor of Victor Frankenstein in Mary Shelley’s classic from 1818, and he inspires his student to make the discovery that triggers the book’s dark tale.
This association imputes a Faustian aspect to chemistry. But that, like Waldman’s optimism, was transferred in the twentieth century first to physics and then to biology. Chemistry seemed to be left behind as a ‘finished’ science, now just a matter of engineering and devoid of the grand questions that Shelley – a devotee of Humphry Davy – seemed to glimpse in chemistry two hundred years ago. What happened?
Perhaps the answer is that chemistry became too versatile for its own good. It inveigled its way into so many areas of study and production, from semiconductor manufacturing to biomedicine, that we lost sight of it. The core of chemistry remains in making molecules and materials, but these are so diverse – drugs, paints, plastics, microscopic machines – that it is hard to see them as parts of a united discipline.
In this Year of Chemistry, it’s good to take stock – not just to remind ourselves why chemistry is central to our lives, but to consider where it is headed. Here are ten of the key challenges that chemistry faces today. Needless to say, there is no definitive list of this sort, and while all of these ten directions are important, their main value here is perhaps to illustrate that Waldman’s words still remain true. Several of these challenges are concerned with practical applications, as befits chemistry’s role as the most applied and arguably the most useful of the central sciences. But there are also questions about foundations, for the popular idea that chemistry is now conceptually understood, and that all we have to do is use it, is false. It has been only in the past several decades, for example, that the centrality of the non-covalent bond in the chemistry of life has been appreciated, and this sort of ‘temporary stickiness’ of molecules has been recognized as a key aspect of many technological applications, from molecular machines and nanotechnology to the development of surface coatings. Chemistry retains deep intellectual as well as practical challenges.
The last word should also go to Shelley’s Professor Waldman, who tells Victor Frankenstein that “a man would make but a very sorry chemist if he attended to that department of human knowledge alone”. You could perhaps say the same for any branch of science, but it is particularly true for chemistry, which depends not just on understanding the world but on finding creative expressions of that knowledge. The creative opportunities for chemists lie everywhere: in making vehicles cleaner, producing artificial leaves, inventing new colours for artists, altering the fate of cells and comprehending the fate of stars. Chemistry is as limitless as art, because it is one.
1. The origins of life, and how life could be different on other planets.
The chemical origin of life used to be a rather parochial topic. That’s not to diminish the profundity, or the difficulty, of the question of how life began on Earth. But now that we have a better view of some of the strange and potentially fertile environments in our solar system – the occasional flows of water on Mars, the petrochemical seas of Saturn’s moon Titan and the cold, salty oceans that seem to lurk under the ice of Jupiter’s moons Europa and Ganymede – the origin of terrestrial life seems only a part of a grander question: under what circumstances can life arise, and how widely can its chemical basis vary? That issue is made even richer by the discovery over the past 16 years of more than 500 extrasolar planets orbiting other stars – worlds of bewildering variety, forcing us to broaden our imagination about the possible chemistries of life. For instance, while NASA has long pursued the view that liquid water is a prerequisite, now we’re not so sure. How about liquid ammonia, or formamide (HCONH2), or an oily solvent like liquid methane, or supercritical hydrogen on Jupiter? And why should life restrict itself to DNA and proteins – after all, several artificial chemical systems have now been made that exhibit a kind of replication from the component parts without relying on nucleic acids. All you need, it seems, is a molecular system that can serve as a template for making a copy, and then detach itself.
Fixating on terrestrial life is a hang-up, but if we don’t, it’s hard to know where to begin. Looking at life on Earth, says chemist Steven Benner of the University of Florida, “we have no way to decide whether the similarities [such as the use of DNA and proteins] reflect common ancestry or the needs of life universally.” But if we retreat into saying that we’ve got to stick with what we know, he says, “we have no fun.”
All the same, Earth is the only locus of life that we know of, and so it makes sense to start here in trying to understand how matter can come alive and, eventually, know itself. This process seems to have begun extremely quickly in geological terms: there are fossil signs of early life dating back almost to the time that the oceans first formed. On that basis, it looks easy – some suspect, even inevitable. The challenge is no longer to come up with vaguely plausible scenarios, for there are plenty – polymerization catalysed by minerals, chemical complexity fuelled by hydrothermal vents, the RNA world. No, the game is to figure out how to make these more than just suggestive reactions coddled in the test tube. Researchers have made conspicuous progress in recent years, showing for example that certain relatively simple chemicals can spontaneously react to form more complex building blocks of living systems, such as amino acids and nucleotides, from which proteins and the nucleic acids DNA and RNA are constructed. In 2009, a team led by John Sutherland, now at the MRC Laboratory of Molecular Biology in Cambridge, England, was able to demonstrate the formation of nucleotides from molecules likely to have existed in the primordial broth. Other researchers have focused on the ability of some RNA strands to act as enzymes, providing evidence in support of the RNA world hypothesis. Through such steps, scientists may progressively bridge the gap from inanimate matter to self-replicating, self-sustaining systems.
Perhaps the dawn of synthetic biology, which includes the construction of primitive lifelike entities from scratch, will help to bridge the gap between the geological formation of simple organic ingredients, as demonstrated by Harold Urey and Stanley Miller in their famous ‘spark’ experiments more than 50 years ago, and the earliest cells.
2. Understanding the nature of the chemical bond and modeling chemistry on the computer.
“The chemistry of the future”, wrote the zoologist D’Arcy Wentworth Thompson in 1917, “must deal with molecular mechanics by the methods and in the strict language of mathematics”. Just 10 years later that seemed possible: the physicists Walter Heitler and Fritz London showed how to describe a chemical bond using the equations of the then-nascent quantum theory, and the great American chemist Linus Pauling proposed that bonds form when the electron orbitals of different atoms overlap in space. A competing theory by Robert Mulliken and Friedrich Hund suggested that bonds are the result of atomic orbitals merging into “molecular orbitals” that extend over more than one atom. Theoretical chemistry seemed about to become a branch of physics.
Nearly 100 years later the molecular-orbital picture has become the most common one, but there is still no consensus among chemists that it is always the best way to look at molecules. The reason is that this and all other models of molecules are based on simplifying assumptions and are thus approximate, partial descriptions. In reality, a molecule is a bunch of atomic nuclei in a cloud of electrons, with opposing electrostatic forces fighting a constant tug-of-war with one another, and all components constantly moving and reshuffling. Existing models of the molecule usually try to crystallize such a dynamic entity into a static one, and may capture some of its salient properties but neglect others.
Quantum theory is unable to supply a unique definition of chemical bonds that accords with the intuition of chemists whose daily business it is to make and break them. There are now many ways of assigning bonds to the quantum description of molecules as electrons and nuclei. According to quantum chemist Dominik Marx of the University of Bochum in Germany, “some are useful in some cases but fail in others and vice versa”. As a result, he says, “there will always be a search, and thus controversy, for ‘the best method’”.
This is no obstacle to calculating the structures and properties of molecules from quantum first principles – something that can be done to great accuracy if the number of electrons is relatively small. “Computational chemistry can be pushed to the level of utmost realism and complexity”, says Marx. As a result, computer calculations can increasingly be regarded as a kind of virtual experiment that predicts the outcome of a reaction.
But the challenge is to extend these approaches to increasingly complex cases. On the one hand, that may mean simply modelling more molecules. Can a computer model capture the complicated environment inside cells, for example, where many molecules large and small interact, aggregate and react within the responsive, protean medium of salty water? At the moment, most descriptions of such processes use highly simplified descriptions of bonding in which atoms are little more than balls on springs. Can computational chemistry help us understand, say, the detailed workings of a vast biomolecular machine like the ribosome?
On the other hand, can computational methods capture complex chemical processes and behavior, such as catalysis? Attempts to do so tend at the moment to rely on ways of bridging the calculations to intuitive expectations. One promising approach, being developed by Jörg Behler at Bochum, uses neural networks to deduce the energy surfaces on which these reactions happen. It also remains hard to predict subtle behaviour such as superconductivity. But already new materials have been discovered by computation – perhaps in times to come that will become the norm.
3. Graphene and carbon nanotechnology: sculpting with carbon.
The discovery of fullerenes – hollow, cagelike molecules made entirely of carbon – in 1985 was literally the start of something much bigger. The polyhedral shells of these molecules showed how the flat sheets of carbon atoms that make up graphite – where they are joined into hexagonal rings tiled side by side, like chicken wire – can be curved by including some pentagonal rings. With precisely 12 pentagons, the structure curls up into a closed shell. Six years later tubes of graphite-like carbon just a few nanometers in diameter, called carbon nanotubes, fostered the idea that this sort of carbon can be moulded into all manner of curved nanoscale structures. Being hollow, extremely strong and stiff, and electrically conducting, carbon nanotubes promised applications ranging from high-strength carbon composites to tiny wires and electronic devices, miniature molecular capsules and water-filtration membranes.
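The “precisely 12 pentagons” is no accident: it follows from Euler’s polyhedron formula. For a closed shell built only of pentagons and hexagons, with f₅ and f₆ faces of each kind and every vertex joining three rings,

```latex
V - E + F = 2, \qquad
F = f_5 + f_6, \qquad
E = \tfrac{1}{2}(5 f_5 + 6 f_6), \qquad
V = \tfrac{1}{3}(5 f_5 + 6 f_6)
```

and substituting into Euler’s formula forces f₅ = 12, whatever the number of hexagons. That is why every closed fullerene, from C₆₀ upwards, contains exactly a dozen pentagonal rings.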
Now graphite itself has moved centre stage, thanks to the discovery that it can be separated into individual sheets, called graphene, that could supply the fabric for ultra-miniaturized, cheap and robust electronic circuitry. Graphene garnered the 2010 Nobel prize in physics, but the success of this and other forms of carbon nanotechnology might ultimately depend on chemistry. For one thing, ‘wet’ chemical methods may prove the cheapest and simplest for separating graphite into its component sheets. “Graphene can be patterned so that the interconnect and placement problems of carbon nanotubes are overcome”, says carbon specialist Walt de Heer of the Georgia Institute of Technology.
Some feel, however, that graphene has so far been over-hyped in a way that plays down the hurdles to making it a viable technology. “The hype is extreme”, says de Heer. “Many of the newly claimed superlative graphene properties are really graphite properties ‘under new management’ and were known and used for a very long time.” He believes graphitic electronics has not yet been shown to be viable. “The best that has been done to date is to show that ultrathin graphite (including graphene) can be gated [switched electronically, as in transistors]. But the gating is quite poor, since you cannot turn it completely off. Most people would not consider this to be even a starting point for electronics.” And he says that existing methods of graphene patterning are so crude that the edges undo any advantage that graphene nanoribbons have to offer. However, narrow ribbons and networks can be made to measure with atomic precision by using the techniques of organic chemistry to build them up from ‘polyaromatic’ molecules, in which several hexagonal carbon rings are linked together like little fragments of a graphene sheet. It seems quite possible that graphene technology will depend on clever chemistry.
[Watch this space: I’ve just written a piece on graphene for BBC’s pop-sci magazine Focus, which explores all these things in greater depth.]
4. Artificial photosynthesis.
Of all the sources of ‘clean energy’ available to us, sunlight seems the most tantalizing. With every sunrise comes a reminder of the vast resource of which we currently tap only a pitiful fraction. The main problem is cost: the expense of conventional photovoltaic panels made of silicon still restricts their use. But life on Earth, almost all of which is ultimately solar-powered by photosynthesis, shows that solar cells don’t have to be terribly efficient if, like leaves, they can be made abundantly and cheaply enough.
Yet ‘artificial photosynthesis’ and the ‘artificial leaf’ are slippery concepts. Do they entail converting solar to chemical energy, just as the leaf uses absorbed sunlight to make the biological ‘energy molecule’ ATP? Or must the ‘artificial leaf’ mimic photosynthesis by splitting water to make hydrogen – a fuel – and oxygen?
“Artificial photosynthesis means different things to different people”, says photochemist Devens Gust of Arizona State University. “Some people call virtually any sort of solar energy conversion that involves electricity or fuels artificial photosynthesis.” Gust himself reserves the term for photochemical systems that make fuels using sunlight: “I like to define it as the use of the fundamental scientific principles underlying natural photosynthesis for the design of technological solar-energy conversion systems.”
“One of the holy grails of solar energy research is using sunlight to produce fuels”, Gust explains. “In order to make a fuel, we need not only energy from sunlight, but a source of electrons, and some material to reduce to a fuel with those electrons. The source of electrons has to be water, if the process is to be carried out on a scale anything like that of human energy usage. The easiest way to make a fuel from this is to use the electrons to reduce the protons to hydrogen gas.” Nathan S. Lewis and his collaborators at Caltech are developing an artificial leaf that would do just that using silicon nanowires.
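The scheme Gust describes amounts to photochemically driving the two half-reactions of water splitting: an oxidation that strips electrons (and protons) from water, and a reduction that uses them to make hydrogen fuel:

```latex
2\,\mathrm{H_2O} \;\longrightarrow\; \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- \quad\text{(oxidation)}
\qquad
4\,\mathrm{H^+} + 4\,e^- \;\longrightarrow\; 2\,\mathrm{H_2} \quad\text{(reduction)}
```

The first of these, water oxidation, is the hard step — which is why so much of the catalyst research described below focuses on it.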
MIT chemist Daniel Nocera and his coworkers have recently announced an ‘artificial leaf’: a device the size of a credit card in which silicon solar cells and a photocatalyst of metals such as nickel and cobalt split water into hydrogen and oxygen which can then be used to drive fuel cells. Nocera estimates that a gallon of water would provide enough fuel to power a home in developing countries for a day. “Our goal is to make each home its own power station”, he says. His start-up company Sun Catalytix aims to take the technology to a commercial level.
But “water oxidation is not a solved problem, even at a fundamental level”, according to Gust. “Cobalt catalysts such as the one that Nocera uses, and newly-discovered catalysts based on other common metals are promising”, he says, but there is still no potentially inexpensive, ideal catalyst. “We don’t know how the natural photosynthetic catalyst, which is based on four manganese atoms and a calcium atom, works”, Gust adds.
Carbon-based fuels are easier than hydrogen to transport, store and integrate with current technologies. Photosynthesis makes carbon-based fuels (sugars, ATP) using sunlight. Gust and his colleagues have been working on making molecular assemblies for artificial photosynthesis that more closely mimic their biological inspiration. “We know how to make artificial antenna systems and photosynthetic reaction centers that work in the lab, but questions about stability remain, as they are usually based at least in part on organic molecules.” He admits that “we are not very close to a technologically useful catalyst for converting carbon dioxide to a useful liquid fuel.” On the other hand, he says, “the recent increase in funding, worldwide for solar fuels has meant that many more researchers have gotten into the game.” If this funding can be preserved, he anticipates “really significant advances.” Let’s hope so, since as Gust says, “we desperately need a fuel or energy source that is abundant, inexpensive, environmentally benign, and readily available.”
5. Devising catalysts for making biofuels.
The demand for biofuels – fuels made by conversion of organic matter, primarily plants – isn’t driven just by concern for the environment. While it’s true that a biofuel economy is notionally sustainable – carbon emissions from burning the fuels are balanced by the carbon dioxide taken up to grow the fuel crops – the truth is that it’s increasingly hard to find any good alternatives. Organic liquids (oil and petroleum) remain the main energy source globally, and are forecast to do so at least until the mid-century. But several estimates say that, at current production rates, we have only about 50 years’ worth of oil reserves left. What’s more, most of these are in politically unstable parts of the world. And currently soaring prices are expected to continue – the days of cheap oil are over.
There’s nothing new about biofuels: time was when there was only wood to burn in winter, or peat or dried animal dung. But that’s a very inefficient way to use the energy bound up in carbon-based molecules. Today’s biofuels are mostly ethanol made from fermenting corn, sugar-cane or switchgrass, or biodiesel, an ester made from the lipids in rapeseed or soybean oils. The case for biofuels seems easy to make – as well as being potentially greener and offering energy security, they can come from crops grown on land unsuitable for food agriculture, and can boost rural economies.
But the initial optimism about biofuels cooled quickly. For one thing, they threaten to displace food crops, particularly in developing countries where selling biofuels abroad can be more lucrative than feeding people at home. And the numbers are daunting: meeting current oil demand will mean requisitioning huge areas of arable land. But these figures depend crucially on how efficiently the carbon is used. Some parts of plants, particularly the resinous lignin, can’t easily be turned into biofuel, especially by biological fermentation. Finding new chemical catalysts to assist this process looks essential if biofuels are to fly.
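Just how daunting the numbers are can be seen from a back-of-envelope calculation. All the figures below are rough, round 2011-era magnitudes chosen purely for illustration: world oil demand of about 85 million barrels a day, a corn-ethanol yield of roughly 3,500 litres per hectare per year, and ethanol carrying about two-thirds the energy of petrol per litre.

```python
# Back-of-envelope estimate: land area needed to replace all oil with
# crop ethanol.  Every constant is a rough, illustrative 2011-era figure.
BARRELS_PER_DAY = 85e6           # world oil demand, ~2011
LITRES_PER_BARREL = 159
ETHANOL_ENERGY_RATIO = 0.65      # ethanol energy per litre vs. petrol
ETHANOL_YIELD_L_PER_HA = 3500    # typical corn-ethanol yield, L/ha/yr
WORLD_ARABLE_HA = 1.4e9          # roughly all arable land on Earth

oil_litres_per_year = BARRELS_PER_DAY * LITRES_PER_BARREL * 365
ethanol_litres_needed = oil_litres_per_year / ETHANOL_ENERGY_RATIO
hectares_needed = ethanol_litres_needed / ETHANOL_YIELD_L_PER_HA

print(f"hectares needed: {hectares_needed:.2e}")
print(f"fraction of world arable land: {hectares_needed / WORLD_ARABLE_HA:.1f}")
```

On these (admittedly crude) assumptions, replacing oil outright with corn ethanol would need more cropland than exists on Earth — which is why squeezing fuel out of the currently intractable parts of plants, such as lignin, matters so much.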
One of the challenges of breaking down lignin – cracking open ‘aromatic C-O bonds’: benzene rings bridged by an oxygen – was recently met by John Hartwig and Alexey Sergeev of the University of Illinois, who found a nickel-based catalyst that will do the trick. Hartwig points out that, if biomass is to supply non-fossil-fuel chemical feedstocks as well as fuels, it will need to offer aromatic compounds – of which lignin is the only major potential source.
It’s a small part of a huge list of challenges: “There are issues at every level”, says Hartwig. Some of these are political – a carbon tax, for example, could decide the economical viability of biofuels. But many are chemical. The changes in infrastructure and engineering needed for an entirely new liquid fuel (more or less pure alcohol) are so vast that it seems likely the biofuels will need to be compatible with existing technology – in other words, to be hydrocarbons. That means converting the oxidized compounds in plant matter to reduced ones. Not only does this require catalysts, but it also demands a source of hydrogen – either from fossil fuels or ideally, but dauntingly, from splitting of water.
And fuels will need to be liquid for easy transportation along pipelines. But biomass is primarily solid. Liquefaction would need to happen on site where the plant is harvested. And one of the difficulties for catalytic conversion is the extreme impurity of the reagent – classical chemical synthesis does not tend to allow for reagents such as ‘wood’. “There’s no consensus on how all this will be done in the end”, says Hartwig. But an awful lot of any solution lies with the chemistry, especially with finding the right catalysts. “Almost every industrial reaction on a large scale has a catalyst associated”, Hartwig points out.
6. Understanding the chemical basis of thought and memory.
The brain is a chemical computer. Interactions between the neurons that form its circuitry are mediated by molecules: neurotransmitters that pass across the synaptic spaces where one neural cell wires up to another. This chemistry of the mind is perhaps at its most impressive in the operation of memory, in which abstract principles and concepts – a telephone number, say – are imprinted in states of the neural network by sustained chemical signals. How does chemistry create a memory that is at the same time both persistent and dynamic: susceptible to recall, revision and forgetting?
We now know that a cascade of biochemical processes, leading to a change in production of neurotransmitter molecules at the synapse, triggers ‘learning’ for habitual reflexes. But even this ‘simple’ aspect of learning has short- and long-term stages. Meanwhile, more complex so-called ‘declarative’ memory (of people, places and so on) has a different mechanism and location in the brain, involving the activation by the excitatory neurotransmitter glutamate of a protein called the NMDA receptor. Blocking these receptors with drugs prevents memory retention for many types of declarative memory.
Our everyday declarative memories are often encoded in a process called long-term potentiation (LTP), which involves NMDA receptors and is accompanied by an expansion of the synapse, the region of a neuron involved in its communication with others. As the synapse grows, so does the ‘strength’ of its connection with neighbours. The biochemistry of this process has been clarified in the past several years. It involves stimulation of the formation of filaments within the neuron made from the protein actin – the basic scaffolding of the cell, which determines its size and shape. But that process can be undone during a short period before the change is consolidated by biochemical agents that block the newly formed filaments.
Once encoded, long-term memory for both simple and complex learning is actively maintained by switching on genes that produce proteins. It now appears that this can involve a self-perpetuating chemical reaction of a prion, a protein molecule that can switch between two different conformations. This switching process was first discovered for its role in neurodegenerative disease, but prion mechanisms have now been found to have normal, beneficial functions too. The prion protein is switched from a soluble to an insoluble, aggregated state that can then perpetuate itself autocatalytically, and which ‘marks’ a particular synapse to retain a memory.
There are still big gaps in the story of how memory works, many of which await filling with the chemical details. How, for example, is memory recalled once it has been stored? “This is a deep problem whose analysis is just beginning”, says neuroscientist and Nobel laureate Eric Kandel of Columbia University. It may involve the neurotransmitters dopamine and acetylcholine. And what happens at the molecular level when things go wrong, for example in Alzheimer’s-related memory loss and other cognitive disorders that affect memory? Addressing and perhaps even reversing such problems will require a deeper understanding of the many biochemical processes in memory storage, including a better understanding of the chemistry of prions – which in turn seems to point us increasingly towards a more fundamental grasp of protein structure and how it is shaped by evolution.
Getting to grips with the chemistry of memory offers the enticing, and controversial, prospect of pharmacological enhancement. Some memory-boosting substances are already known: neuropeptides, sex steroids and chemicals that act on receptors for nicotine, glutamate, serotonin and other neurotransmitters and their mimics have all been shown to enhance memory. In fact, according to neurobiologist Gary Lynch at the University of California at Irvine, the complex sequence of steps leading to long-term learning and memory means that there are a large number of potential targets for such ‘memory drugs’. However, there’s so far little evidence that known memory boosters improve cognitive processing more generally – that’s to say, it’s not clear that they actually make you smarter. Moreover, just about all studies so far have been on rodents and monkeys, not humans.
Yet it seems entirely possible that effective memory enhancers will be found. Naturally, such possibilities raise a host of ethical and social questions. One might argue that using such drugs is not so different from taking vitamins to improve health, or sleeping pills to get a much-needed good rest, and that it can’t be a bad thing to allow people to become brighter. But can it be right for cognitive enhancement to be available only for those who can afford it? In manipulating the brain’s chemistry, are we modifying the self? As our knowledge and capabilities advance, such ethical questions will become unavoidable.
7. Understanding the chemical basis of epigenetics.
Cells, like humans, become less versatile and more narrowly focused as they age. Pluripotent stem cells present in the early embryo can develop into any tissue type; but as the embryo grows, cells ‘differentiate’, acquiring specific roles (such as blood, muscle or nerve cells) that remain fixed in their progeny. One of the revolutionary discoveries in research on cloning and stem cells, however, is that this process isn’t irreversible. Cells don’t, as was once supposed, lose genes as they differentiate, retaining only those they need. Rather, the genes are switched off but remain latent – and can be reactivated. The recent discovery that a cocktail of just four proteins is sufficient to cause mature differentiated cells to revert to stem-cell-like status, becoming induced pluripotent stem (iPS) cells, might not only transform regenerative medicine but also alter our view of how the human body grows from a fertilized egg.
Like all of biology, this issue has chemistry at its core. It’s slowly becoming clear that the versatility of stem cells, and its gradual loss during differentiation, results from the chemical changes taking place in the chromosomes. Whereas the old idea of biology makes it a question of which genes you have, it is now clear that an equally important issue is which genes you use. The formation of the human body is a matter of chemically modifying the stem cells’ initial complement of genes to turn them on and off.
What is particularly exciting and challenging for chemists is that this process seems to involve chemical events happening at size scales greater than those of atoms and molecules: at the so-called mesoscale, involving the interaction and organization of large molecular groups and assemblies. Chromatin, the mixture of DNA and proteins that makes up chromosomes, has a hierarchical structure. The double helix is wound around cylindrical particles made from proteins called histones, and this ‘string of beads’ is then bundled up into higher-order structures that are poorly understood. Yet it seems that cells exert great control over this packing – how and where a gene is packed into chromatin may determine whether it is ‘active’ or not. Cells have specialized enzymes for reshaping chromatin structure, and these have a central role in cell maturation and differentiation. Chromatin in embryonic stem cells seems to have a much looser, open structure: as some genes fall inactive, the chromatin becomes increasingly lumpy and organized. “The chromatin seems to fix and maintain or stabilize the cells’ state”, says pathologist Bradley Bernstein of the Massachusetts General Hospital in Boston.
What’s more, this process is accompanied by chemical modification of both DNA and histones. Small-molecule tags become attached to them, acting as labels that modify or silence the activity of genes. The question of to what extent mature cells can be returned to pluripotency – whether iPS cells are as good as true stem cells, which is a vital issue for their use in regenerative medicine – seems to hinge largely on how far this so-called epigenetic marking can be reset. If iPS cells remember their heritage (as it seems they partly do), their versatility and value could be compromised. On the other hand, some histone marks seem actually to preserve the pluripotent state.
It is now clear that there is another entire chemical language of genetics – or rather, of epigenetics – beyond the genetic code of the primary DNA sequence, in which some of the cell’s key instructions are written. “The concept that the genome and epigenome form an integrated system is crucial”, says geneticist Bryan Turner of the University of Birmingham in the UK.
The chemistry of chromatin and particularly of histone modifications may be central to how the influence of our genes gets modified by environmental factors. “It provides a platform through which environmental components such as toxins and foodstuffs can influence gene expression”, says Turner. “We are now beginning to understand how environmental factors influence gene function and how they contribute to human disease. Whether or not a genetic predisposition to disease manifests itself will often depend on environmental factors operating through these epigenetic pathways. Switching a gene on or off at the wrong time or in the wrong tissue can have effects on cell function that are just as devastating as a genetic mutation, so it’s hardly surprising that epigenetic processes are increasingly implicated in human diseases, including cancer.”
8. Finding new ways to make complex molecules.
The core business of chemistry is a practical, creative one: making molecules. But the reasons for doing so have changed. Once, the purpose of constructing a large natural molecule such as vitamin B12 by painstaking atom-by-atom assembly was to confirm its molecular structure. If what you build, knowing where each atom is going, is the same as what nature makes, the two presumably share the same structure. But we’re now good enough at deducing structures by methods such as X-ray crystallography – often for molecules that it would be immensely hard to make anyway – that this justification is hard to sustain.
Maybe it’s worth making a molecule because it is useful – as a drug, say. That’s true, but the more complicated the molecule, the less useful its synthesis from scratch (‘total synthesis’) tends to be, because of the cost and the small yield of the product after dozens of individual steps. Better, often, to extract the molecule from natural sources, or to use living organisms to make it or part of it, for example by equipping bacteria or yeast with the necessary enzymes.
And total synthesis is typically slow – even if rarely as slow as the 11-year project to make vitamin B12 that began in 1961. Yet new molecules and drugs are often needed very fast – for example, new antibiotics to outstrip the rise of resistant microorganisms.
As a result, total synthesis is “a lot harder to justify than it once was”, according to industrial chemist Derek Lowe. It’s a great training ground for chemists, but are there now more practical ways to make molecules? One big hope was combinatorial chemistry, in which new and potentially useful molecules were made by a random assembly of building blocks followed by screening to identify those that do a job well. Once hailed as the future of medicinal chemistry, ‘combi-chem’ fell from favour as it failed to generate anything useful.
But after the initial disappointments, combi-chem may enjoy a brighter second phase. It seems likely to work only if you can make a wide enough range of molecules and find good ways of picking out the minuscule amounts of successful ones. Biotechnology might help here – for example, each molecule could be linked to a DNA-based ‘barcode’ that both identifies it and aids its extraction. Or cell-based methods might coax combinatorial schemes towards products with particular functions using guided (‘directed’) evolution in the test tube.
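The barcode idea can be caricatured in a few lines of code. Everything here is invented for illustration – the ‘fragments’, the binary barcoding scheme and the scoring function are toy stand-ins for real chemical building blocks, DNA tags and a screening assay:

```python
import itertools

# Hypothetical building blocks; a real library would use chemical fragments.
fragments = ["A", "B", "C", "D"]

# Assemble every three-fragment combination and give each a DNA 'barcode'.
library = {}
for i, combo in enumerate(itertools.product(fragments, repeat=3)):
    barcode = format(i, "06b").replace("0", "AT").replace("1", "GC")
    library[barcode] = "".join(combo)

def binding_score(molecule):
    # Stand-in for the screening assay; this scoring rule is made up.
    return 2 * molecule.count("A") + molecule.count("C")

# 'Screen' the pooled library, then decode the winner via its barcode.
best_barcode = max(library, key=lambda bc: binding_score(library[bc]))
best = library[best_barcode]   # the combination to resynthesize at scale
```

The point of the sketch is the workflow, not the chemistry: make everything at once, screen the pool, and let the attached tag tell you which minuscule quantity of product was the hit.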
There are other new approaches to bond-making too, which draw on nature’s mastery of uniting fragments in highly selective yet mild ways. Proteins, for example, have a precise sequence of amino acids determined by the base sequence of the messenger RNA molecule on which they are assembled in the ribosome. Using this model, future chemists might program molecular fragments to assemble autonomously in highly selective ways, rather than relying on the standard approach of total synthesis that involves many independent steps, including cumbersome methods for protecting the growing molecule from undesirable side reactions. For example, David Liu at Harvard University and his coworkers have devised a molecule-making strategy inspired by nature’s use of nucleic-acid templates to specify the order in which units are linked together. They tagged small molecules with short DNA strands that ‘programme’ them for linkage on a DNA template. And they have created a ‘DNA walker’ which can step along a template strand sequentially attaching small molecules dangling from the strand to produce a macromolecular chain – a process highly analogous to protein synthesis on the ribosome, essentially free from undesirable side reactions. This could be a handy way to tailor new drugs. “Many molecular life scientists believe that macromolecules will play an increasingly central, if not dominant, role in the future of therapeutics”, says Liu.
9. Integrating chemistry: creating a chemical information technology.
Increasingly, chemists don’t simply want to make molecules but also to communicate with them: to make chemistry an information technology that will interface with anything from living cells to conventional computers and fibre-optic telecommunications. In part, this is an old idea: biosensors in which chemical reactions are used to report on concentrations of glucose in the blood date back to the 1960s, although only recently has their use for monitoring diabetes been cheap, portable and widespread. Chemical sensing has countless applications – to detect contaminants in food and water at very low concentrations, say, or to monitor pollutants and trace gases in the atmosphere.
But it is in biomedicine that chemical sensors have the most dramatic potential. Some of the products of cancer genes circulate in the bloodstream long before the condition becomes apparent to regular clinical tests – if they could be detected early, prognoses would be vastly improved. Rapid genomic profiling would enable drug regimes to be tailored to individual patients, reducing risks of side-effects and allowing some medicines to be used that today are hampered by their dangers to a genetic minority. Some chemists foresee continuous, unobtrusive monitoring of all manner of biochemical markers of health and disease, perhaps in a way that is coupled remotely to alarm systems in doctors’ surgeries or to automated systems for delivering remedial drug treatments. All of this depends on developing chemical methods for sensing and signaling with high selectivity and often at very low concentrations. “Advances are needed in improving the sensitivity of such systems so that biological intermediates can be detected at much lower levels”, says chemist Allen Bard of the University of Texas at Austin. “This raises a lot of challenges. But such analyses could help in the early detection of disease.”
Integrated chemical information systems might go much further still. Prototype ‘DNA computers’ have been developed in which strands of bespoke DNA in the blood can detect, diagnose and respond to disease-related changes in gene activity. Clever chemistry can also couple biological processes to electronic circuitry, for example so that nerve cells can ‘speak’ to computers. Information processing and logic operations can be conducted between individual molecules. The photosynthetic molecular apparatus of some organisms even seems able to manipulate energy using the quantum rules that physicists are hoping to exploit in super-powerful quantum computers. It is conceivable that mixtures of molecules might act as super-fast quantum computers to simulate the quantum behavior of other molecules, in ways that are too computationally intensive on current machines. According to chemistry Nobel laureate Jean-Marie Lehn of the University of Strasbourg, this move of chemistry towards what he calls a science of informed (and informative) matter “will profoundly influence our perception of chemistry, how we think about it, how we perform it.”
10. Exploring the limits of applicability of the periodic table, and new forms of matter that lie outside it.
The periodic tables that adorn the walls of classrooms are now having to be constantly revised, because the number of elements keeps growing. Using particle accelerators to crash atomic nuclei together, scientists can create new ‘superheavy’ elements, with more protons and neutrons than the 92 or so elements found in nature. These engorged nuclei are not very stable – they decay radioactively, often within a tiny fraction of a second. But while they exist, the new ‘synthetic’ elements such as seaborgium (element 106) and hassium (108) are like any other insofar as they have well defined chemical properties. In dazzling experiments, the properties of both of these synthetic elements have been investigated from just a handful of the elusive atoms in the instant before they fall apart.
Such studies probe not just the physical but the conceptual limits of the periodic table: do these superheavy elements continue to display the trends and regularities in chemical behavior that make the table periodic in the first place? Some do, and some don’t. In particular, such massive nuclei hold on to the atoms’ innermost electrons so tightly that they move at close to the speed of light. Then the effects of special relativity increase their mass and play havoc with the quantum energy states on which their chemistry – and thus the table’s periodicity – depends.
Because nuclei are thought to be stabilized by particular ‘magic numbers’ of protons and neutrons, some researchers hope to find an ‘island of stability’, a little beyond the current capabilities of element synthesis, in which these superheavies live for longer. But is there any fundamental limit to their size? A simple calculation suggests that relativity prohibits electrons from being bound to nuclei of more than 137 protons. But more sophisticated calculations defy that limit. “The periodic system will not end at 137; in fact it will never end”, insists nuclear physicist Walter Greiner of the Johann Wolfgang Goethe University in Frankfurt, Germany. The experimental test of that claim remains a long way off.
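That ‘simple calculation’ is essentially the Bohr model: a 1s electron moves at roughly Z times the fine-structure constant α times the speed of light, so it would hit light-speed at Z = 1/α ≈ 137. A quick numerical sketch (which of course ignores the more sophisticated relativistic treatments that defy the limit):

```python
import math

ALPHA = 1 / 137.035999  # fine-structure constant (dimensionless)

def electron_speed_fraction(Z):
    """Bohr-model estimate of a 1s electron's speed as a fraction of c."""
    return Z * ALPHA

def relativistic_gamma(Z):
    """Mass-increase factor 1/sqrt(1 - (v/c)^2) for that electron."""
    v = electron_speed_fraction(Z)
    return 1 / math.sqrt(1 - v * v)

# Gold (Z = 79): inner electrons at ~58% of light-speed, ~22% 'heavier' --
# the sort of relativistic effect that plays havoc with chemistry.
gamma_gold = relativistic_gamma(79)

# The naive model breaks down where v/c reaches 1:
Z_limit = int(1 / ALPHA)
```

In this naive picture `Z_limit` comes out at 137, which is where Greiner’s more careful calculations take over.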
Besides extending the periodic table, chemists are stepping outside it. Conventional wisdom has it that the table enumerates all the ingredients that chemists have at their disposal. But that’s not quite true. For one thing, it has been found that small clusters of atoms can act collectively like single ‘giant’ atoms of other elements. A so-called ‘superatom’ of aluminum containing precisely 13 atoms will behave like a giant iodine atom, while an Al14 cluster behaves like an alkaline earth metal. “We can take one element and have it mimic several different elements in the Periodic Table”, says Shiv Khanna of Virginia Commonwealth University in Richmond, Virginia. It’s not yet clear how far this superatom concept can be pushed, but according to one of its main advocates, A. Welford Castleman of Pennsylvania State University, it potentially makes the periodic table three-dimensional, each element being capable of mimicking several others in suitably sized clusters. There’s no fundamental reason why such superatoms have to contain just one element either, nor why the ‘elements’ they mimic need be analogues of others in the table.
Furthermore, physicists have made synthetic atoms quite unlike traditional ones, whose nuclei of protons (and perhaps neutrons) are surrounded by electrons. In ‘muonic hydrogen’, the electron’s heavier cousin the muon takes the electron’s place, orbiting some 200 times closer to the proton. And the anti-electron, or positron, can act as the positive ‘nucleus’ of positronium, a super-light analogue of hydrogen. A slightly heftier light analogue, muonium, instead substitutes a positively charged muon for the central proton. These synthetic atoms have been used to test aspects of the quantum theory of chemical reactions. And by comparing the spectrum of muonic hydrogen with that of ordinary hydrogen, researchers have been able to obtain a new – and surprisingly different – value for the size of the proton.
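The mass ordering of these exotic atoms makes the ‘super-light’ and ‘slightly heftier’ labels concrete. Using particle masses in atomic mass units, rounded from standard tables:

```python
# Approximate particle masses in unified atomic mass units (u)
M_ELECTRON = 0.000549
M_MUON     = 0.113429
M_PROTON   = 1.007276

positronium = 2 * M_ELECTRON            # e+ bound to e-
muonium     = M_MUON + M_ELECTRON       # mu+ standing in for the proton
hydrogen    = M_PROTON + M_ELECTRON     # ordinary hydrogen, for comparison

# Positronium is roughly 1/900 the mass of hydrogen; muonium roughly 1/9.
ratios = (positronium / hydrogen, muonium / hydrogen)
```

Since the electronic structure of such an atom depends almost entirely on the charges involved, these featherweight systems are chemically ‘hydrogen’ – which is exactly what makes them such sensitive tests of reaction theory.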
______________________________________
Introduction
There aren’t many novels with chemistry in them, but one of the most famous has a Professor Waldman of the University of Ingolstadt say this: “Chemistry is that branch of natural philosophy in which the greatest improvements have been and may be made.” Waldman is the tutor of Victor Frankenstein in Mary Shelley’s classic from 1818, and he inspires his student to make the discovery that triggers the book’s dark tale.
This association imputes a Faustian aspect to chemistry. But that, like Waldman’s optimism, was transferred in the twentieth century first to physics and then to biology. Chemistry seemed to be left behind as a ‘finished’ science, now just a matter of engineering and devoid of the grand questions that Shelley – a devotee of Humphry Davy – seemed to glimpse in chemistry two hundred years ago. What happened?
Perhaps the answer is that chemistry became too versatile for its own good. It inveigled its way into so many areas of study and production, from semiconductor manufacturing to biomedicine, that we lost sight of it. The core of chemistry remains in making molecules and materials, but these are so diverse – drugs, paints, plastics, microscopic machines – that it is hard to see them as parts of a united discipline.
In this Year of Chemistry, it’s good to take stock – not just to remind ourselves why chemistry is central to our lives, but to consider where it is headed. Here are ten of the key challenges that chemistry faces today. Needless to say, there is no definitive list of this sort, and while all of these ten directions are important, their main value here is perhaps to illustrate that Waldman’s words still remain true. Several of these challenges are concerned with practical applications, as befits chemistry’s role as the most applied and arguably the most useful of the central sciences. But there are also questions about foundations, for the popular idea that chemistry is now conceptually understood, and that all we have to do is use it, is false. It has been only in the past several decades, for example, that the centrality of the non-covalent bond in the chemistry of life has been appreciated, and this sort of ‘temporary stickiness’ of molecules has been recognized as a key aspect of any technological applications, from molecular machines and nanotechnology to the development of surface coatings. Chemistry retains deep intellectual as well as practical challenges.
The last word should also go to Shelley’s Professor Waldman, who tells Victor Frankenstein that “a man would make but a very sorry chemist if he attended to that department of human knowledge alone”. You could perhaps say the same for any branch of science, but it is particularly true for chemistry, which depends not just on understanding the world but on finding creative expressions of that knowledge. The creative opportunities for chemists lie everywhere: in making vehicles cleaner, producing artificial leaves, inventing new colours for artists, altering the fate of cells and comprehending the fate of stars. Chemistry is as limitless as art, because it is one.
1. The origins of life, and how life could be different on other planets.
The chemical origin of life used to be a rather parochial topic. That’s not to diminish the profundity, or the difficulty, of the question of how life began on Earth. But now that we have a better view of some of the strange and potentially fertile environments in our solar system – the occasional flows of water on Mars, the petrochemical seas of Saturn’s moon Titan and the cold, salty oceans that seem to lurk under the ice of Jupiter’s moons Europa and Ganymede – the origin of terrestrial life seems only a part of a grander question: under what circumstances can life arise, and how widely can its chemical basis vary? That issue is made even richer by the discovery over the past 16 years of more than 500 extrasolar planets orbiting other stars – worlds of bewildering variety, forcing us to broaden our imagination about the possible chemistries of life. For instance, while NASA has long pursued the view that liquid water is a prerequisite, now we’re not so sure. How about liquid ammonia, or formamide (HCONH2), or an oily solvent like liquid methane, or supercritical hydrogen on Jupiter? And why should life restrict itself to DNA and proteins – after all, several artificial chemical systems have now been made that exhibit a kind of replication from the component parts without relying on nucleic acids. All you need, it seems, is a molecular system that can serve as a template for making a copy, and then detach itself.
Fixating on terrestrial life is a hang-up, but if we don’t, it’s hard to know where to begin. Looking at life on Earth, says chemist Steven Benner of the University of Florida, “we have no way to decide whether the similarities [such as the use of DNA and proteins] reflect common ancestry or the needs of life universally.” But if we retreat into saying that we’ve got to stick with what we know, he says, “we have no fun.”
All the same, Earth is the only locus of life that we know of, and so it makes sense to start here in trying to understand how matter can come alive and, eventually, know itself. This process seems to have begun extremely quickly in geological terms: there are fossil signs of early life dating back almost to the time that the oceans first formed. On that basis, it looks easy – some suspect, even inevitable. The challenge is no longer to come up with vaguely plausible scenarios, for there are plenty – polymerization catalysed by minerals, chemical complexity fuelled by hydrothermal vents, the RNA world. No, the game is to figure out how to make these more than just suggestive reactions coddled in the test tube. Researchers have made conspicuous progress in recent years, showing for example that certain relatively simple chemicals can spontaneously react to form the more complex ingredients of living systems, such as amino acids and the nucleotide building blocks of DNA and RNA. In 2009, a team led by John Sutherland, now at the MRC Laboratory of Molecular Biology in Cambridge, England, was able to demonstrate the formation of nucleotides from molecules likely to have existed in the primordial broth. Other researchers have focused on the ability of some RNA strands to act as enzymes, providing evidence in support of the RNA world hypothesis. Through such steps, scientists may progressively bridge the gap from inanimate matter to self-replicating, self-sustaining systems.
Perhaps the dawn of synthetic biology, which includes the construction of primitive lifelike entities from scratch, will help to bridge the gap between the geological formation of simple organic ingredients, as demonstrated by Harold Urey and Stanley Miller in their famous ‘spark’ experiments more than 50 years ago, and the earliest cells.
2. Understanding the nature of the chemical bond and modeling chemistry on the computer.
“The chemistry of the future”, wrote the zoologist D’Arcy Wentworth Thompson in 1917, “must deal with molecular mechanics by the methods and in the strict language of mathematics”. Just 10 years later that seemed possible: the physicists Walter Heitler and Fritz London showed how to describe a chemical bond using the equations of then nascent quantum theory, and the great American chemist Linus Pauling proposed that bonds form when the electron orbitals of different atoms can overlap in space. A competing theory by Robert Mulliken and Friedrich Hund suggested that bonds are the result of atomic orbitals merging into “molecular orbitals” that extend over more than one atom. Theoretical chemistry seemed about to become a branch of physics.
Nearly 100 years later the molecular-orbital picture has become the most common one, but there is still no consensus among chemists that it is always the best way to look at molecules. The reason is that this model, like all the others, is based on simplifying assumptions and is thus an approximate, partial description. In reality, a molecule is a bunch of atomic nuclei in a cloud of electrons, with opposing electrostatic forces fighting a constant tug-of-war, and all components constantly moving and reshuffling. Existing models of the molecule usually try to crystallize this dynamic entity into a static one, and may capture some of its salient properties but neglect others.
Quantum theory is unable to supply a unique definition of chemical bonds that accords with the intuition of chemists whose daily business it is to make and break them. There are now many ways of assigning bonds to the quantum description of molecules as electrons and nuclei. According to quantum chemist Dominik Marx of the University of Bochum in Germany, “some are useful in some cases but fail in others and vice versa”. As a result, he says, “there will always be a search, and thus controversy, for ‘the best method’”.
This is no obstacle to calculating the structures and properties of molecules from quantum first principles – something that can be done to great accuracy if the number of electrons is relatively small. “Computational chemistry can be pushed to the level of utmost realism and complexity”, says Marx. As a result, computer calculations can increasingly be regarded as a kind of virtual experiment that predicts the outcome of a reaction.
But the challenge is to extend these approaches to increasingly complex cases. On the one hand, that may mean simply modelling more molecules. Can a computer model capture the complicated environment inside cells, for example, where many molecules large and small interact, aggregate and react within the responsive, protean medium of salty water? At the moment, most descriptions of such processes use highly simplified descriptions of bonding in which atoms are little more than balls on springs. Can computational chemistry help us understand, say, the detailed workings of a vast biomolecular machine like the ribosome?
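The balls-on-springs picture can be made concrete in a few lines. Below is a minimal molecular-dynamics sketch of a single diatomic with a harmonic bond, in arbitrary units – the force constant, mass and time step are invented for illustration, and real force fields add angle, torsion and non-bonded terms – but the integration loop is the same idea:

```python
def simulate_diatomic(k=1.0, r0=1.0, m=1.0, dt=0.01, steps=1000):
    """Two atoms on a line joined by a harmonic 'spring' bond:
    V(r) = 0.5*k*(r - r0)**2 -- the balls-on-springs caricature."""
    x1, x2 = 0.0, r0 + 0.2          # start with the bond stretched
    v1 = v2 = 0.0
    f = k * ((x2 - x1) - r0)        # force on atom 1 (atom 2 feels -f)
    energies = []
    for _ in range(steps):
        # velocity Verlet: half-kick, drift, recompute force, half-kick
        v1 += 0.5 * dt * f / m
        v2 -= 0.5 * dt * f / m
        x1 += dt * v1
        x2 += dt * v2
        f = k * ((x2 - x1) - r0)
        v1 += 0.5 * dt * f / m
        v2 -= 0.5 * dt * f / m
        kinetic = 0.5 * m * (v1 ** 2 + v2 ** 2)
        potential = 0.5 * k * ((x2 - x1) - r0) ** 2
        energies.append(kinetic + potential)
    return energies

E = simulate_diatomic()
drift = max(E) - min(E)   # near-zero if the integrator conserves energy
```

Scale this up to hundreds of thousands of atoms with many coupled spring-like terms and you have, roughly, the simplified machinery currently used to model the crowded interior of a cell.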
On the other hand, can computational methods capture complex chemical processes and behavior, such as catalysis? Attempts to do so tend at the moment to rely on ways of bridging the calculations to intuitive expectations. One promising approach, being developed by Jörg Behler at Bochum, uses neural networks to deduce the energy surfaces on which these reactions happen. It also remains hard to predict subtle behaviour such as superconductivity. But already new materials have been discovered by computation – perhaps in times to come that will become the norm.
3. Graphene and carbon nanotechnology: sculpting with carbon.
The discovery of fullerenes – hollow, cagelike molecules made entirely of carbon – in 1985 was literally the start of something much bigger. The polyhedral shells of these molecules showed how the flat sheets of carbon atoms that make up graphite – where they are joined into hexagonal rings tiled side by side, like chicken wire – can be curved by including some pentagonal rings. With precisely 12 pentagons, the structure curls up into a closed shell. Six years later tubes of graphite-like carbon just a few nanometers in diameter, called carbon nanotubes, fostered the idea that this sort of carbon can be moulded into all manner of curved nanoscale structures. Being hollow, extremely strong and stiff, and electrically conducting, carbon nanotubes promised applications ranging from high-strength carbon composites to tiny wires and electronic devices, miniature molecular capsules and water-filtration membranes.
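That ‘precisely 12 pentagons’ follows from Euler’s polyhedron formula V − E + F = 2: for any closed cage in which each carbon atom bonds to three neighbours and every face is a pentagon or hexagon, the hexagon count drops out and the pentagon count is forced to be exactly 12. A quick check:

```python
def euler_characteristic(p, h):
    """V - E + F for a closed trivalent cage of p pentagons, h hexagons."""
    E = (5 * p + 6 * h) / 2   # each edge is shared by two faces
    V = 2 * E / 3             # each carbon atom meets exactly three edges
    F = p + h
    return V - E + F

# A closed shell needs V - E + F == 2, which forces p == 12 for any h:
assert euler_characteristic(12, 0) == 2    # dodecahedral C20
assert euler_characteristic(12, 20) == 2   # buckminsterfullerene C60
assert euler_characteristic(10, 20) != 2   # any other pentagon count fails
```

The number of hexagons, by contrast, is unconstrained – which is exactly why the same chicken-wire fabric can be curled into cages, tubes and sheets of any size.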
Now graphite itself has moved centre stage, thanks to the discovery that it can be separated into individual sheets, called graphene, that could supply the fabric for ultra-miniaturized, cheap and robust electronic circuitry. Graphene garnered the 2010 Nobel prize in physics, but the success of this and other forms of carbon nanotechnology might ultimately depend on chemistry. For one thing, ‘wet’ chemical methods may prove the cheapest and simplest for separating graphite into its component sheets. “Graphene can be patterned so that the interconnect and placement problems of carbon nanotubes are overcome”, says carbon specialist Walt de Heer of the Georgia Institute of Technology.
Some feel, however, that graphene has so far been over-hyped in a way that plays down the hurdles to making it a viable technology. “The hype is extreme”, says de Heer. “Many of the newly claimed superlative graphene properties are really graphite properties ‘under new management’ and were known and used for a very long time.” He believes graphitic electronics has not yet been shown to be viable. “The best that has been done to date is to show that ultrathin graphite (including graphene) can be gated [switched electronically, as in transistors]. But the gating is quite poor, since you cannot turn it completely off. Most people would not consider this to be even a starting point for electronics.” And he says that existing methods of graphene patterning are so crude that the edges undo any advantage that graphene nanoribbons have to offer. However, narrow ribbons and networks can be made to measure with atomic precision by using the techniques of organic chemistry to build them up from ‘polyaromatic’ molecules, in which several hexagonal carbon rings are linked together like little fragments of a graphene sheet. It seems quite possible that graphene technology will depend on clever chemistry.
[Watch this space: I’ve just written a piece on graphene for BBC’s pop-sci magazine Focus, which explores all these things in greater depth.]
4. Artificial photosynthesis.
Of all the sources of ‘clean energy’ available to us, sunlight seems the most tantalizing. With every sunrise comes a reminder of the vast resource of which we currently tap only a pitiful fraction. The main problem is cost: the expense of conventional photovoltaic panels made of silicon still restricts their use. But life on Earth, almost all of which is ultimately solar-powered by photosynthesis, shows that solar cells don’t have to be terribly efficient if, like leaves, they can be made abundantly and cheaply enough.
Yet ‘artificial photosynthesis’ and the ‘artificial leaf’ are slippery concepts. Do they entail converting solar to chemical energy, just as the leaf uses absorbed sunlight to make the biological ‘energy molecule’ ATP? Or must the ‘artificial leaf’ mimic photosynthesis by splitting water to make hydrogen – a fuel – and oxygen?
“Artificial photosynthesis means different things to different people”, says photochemist Devens Gust of Arizona State University. “Some people call virtually any sort of solar energy conversion that involves electricity or fuels artificial photosynthesis.” Gust himself reserves the term for photochemical systems that make fuels using sunlight: “I like to define it as the use of the fundamental scientific principles underlying natural photosynthesis for the design of technological solar-energy conversion systems.”
“One of the holy grails of solar energy research is using sunlight to produce fuels”, Gust explains. “In order to make a fuel, we need not only energy from sunlight, but a source of electrons, and some material to reduce to a fuel with those electrons. The source of electrons has to be water, if the process is to be carried out on a scale anything like that of human energy usage. The easiest way to make a fuel from this is to use the electrons to reduce the protons to hydrogen gas.” Nathan S. Lewis and his collaborators at Caltech are developing an artificial leaf that would do just that using silicon nanowires.
MIT chemist Daniel Nocera and his coworkers have recently announced an ‘artificial leaf’: a device the size of a credit card in which silicon solar cells and a photocatalyst containing metals such as nickel and cobalt split water into hydrogen and oxygen, which can then be used to drive fuel cells. Nocera estimates that a gallon of water would provide enough fuel to power a home in the developing world for a day. “Our goal is to make each home its own power station”, he says. His start-up company Sun Catalytix aims to take the technology to a commercial level.
But “water oxidation is not a solved problem, even at a fundamental level”, according to Gust. “Cobalt catalysts such as the one that Nocera uses, and newly-discovered catalysts based on other common metals are promising”, he says, but there is still no potentially inexpensive, ideal catalyst. “We don’t know how the natural photosynthetic catalyst, which is based on four manganese atoms and a calcium atom, works”, Gust adds.
Carbon-based fuels are easier than hydrogen to transport, store and integrate with current technologies. Photosynthesis makes carbon-based fuels (sugars, ATP) using sunlight. Gust and his colleagues have been working on making molecular assemblies for artificial photosynthesis that more closely mimic their biological inspiration. “We know how to make artificial antenna systems and photosynthetic reaction centers that work in the lab, but questions about stability remain, as they are usually based at least in part on organic molecules.” He admits that “we are not very close to a technologically useful catalyst for converting carbon dioxide to a useful liquid fuel.” On the other hand, he says, “the recent increase in funding worldwide for solar fuels has meant that many more researchers have gotten into the game.” If this funding can be preserved, he anticipates “really significant advances.” Let’s hope so, since, as Gust says, “we desperately need a fuel or energy source that is abundant, inexpensive, environmentally benign, and readily available.”
5. Devising catalysts for making biofuels.
The demand for biofuels – fuels made by converting organic matter, primarily plants – isn’t driven just by concern for the environment. While it’s true that a biofuel economy is notionally sustainable – carbon emissions from burning the fuels are balanced by the carbon dioxide taken up in growing the fuel crops – the truth is that it’s increasingly hard to find any good alternatives. Liquid fossil fuels (oil and petroleum products) remain the main energy source globally, and are forecast to remain so until at least mid-century. But several estimates suggest that, at current production rates, we have only about 50 years’ worth of oil reserves left. What’s more, most of these are in politically unstable parts of the world. And currently soaring prices are expected to continue – the days of cheap oil are over.
There’s nothing new about biofuels: time was when there was only wood to burn in winter, or peat or dried animal dung. But that’s a very inefficient way to use the energy bound up in carbon-based molecules. Today’s biofuels are mostly ethanol made from fermenting corn, sugar-cane or switchgrass, or biodiesel, an ester made from the lipids in rapeseed or soybean oils. The case for biofuels seems easy to make – as well as being potentially greener and offering energy security, they can come from crops grown on land unsuitable for food agriculture, and can boost rural economies.
But the initial optimism about biofuels cooled quickly. For one thing, they threaten to displace food crops, particularly in developing countries where selling biofuels abroad can be more lucrative than feeding people at home. And the numbers are daunting: meeting current oil demand will mean requisitioning huge areas of arable land. But these figures depend crucially on how efficiently the carbon is used. Some parts of plants, particularly the resinous lignin, can’t easily be turned into biofuel, especially by biological fermentation. Finding new chemical catalysts to assist this process looks essential if biofuels are to fly.
One of the challenges of breaking down lignin – cracking open its ‘aromatic C–O bonds’, in which an oxygen atom bridges benzene rings – was recently met by John Hartwig and Alexey Sergeev of the University of Illinois, who found a nickel-based catalyst that will do the trick. Hartwig points out that, if biomass is to supply non-fossil-fuel chemical feedstocks as well as fuels, it will need to offer aromatic compounds – of which lignin is the only major potential source.
It’s a small part of a huge list of challenges: “There are issues at every level”, says Hartwig. Some of these are political – a carbon tax, for example, could decide the economic viability of biofuels. But many are chemical. The changes in infrastructure and engineering needed for an entirely new liquid fuel (more or less pure alcohol) are so vast that it seems likely that biofuels will need to be compatible with existing technology – in other words, to be hydrocarbons. That means converting the oxidized compounds in plant matter to reduced ones. Not only does this require catalysts, but it also demands a source of hydrogen – either from fossil fuels or, ideally but dauntingly, from the splitting of water.
And fuels will need to be liquid for easy transportation along pipelines. But biomass is primarily solid. Liquefaction would need to happen on site where the plant is harvested. And one of the difficulties for catalytic conversion is the extreme impurity of the reagent – classical chemical synthesis does not tend to allow for reagents such as ‘wood’. “There’s no consensus on how all this will be done in the end”, says Hartwig. But an awful lot of any solution lies with the chemistry, especially with finding the right catalysts. “Almost every industrial reaction on a large scale has a catalyst associated”, Hartwig points out.
6. Understanding the chemical basis of thought and memory.
The brain is a chemical computer. Interactions between the neurons that form its circuitry are mediated by molecules: neurotransmitters that pass across the synaptic spaces where one neural cell wires up to another. This chemistry of the mind is perhaps at its most impressive in the operation of memory, in which abstract principles and concepts – a telephone number, say – are imprinted in states of the neural network by sustained chemical signals. How does chemistry create a memory that is at the same time both persistent and dynamic: susceptible to recall, revision and forgetting?
We now know that a cascade of biochemical processes, leading to a change in production of neurotransmitter molecules at the synapse, triggers ‘learning’ for habitual reflexes. But even this ‘simple’ aspect of learning has short- and long-term stages. Meanwhile, more complex so-called ‘declarative’ memory (of people, places and so on) has a different mechanism and location in the brain, involving the activation of a protein called the NMDA receptor by the excitatory neurotransmitter glutamate. Blocking these receptors with drugs prevents the retention of many kinds of declarative memory.
Our everyday declarative memories are often encoded in a process called long-term potentiation (LTP), which involves NMDA receptors and is accompanied by an expansion of the synapse, the region of a neuron involved in its communication with others. As the synapse grows, so does the ‘strength’ of its connection with its neighbours. The biochemistry of this process has been clarified in the past several years. It involves stimulating the formation of filaments within the neuron made from the protein actin – the basic scaffolding of the cell, which determines its size and shape. But that process can be undone by biochemical agents that block the newly formed filaments, during a short period before the change is consolidated.
Once encoded, long-term memory for both simple and complex learning is actively maintained by switching on genes that produce proteins. It now appears that this can involve a self-perpetuating chemical reaction of a prion, a protein molecule that can switch between two different conformations. This switching process was first discovered for its role in neurodegenerative disease, but prion mechanisms have now been found to have normal, beneficial functions too. The prion protein is switched from a soluble to an insoluble, aggregated state that can then perpetuate itself autocatalytically, and which ‘marks’ a particular synapse to retain a memory.
There are still big gaps in the story of how memory works, many of which await filling with the chemical details. How, for example, is memory recalled once it has been stored? “This is a deep problem whose analysis is just beginning”, says neuroscientist and Nobel laureate Eric Kandel of Columbia University. It may involve the neurotransmitters dopamine and acetylcholine. And what happens at the molecular level when things go wrong, for example in Alzheimer’s-related memory loss and other cognitive disorders that affect memory? Addressing and perhaps even reversing such problems will require a deeper understanding of the many biochemical processes in memory storage, including a better understanding of the chemistry of prions – which in turn seems to point us increasingly towards a more fundamental grasp of protein structure and how it is shaped by evolution.
Getting to grips with the chemistry of memory offers the enticing, and controversial, prospect of pharmacological enhancement. Some memory-boosting substances are already known: neuropeptides, sex steroids and chemicals that act on receptors for nicotine, glutamate, serotonin and other neurotransmitters and their mimics have all been shown to enhance memory. In fact, according to neurobiologist Gary Lynch at the University of California at Irvine, the complex sequence of steps leading to long-term learning and memory means that there are a large number of potential targets for such ‘memory drugs’. However, there’s so far little evidence that known memory boosters improve cognitive processing more generally – that’s to say, it’s not clear that they actually make you smarter. Moreover, just about all studies so far have been on rodents and monkeys, not humans.
Yet it seems entirely possible that effective memory enhancers will be found. Naturally, such possibilities raise a host of ethical and social questions. One might argue that using such drugs is not so different from taking vitamins to improve health, or sleeping pills to get a much-needed good rest, and that it can’t be a bad thing to allow people to become brighter. But can it be right for cognitive enhancement to be available only for those who can afford it? In manipulating the brain’s chemistry, are we modifying the self? As our knowledge and capabilities advance, such ethical questions will become unavoidable.
7. Understanding the chemical basis of epigenetics.
Cells, like humans, become less versatile and more narrowly focused as they age. Pluripotent stem cells present in the early embryo can develop into any tissue type; but as the embryo grows, cells ‘differentiate’, acquiring specific roles (such as blood, muscle or nerve cells) that remain fixed in their progeny. One of the revolutionary discoveries in research on cloning and stem cells, however, is that this process isn’t irreversible. Cells don’t lose genes as they differentiate, retaining only those they need. Rather, the genes are switched off but remain latent – and can be reactivated. The recent discovery that a cocktail of just four proteins is sufficient to cause mature differentiated cells to revert to a stem-cell-like state, becoming induced pluripotent stem (iPS) cells, might not only transform regenerative medicine but also alter our view of how the human body grows from a fertilized egg.
Like all of biology, this issue has chemistry at its core. It’s slowly becoming clear that the versatility of stem cells, and its gradual loss during differentiation, results from the chemical changes taking place in the chromosomes. Whereas the old idea of biology makes it a question of which genes you have, it is now clear that an equally important issue is which genes you use. The formation of the human body is a matter of chemically modifying the stem cells’ initial complement of genes to turn them on and off.
What is particularly exciting and challenging for chemists is that this process seems to involve chemical events happening at size scales greater than those of atoms and molecules: at the so-called mesoscale, involving the interaction and organization of large molecular groups and assemblies. Chromatin, the mixture of DNA and proteins that makes up chromosomes, has a hierarchical structure. The double helix is wound around cylindrical particles made from proteins called histones, and this ‘string of beads’ is then bundled up into higher-order structures that are poorly understood. Yet it seems that cells exert great control over this packing – how and where a gene is packed into chromatin may determine whether it is ‘active’ or not. Cells have specialized enzymes for reshaping chromatin structure, and these have a central role in cell maturation and differentiation. Chromatin in embryonic stem cells seems to have a much looser, open structure: as some genes fall inactive, the chromatin becomes increasingly lumpy and organized. “The chromatin seems to fix and maintain or stabilize the cells’ state”, says pathologist Bradley Bernstein of the Massachusetts General Hospital in Boston.
What’s more, this process is accompanied by chemical modification of both DNA and histones. Small-molecule tags become attached to them, acting as labels that modify or silence the activity of genes. How fully mature cells can be returned to pluripotency – whether iPS cells are as good as true stem cells, a vital issue for their use in regenerative medicine – seems to hinge largely on how far this so-called epigenetic marking can be reset. If iPS cells remember their heritage (as it seems they partly do), their versatility and value could be compromised. On the other hand, some histone marks seem actually to preserve the pluripotent state.
It is now clear that there is another entire chemical language of genetics – or rather, of epigenetics – beyond the genetic code of the primary DNA sequence, in which some of the cell’s key instructions are written. “The concept that the genome and epigenome form an integrated system is crucial”, says geneticist Bryan Turner of the University of Birmingham in the UK.
The chemistry of chromatin and particularly of histone modifications may be central to how the influence of our genes gets modified by environmental factors. “It provides a platform through which environmental components such as toxins and foodstuffs can influence gene expression”, says Turner. “We are now beginning to understand how environmental factors influence gene function and how they contribute to human disease. Whether or not a genetic predisposition to disease manifests itself will often depend on environmental factors operating through these epigenetic pathways. Switching a gene on or off at the wrong time or in the wrong tissue can have effects on cell function that are just as devastating as a genetic mutation, so it’s hardly surprising that epigenetic processes are increasingly implicated in human diseases, including cancer.”
8. Finding new ways to make complex molecules.
The core business of chemistry is a practical, creative one: making molecules. But the reasons for doing so have changed. Once, the purpose of constructing a large natural molecule such as vitamin B12 by painstaking atom-by-atom assembly was to check the molecular structure. If what you build, knowing where each atom is going, is the same as what nature makes, it presumably has the same structure. But we’re now good enough at deducing structures from methods such as X-ray crystallography – often for molecules that it would be immensely hard to make anyway – that this justification is hard to sustain.
Maybe it’s worth making a molecule because it is useful – as a drug, say. That’s true, but the more complicated the molecule, the less useful its synthesis from scratch (‘total synthesis’) tends to be, because of the cost and the small yield of the product after dozens of individual steps. Better, often, to extract the molecule from natural sources, or to use living organisms to make it or part of it, for example by equipping bacteria or yeast with the necessary enzymes.
And total synthesis is typically slow – even if rarely as slow as the 11-year project to make vitamin B12 that began in 1961. Yet new molecules and drugs are often needed very fast – for example, new antibiotics to outstrip the rise of resistant microorganisms.
As a result, total synthesis is “a lot harder to justify than it once was”, according to industrial chemist Derek Lowe. It’s a great training ground for chemists, but are there now more practical ways to make molecules? One big hope was combinatorial chemistry, in which new and potentially useful molecules were made by a random assembly of building blocks followed by screening to identify those that do a job well. Once hailed as the future of medicinal chemistry, ‘combi-chem’ fell from favour as it failed to generate anything useful.
But after the initial disappointments, combi-chem may enjoy a brighter second phase. It seems likely to work only if you can make a wide enough range of molecules and find good ways of picking out the minuscule amounts of successful ones. Biotechnology might help here – for example, each molecule could be linked to a DNA-based ‘barcode’ that both identifies it and aids its extraction. Or cell-based methods might coax combinatorial schemes towards products with particular functions using guided (‘directed’) evolution in the test tube.
There are other new approaches to bond-making too, which draw on nature’s mastery of uniting fragments in highly selective yet mild ways. Proteins, for example, have a precise sequence of amino acids determined by the base sequence of the messenger RNA molecule on which they are assembled in the ribosome. Using this model, future chemists might program molecular fragments to assemble autonomously in highly selective ways, rather than relying on the standard approach of total synthesis that involves many independent steps, including cumbersome methods for protecting the growing molecule from undesirable side reactions. For example, David Liu at Harvard University and his coworkers have devised a molecule-making strategy inspired by nature’s use of nucleic-acid templates to specify the order in which units are linked together. They tagged small molecules with short DNA strands that ‘programme’ them for linkage on a DNA template. And they have created a ‘DNA walker’ which can step along a template strand sequentially attaching small molecules dangling from the strand to produce a macromolecular chain – a process highly analogous to protein synthesis on the ribosome, essentially free from undesirable side reactions. This could be a handy way to tailor new drugs. “Many molecular life scientists believe that macromolecules will play an increasingly central, if not dominant, role in the future of therapeutics”, says Liu.
9. Integrating chemistry: creating a chemical information technology.
Increasingly, chemists don’t simply want to make molecules but also to communicate with them: to make chemistry an information technology that will interface with anything from living cells to conventional computers and fibre-optic telecommunications. In part, this is an old idea: biosensors in which chemical reactions are used to report on concentrations of glucose in the blood date back to the 1960s, although only recently has their use for monitoring diabetes been cheap, portable and widespread. Chemical sensing has countless applications – to detect contaminants in food and water at very low concentrations, say, or to monitor pollutants and trace gases in the atmosphere.
But it is in biomedicine that chemical sensors have the most dramatic potential. Some of the products of cancer genes circulate in the bloodstream long before the condition becomes apparent to regular clinical tests – if they could be detected early, prognoses would be vastly improved. Rapid genomic profiling would enable drug regimes to be tailored to individual patients, reducing risks of side-effects and allowing some medicines to be used that today are hampered by their dangers to a genetic minority. Some chemists foresee continuous, unobtrusive monitoring of all manner of biochemical markers of health and disease, perhaps in a way that is coupled remotely to alarm systems in doctors’ surgeries or to automated systems for delivering remedial drug treatments. All of this depends on developing chemical methods for sensing and signaling with high selectivity and often at very low concentrations. “Advances are needed in improving the sensitivity of such systems so that biological intermediates can be detected at much lower levels”, says chemist Allen Bard of the University of Texas at Austin. “This raises a lot of challenges. But such analyses could help in the early detection of disease.”
Integrated chemical information systems might go much further still. Prototype ‘DNA computers’ have been developed in which strands of bespoke DNA in the blood can detect, diagnose and respond to disease-related changes in gene activity. Clever chemistry can also couple biological processes to electronic circuitry, for example so that nerve cells can ‘speak’ to computers. Information processing and logic operations can be conducted between individual molecules. The photosynthetic molecular apparatus of some organisms even seems able to manipulate energy using the quantum rules that physicists are hoping to exploit in super-powerful quantum computers. It is conceivable that mixtures of molecules might act as super-fast quantum computers to simulate the quantum behavior of other molecules, in ways that are too computationally intensive on current machines. According to chemistry Nobel laureate Jean-Marie Lehn of the University of Strasbourg, this move of chemistry towards what he calls a science of informed (and informative) matter “will profoundly influence our perception of chemistry, how we think about it, how we perform it.”
10. Exploring the limits of applicability of the periodic table, and new forms of matter that lie outside it.
The periodic tables that adorn the walls of classrooms are now having to be constantly revised, because the number of elements keeps growing. Using particle accelerators to crash atomic nuclei together, scientists can create new ‘superheavy’ elements, with more protons and neutrons than the 92 or so elements found in nature. These engorged nuclei are not very stable – they decay radioactively, often within a tiny fraction of a second. But while they exist, the new ‘synthetic’ elements such as seaborgium (element 106) and hassium (108) are like any other insofar as they have well defined chemical properties. In dazzling experiments, the properties of both of these synthetic elements have been investigated using just a handful of these elusive atoms in the instant before they fall apart.
Such studies probe not just the physical but the conceptual limits of the periodic table: do these superheavy elements continue to display the trends and regularities in chemical behavior that make the table periodic in the first place? Some do, and some don’t. In particular, such massive nuclei hold on to the atoms’ innermost electrons so tightly that they move at close to the speed of light. Then the effects of special relativity increase their mass and play havoc with the quantum energy states on which their chemistry – and thus the table’s periodicity – depends.
Because nuclei are thought to be stabilized by particular ‘magic numbers’ of protons and neutrons, some researchers hope to find an ‘island of stability’, a little beyond the current capabilities of element synthesis, in which these superheavies live for longer. But is there any fundamental limit to their size? A simple calculation suggests that relativity prohibits electrons from being bound to nuclei of more than 137 protons. But more sophisticated calculations defy that limit. “The periodic system will not end at 137; in fact it will never end”, insists nuclear physicist Walter Greiner of the Johann Wolfgang Goethe University in Frankfurt, Germany. The experimental test of that claim remains a long way off.
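The ‘simple calculation’ alluded to here can be sketched in a few lines. In the naive Bohr model, the innermost (1s) electron of element Z orbits at a fraction v/c = Zα of the speed of light, where α ≈ 1/137 is the fine-structure constant – so Z = 137 is where that electron would naively reach light speed, and the relativistic mass increase grows with the Lorentz factor. The snippet below is only this back-of-envelope picture, not the more sophisticated calculations Greiner refers to:

```python
# Rough Bohr-model estimate of relativistic effects on the innermost (1s) electron.
# In this naive picture the 1s electron's speed is v/c = Z * alpha, where
# alpha ~ 1/137 is the fine-structure constant, so Z = 137 gives v = c.

ALPHA = 1 / 137.035999  # fine-structure constant (CODATA value, dimensionless)

def inner_electron_speed_fraction(Z):
    """Bohr-model estimate of v/c for the 1s electron of element Z."""
    return Z * ALPHA

def relativistic_mass_factor(Z):
    """Lorentz factor gamma = 1/sqrt(1 - (v/c)^2) for that electron."""
    beta = inner_electron_speed_fraction(Z)
    if beta >= 1:
        raise ValueError(f"Z = {Z}: naive model puts the electron at or past light speed")
    return 1 / (1 - beta**2) ** 0.5

for Z, name in [(79, "gold"), (106, "seaborgium"), (118, "oganesson")]:
    beta = inner_electron_speed_fraction(Z)
    gamma = relativistic_mass_factor(Z)
    print(f"{name:12s} Z={Z:3d}  v/c ~ {beta:.2f}  mass increase ~ {gamma:.2f}x")
```

Even for gold the effect is substantial (it is what shifts gold’s colour), and for the superheavies it dominates – which is why the trends of the periodic table start to bend.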
Besides extending the periodic table, chemists are stepping outside it. Conventional wisdom has it that the table enumerates all the ingredients that chemists have at their disposal. But that’s not quite true. For one thing, it has been found that small clusters of atoms can act collectively like single ‘giant’ atoms of other elements. A so-called ‘superatom’ of aluminum containing precisely 13 atoms will behave like a giant iodine atom, while an Al14 cluster behaves like an alkaline earth metal. “We can take one element and have it mimic several different elements in the Periodic Table”, says Shiv Khanna of Virginia Commonwealth University in Richmond, Virginia. It’s not yet clear how far this superatom concept can be pushed, but according to one of its main advocates, A. Welford Castleman of Pennsylvania State University, it potentially makes the periodic table three-dimensional, each element being capable of mimicking several others in suitably sized clusters. There’s no fundamental reason why such superatoms have to contain just one element either, nor why the ‘elements’ they mimic need be analogues of others in the table.
Furthermore, physicists have made exotic ‘atoms’ that abandon the traditional recipe of a nucleus of protons (and perhaps neutrons) surrounded by electrons. In ‘muonic hydrogen’, the electron’s heavier cousin, the muon, replaces the electron to make a compact kind of heavy hydrogen. And the anti-electron, or positron, can act as the positive nucleus of ‘positronium’, a super-light analogue of hydrogen. A slightly heftier light variant, called ‘muonium’, instead replaces the central proton with a positively charged antimuon. These synthetic atoms have been used to test aspects of the quantum theory of chemical reactions. And by comparing the spectrum of muonic hydrogen with that of ordinary hydrogen, researchers have been able to obtain a new, more precise measure of the size of the proton.
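The energy scales of such two-body ‘hydrogens’ follow from a simple reduced-mass argument: binding energies scale with the reduced mass μ = m₁m₂/(m₁ + m₂) of the orbiting pair. A minimal sketch (masses in units of the electron mass; ‘muonium’ here denotes the antimuon–electron atom) shows why positronium’s energy levels sit at roughly half of hydrogen’s while muonium’s are nearly identical:

```python
# Reduced-mass estimate of the energy scale of exotic hydrogen analogues.
# A two-body atom behaves like hydrogen with the electron mass replaced by the
# reduced mass mu = m1*m2/(m1 + m2); binding energies scale in proportion to mu.

M_E = 1.0           # electron (and positron) mass, in electron masses
M_MU = 206.768      # muon mass
M_P = 1836.153      # proton mass

def reduced_mass(m_nucleus, m_orbiter=M_E):
    return m_nucleus * m_orbiter / (m_nucleus + m_orbiter)

def binding_energy_eV(m_nucleus, m_orbiter=M_E):
    """Ground-state binding energy, scaled from hydrogen's 13.6 eV."""
    mu_hydrogen = reduced_mass(M_P)
    return 13.6 * reduced_mass(m_nucleus, m_orbiter) / mu_hydrogen

print(f"hydrogen    (p+ e-):  {binding_energy_eV(M_P):6.2f} eV")
print(f"muonium     (mu+ e-): {binding_energy_eV(M_MU):6.2f} eV")  # nearly hydrogen-like
print(f"positronium (e+ e-):  {binding_energy_eV(M_E):6.2f} eV")   # about half
```

This near-identity of muonium to hydrogen is exactly what makes it useful as a probe: any remaining spectral difference comes from the nucleus itself, not the chemistry.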