I seem to have ended 2010 with a little cluster of articles here and there. In Physics World I have a feature on single-molecule sequencing of DNA using nanopores – an exciting area that I’m now convinced is going to pay off some time soon, and which will demonstrate that advances in understanding of biology still frequently hinge on the technical capability that physics and chemistry supply. Oddly the December issue of Physics World seems still not to be in circulation or live online, but there’s a preview of the piece here. In Nature I have a couple of pieces to mark the Year of Chemistry in 2011 – an In Retrospect perspective on Linus Pauling’s classic text The Nature of the Chemical Bond and, as the main course to that hors d’oeuvre, an article on changing views of the chemical bond. The first of these is the first item below (the long version, with material that was rightly cut for the published version); the second is too long for that, but will appear in this week’s issue of Nature. I have a follow-up on the Peter Debye story below as my Crucible column in the January Chemistry World; that’s the second item below. And finally, I have a piece in New Humanist that trails my next book Unnatural, coming out in February, which picks up on the forthcoming production of Frankenstein at the National Theatre, directed by Danny Boyle. I’m greatly looking forward to that performance, and hope to be reviewing it for Nature. The NH piece is graced by one of Martin Rowson’s fabulous illustrations – worth the cover price for this alone.
And Happy New Year to everyone.
***********************
Linus Pauling’s The Nature of the Chemical Bond has, like Newton’s Principia or Darwin’s Origin of Species, the kind of legendary status that is commonly deemed to obviate any obligation to read it. Every chemist learns of its transformative role in uniting the prevailing view of molecules as assemblies of atoms with the new quantum-mechanical picture of atomic wavefunctions. But the book is long, by chemists’ standards mathematical, and anyway we now know that there are more versatile and useful approaches to the quantum bond than Pauling’s.
Yet Pauling’s book remains a good primer on the basic facts of chemical bonding – impressive for a book almost 70 years old. That’s not to say that the book should be more widely read – there are naturally better and more relevant treatments of the subject now, and The Nature of the Chemical Bond does not benefit from the elegant prose of Darwin’s works – but it is still bracing to read it. The best preparation is to look first at what more or less contemporary textbooks have to say about bonding. To take two random examples: Inorganic Chemistry (Macmillan, 1922) by the eminent T. Martin Lowry, professor of physical chemistry at Cambridge, barely gets beyond John Dalton’s symbolic ‘ball’ molecules and Berzelius’s Law of Multiple Proportions (elements combine in simple ratios); Outlines of Physical Chemistry (16th edn, Methuen, 1930) by George Senter of Birkbeck College, a student of Wilhelm Ostwald and Nernst, doesn’t even mention the chemical bond but speaks in terms of affinities. They are products of the nineteenth century.
It’s true that this is not entirely representative, for by then descriptions of the chemical bond were already beginning to acknowledge atomic physics. The English chemist Edward Frankland introduced the term in 1866, but regarded it not as a physical connection, as implied by the practice then common of drawing lines between elemental symbols, but as a kind of force akin to that which binds the solar system. Berzelius suspected that this force was electrostatic: the attraction of oppositely charged ions. That view seemed favoured by J. J. Thomson’s discovery of the electron in 1897, since ions could result from an exchange of electrons between nuclei.
But Gilbert Lewis, another Nernst protégé at the University of California at Berkeley, argued that bonding results instead from sharing, not exchange, of electrons. More precisely, this gives rise to what Irving Langmuir later called a covalent bond, as opposed to the ionic bond that comes from electron exchange. In 1916 Lewis outlined the view that atoms are stabilized by having a full ‘octet’ of electrons, visualized as the corners of a cube, and that this might come about by sharing vertices or edges of the cubes. Langmuir popularized (in Lewis’s view, appropriated) this model, which seemed vindicated when Niels Bohr explained how the octets arise from quantum theory, as discrete electron shells.
Yet this remained a rudimentary grafting of quantum theory onto the notions that chemists used to rationalize molecular formulae. Pauling, a supremely gifted young man from a poor family in Oregon who won a scholarship to the prestigious California Institute of Technology in 1922, was convinced that chemical bonding needed instead to be understood from quantum first principles. He wasn’t (as sometimes implied) alone in that – in particular, Richard Tolman at Caltech held the same view. Pauling had a golden opportunity to develop the notion, however, when in 1926 a Guggenheim scholarship allowed him to travel to Europe to visit the architects of quantum theory: Bohr at Copenhagen, Arnold Sommerfeld at Munich and Erwin Schrödinger at Zurich. He also met Fritz London and Walter Heitler, who in 1927 published their quantum-mechanical description of the hydrogen molecule. Here they found an approximate way to write the wavefunction of the molecule which, when inserted into the Schrödinger equation, allowed them to calculate the binding energy, in reasonable agreement with experiment.
Pauling expanded this treatment to the molecular hydrogen ion H2+, and generalized it into a description called the valence-bond model. He considered that if the wavefunction that offers the lowest energy turns out to be one that is a combination of the wavefunctions of two or more structures, the molecule can be considered to ‘resonate’ between the structures. The molecule is then stabilized by ‘resonance energy’. “It is found that there are many substances whose properties cannot be accounted for by means of a single electronic structure of the valence-bond type, but which can be fitted into the scheme of classical valence theory by the consideration of resonance among two or more such structures.” For example, the H2+ ion can be considered a resonance between HA+·HB and HA·HB+: the electron resonates between the two nuclei.
Pauling also showed in a paper of 1928 how the bonding in molecules such as those of four-valent carbon can be explained in terms of the concept of ‘hybridization’, in which atomic electron orbitals (here the so-called 2s and three 2p orbitals) are ‘mixed’ into hybrid orbitals with a new geometric distribution in space: for carbon, they give rise to four sp3 orbitals which create a tetrahedral covalent bonding arrangement. These ideas were published in a series of papers in 1931 in the Journal of the American Chemical Society that formed the core of The Nature of the Chemical Bond. First published in 1939, the book went through three editions, the last in 1960. The scope of the book is breathtaking: it brings multiple, ionic, metallic and hydrogen bonds all within the framework, and explains how the ideas fit with observations of bond lengths and ionic sizes in X-ray crystallography, the technique that Pauling studied from the outset at Caltech and which eventually led to his seminal work in the 1950s on the structure of proteins and nucleic acids.
Pauling acknowledges in his book that it is a bit arbitrary to divide up the bonding into particular, resonating configurations of nuclei and electrons; but he says we do that all the time. “The description of the propane molecule as involving carbon-carbon single bonds and carbon-hydrogen single bonds is arbitrary; the concepts themselves are idealizations.” The wavefunction is all that really matters.
It is one thing to say it, however, and quite another to accept this arbitrariness in the face of an alternative. In the late 1920s, Robert Mulliken at the University of Chicago and Friedrich Hund in Göttingen devised a different quantum description of chemical bonding which approximated the electron wavefunctions in another way, giving rise to ‘molecular orbitals’ in which electrons were considered to be distributed over several nuclei. This model gave a rather simpler picture for explaining molecular electronic spectra: the quantum energy levels of electrons. What is more, it could offer a single description of some molecules for which the valence-bond approach needed to invoke resonance between a great many discrete structures. This was especially true for aromatic molecules such as benzene and its relatives: the VB model needed something like 48 separate structures for naphthalene, and, in the case of ferrocene described in the 3rd (1960) edition of The Nature of the Chemical Bond, no fewer than 560. Evidently, while neither the MO nor the VB model could lay claim to being more fundamental or ‘correct’, the former had significant advantages from a practical point of view. This was suspected even when Pauling’s book first appeared – some reviewers criticised him for not mentioning the rival theory, while one suspected that the VB method might triumph purely because of Pauling’s superior presentational skills. Pauling himself never accepted that MO theory was generally more useful, although it was the consensus among chemists by the 1970s.
The significance of The Nature of the Chemical Bond was not so much that it pioneered the quantum-mechanical view of bonding – London and Heitler had done that – but that it made this a chemical theory, a description that chemists could appreciate rather than an abstract physical account of wavefunctions. It recognized that, for a mathematical model of physical phenomena to be useful, it needs to accommodate itself to the intuitions and heuristics that scientists need in order to talk coherently about the problem. Emerging from the forefront of physics, this was nevertheless fundamentally a book for chemists.
**********************
In Kurt Vonnegut’s 1961 novel Mother Night, an American writer named Howard Campbell is brought to trial for his crimes as a Nazi propagandist during the Second World War. The apolitical Campbell decided to remain in Germany after Hitler came to power in 1933, where he is persuaded to make English radio broadcasts of Nazi propaganda. But he has also been enlisted by an operative of the US War Department to lace his broadcasts with intelligence messages coded in coughs and pauses. This role is never made public, and Campbell is constantly threatened with exposure of his ‘Nazism’ while trying to lead an anonymous life post-war in New York.
It would be unwise to stretch too far any parallels with the life of Peter Debye, the Dutch physical chemist who won the 1936 chemistry Nobel for his work on molecular structure and dipole moments. But Mother Night came to my mind after hearing the latest suggestion that Debye, who has been reviled in the past for alleged collaboration with the pre-war Nazi regime, might have been passing on information about German war technology to a spy for the British secret service in Berlin.
The evidence for that, outlined in a paper by retired chemist Jurrie Reiding after consulting Debye’s archival documents in America, is extremely circumstantial [1]. Debye was a lifelong friend of Paul Rosbaud, an Austrian chemist who hated the Nazis and spied for the Allies during the war under the codename ‘Griffin’. Reiding says that such a friendship would be inconceivable if Debye was a Nazi sympathizer. But there are no more than vague hints about whether Debye was actually one of Rosbaud’s informants.
Debye’s links with Nazism were asserted in a 2006 book Einstein in Nederland by the Dutch journalist Sybe Rispens, and were outlined in an article ‘Nobel Laureate with dirty hands’ published in a Dutch periodical in connection with the book. Here Rispens explained (as already known to historians) that Debye, as president of the German Physical Society (DPG), had signed a letter in 1938 expelling Jews from the society. Panicked by the media exposé, the University of Utrecht removed Debye’s name from its institute for nanomaterials science, while the University of Maastricht withdrew from an annual research prize named after Debye.
A follow-up report on the matter commissioned by the Netherlands Institute for War Documentation (NIOD) changed the accusation of collaboration to one of ‘opportunism’, and the decisions of both universities have now been reversed. But Debye’s name remained tainted in the Netherlands, despite protestations from many scientists both in Europe and in the US, where Debye worked at Cornell University after leaving Germany in 1940.
There’s good reason to think that Debye was no friend of the Nazis. He collected his Nobel prize against their expressed wishes, and they thought him far too friendly to the Jews in his role as DPG president. Indeed, he even – with Rosbaud’s assistance – helped the Jewish nuclear physicist Lise Meitner flee Germany.
And yet why did he stay in Germany so long, when others left? Roald Hoffmann at Cornell has argued that this inevitably taints Debye’s reputation. ‘In the period 1933-39’, he says, ‘Debye took on positions of administration and leadership in German science, aware that such positions would involve collaboration with the Nazi regime. The oppressive, undemocratic, and obsessively anti-Semitic nature of that regime was clear. Debye chose to stay and, through his assumption of prominent state positions within a scientific system that was part of the state, supported the substance and the image of the Nazi regime.’
Clearly Debye’s story is not one of heroic self-sacrifice; the issue is rather where mild resistance blends into passive collusion. Cornelis Gorter, a physicist at Leiden University who knew Debye well, said that (like Howard Campbell) ‘he was not at all a Nazi sympathizer but was apolitical.’ Yet it seems that, also like Campbell, his deeds can tell quite different narratives viewed from different perspectives. The accusation of opportunism in the NIOD report came largely because, having occupied positions of power in Nazi Germany, Debye went on to serve the US war effort enthusiastically, for example through his work on synthetic rubber. That could suggest ingratiating collaboration with any ruling power, but it also fits the picture of Debye striving to limit Nazi abuses before finally fleeing to oppose them more openly.
This situation is reminiscent also of the controversy about Werner Heisenberg, memorably explored in Michael Frayn’s play Copenhagen. Did Heisenberg actively drag his heels to thwart the Nazi efforts to make an atomic bomb, or did he simply get the physics wrong? Did he even know his motives himself? And if not, how can we hope to?
A clue to Debye’s position may lie in a letter he wrote to the physicist Arnold Sommerfeld just before he left Germany for good. His aim, he said, was ‘not to despair and always be ready to grab the Good which whisks by, without granting the Bad any more room than is absolutely necessary. That is a principle of which I have already made much use.’
But maybe the real moral is the one that Vonnegut adduced for Mother Night: ‘We are what we pretend to be, so we must be careful about what we pretend to be.’
1. Reiding, J. Ambix 57, 275-300 (2010).
Sunday, January 02, 2011
Thursday, December 16, 2010
All the world's words
Here's the pre-edited (but mostly identical) version of my story for Nature news on an intriguing paper in Science on data-mining of Google Books. There's the danger that in the wrong hands this kind of thing could end up supplanting textual and historical analysis with lexical statistics. But there's clearly a wealth of interesting stuff to be gleaned this way. And I thoroughly approve of a paper that is not afraid to show a sense of humour.
*********************************************************
The digitization of books by Google Books has provoked controversy over issues of copyright and book sales, but for linguists and cultural historians it could offer an unprecedented treasure trove. In a paper in Science[1], researchers at Harvard University and the Google Books team in Mountain View, California, herald a new discipline, called culturomics, which mines this literary bounty for insights into trends in what cultures can and will talk about through the written word.
Among the findings described by the collaboration, led by biologist Jean-Baptiste Michel at Harvard, are the size of the English language (around one million words in 2000), the typical ‘fame trajectories’ of well-known people, and the literary signatures of censorship such as that imposed by the German Nazi government.
‘The possibilities with such a new database, and the ability to analyze it in real time are really exciting’, says linguist Sheila Embleton of York University in Canada. She concurs with the authors’ claim that culturomics offers ‘a new type of evidence in the humanities.’
‘Quantitative analysis of this kind can reveal patterns of language usage and of the salience of a subject matter to a degree that would be impossible by other means’, agrees historian Patricia Hudson of Cardiff University in Wales.
‘The really great aspect of all this is using huge databases, but they will have to be used in careful ways, especially considering alternative explanations and teasing out the differences in alternatives from the database,’ says Royal Skousen, a linguist at Brigham Young University in Provo, Utah. But he is not won over by the term ‘culturomics’: ‘It smacks too much of ‘freakonomics’, and both terms smack of amateur sociology.’
Using statistical and computational techniques to analyse vast quantities of data in historical and linguistic research is nothing new in itself – the fields called quantitative history and quantitative linguistics are well established. But it is the sheer volume of the database created by Google Books that sets the new work apart.
So far, Google has digitized over 15 million books, representing about 12 percent of all those ever published. Michel and his colleagues performed their analyses on just a third of this sample, selected on the basis of the good quality of the digitization via optical character recognition and reliable information about the provenance, such as the date and place of publication.
The resulting data set contained over 500 billion words, mostly in English. This is far more than any single person could read: a fast reader would, without breaks for food and sleep, need 80 years to finish the books for the year 2000 alone.
Not all isolated strings of characters in texts are real words – some are common numbers, others abbreviations or typos. In fact, 51 percent of the character strings in 1900, and 31 percent in 2000, were ‘non-words’. ‘I really have trouble believing that’, admits Embleton. ‘If it’s true, it would really shake some of my foundational thoughts about English.’
By this count, the English language has grown by over 70 percent during the past 50 years, and around 8,500 new words are being added each year. Moreover, only about half of the words currently in use are apparently documented in standard dictionaries. ‘That high amount of lexical ‘dark matter’ is also very hard to believe, and would also shake some foundations’, says Embleton, adding ‘I’d love to see the data.’
In principle she can, because the researchers have made their database public. This will allow others to explore the huge number of potential questions it suggests, not just about word use but about cultural history. Michel and colleagues offer two such examples, concerned with fame and censorship.
They say that actors reach their peak of fame, as recorded in references to names, around the age of 30, while writers take a decade longer but achieve a higher peak. ‘Science is a poor route to fame’, they say. Physicists and biologists who achieve fame do so only late in life, while ‘even at their peak, mathematicians tend not to be appreciated by the public.’
Nation-specific subsets of the data can show how references to ideas, events or people drop out of sight due to state suppression. For example, the Jewish artist Marc Chagall virtually disappears from German writings in 1936-1944 (while remaining prominent in the English language), and ‘Trotsky’ and ‘Tiananmen Square’ similarly vanish in Russian and Chinese works respectively. The authors also look at trends in references to feminism, God, diet and evolution.
‘The ability, via modern technology, to look at just so much at once really opens horizons’, says Embleton. However, Hudson cautions that making effective use of such a resource will require skill and judgement, not just number-crunching.
‘How this quantitative evidence is generated – in response to what questions – and how it is interpreted are the most important factors in forming conclusions’, she says. ‘Quantitative evidence of this kind must always address suitably framed general questions, and employed alongside qualitative evidence and reasoning, or it will not be worth a great deal.’
Reference
1. Michel, J.-B. et al. Science doi:10.1126/science.1199644.
Thursday, December 09, 2010
Debye's dirty hands?
I have written a news story for Nature on new findings about the life of Peter Debye, who has been accused recently of colluding with the Nazis in the run-up to the Second World War. It’s very rich material (even if the new ‘revelations’ are rather indirect and add only a speculative element to the tale); I have written a piece on this for Chemistry World too, but had better wait for that to appear before posting it here. This pre-edited version is not as well structured as the final story, but contains more of the details and anecdotes, so here it is anyway. This is clearly an issue on which feelings run high, so I look forward (I think) to the feedback.
**********************************************
Peter Debye, the Dutch 1936 chemistry Nobel Laureate recently discredited by allegations of being a Nazi sympathizer, could in fact have been an anti-Nazi informer to the Allies during the approach to the Second World War, according to a new analysis of his private correspondence.
In a paper in the journal Ambix, retired chemist Jurrie Reiding in the Netherlands describes archival documents suggesting that Debye might have supplied information to a spy for the British intelligence agency MI6 in Berlin [1].
Although the new evidence is circumstantial, it adds to a mounting case for rehabilitating Debye’s name. When the Nazi links and accusations of anti-Semitism were asserted four years ago, two Dutch universities expunged Debye’s name from a research institute and an annual prize. The new paper ‘is an important and welcome contribution to the debate, which can help in arriving at a more balanced judgement’, says Ernst Homburg, a science historian at the University of Maastricht.
Debye, who worked for most of his pre-war career in Germany, became chairman of the German Physical Society (DPG) in 1937. Four years earlier, a law introduced by Hitler’s Nazi regime demanded the dismissal of all Jewish university professors. Among those who lost their posts was the pioneering nuclear physicist Lise Meitner at the University of Berlin.
In December 1938 the DPG board decided to expel the few remaining Jewish members. Debye sent a letter to members explaining this, citing ‘circumstances beyond our control’ and signing off with ‘Heil Hitler!’ ‘Under the circumstances of those days, it was almost impossible not to write such a letter’, says Homburg.
Nonetheless, this letter was described in an article titled ‘Nobel Laureate with dirty hands’, published in the Dutch newspaper Vrij Nederland in January 2006 in association with a book (in Dutch) called Einstein in Nederland by the journalist Sybe Rispens. The ensuing media controversy caused such alarm that the University of Utrecht removed Debye’s name from the institute for nanomaterials science, while the University of Maastricht in Debye’s home town withdrew its involvement in the annual Debye Prize for scientific research, sponsored by industrial benefactors the Hustinx Foundation.
This caused a storm of protest, not least from the researchers of the former Debye Institute in Utrecht. Chemist Héctor Abruña of Cornell University, where Debye worked after coming to the US in 1940, criticized the ‘rush to judgement’ and said that a university enquiry there found no evidence for the allegations.
As a result the Dutch Ministry of Education commissioned the Dutch Institute for War Documentation (NIOD) to investigate the Debye affair. Its report, released in 2007, softened the accusations to say that Debye had been guilty of ‘opportunism’ under the Nazis, but accused him of ‘keeping the back door open’ by secretly sustaining contacts with Nazi Germany while in the US.
All the same, in 2008 the Dutch government committee advised the universities of Utrecht and Maastricht to continue using Debye’s name, since the evidence of his ‘bad faith’ was equivocal. The Debye Institute at Utrecht was reinstated, and the Maastricht prize is due to be awarded again next year. However, according to historian of chemistry Peter Morris, who edits Ambix, ‘in the Netherlands and to a lesser extent the USA this affair severely damaged Debye’s reputation.’
Critics of the Dutch universities’ initial decision have cited various arguments why Debye should not be judged too harshly or rashly. When he was chosen by the resolutely anti-Nazi Max Planck to be director of the Kaiser Wilhelm Institute of Physics (KWIP) in Berlin – a post that he occupied from 1935 until 1939 – it was precisely because he was non-German and was thought able to resist Nazi interference. Debye insisted that the place be named the Max Planck Institute when it finally opened in 1938. When the Nazis objected, Debye covered the name carved in stone over the entrance with a wooden plank – a pun that worked in German too.
And Debye accepted his Nobel Prize against the explicit wishes of the Nazis, who had commanded all Germans not to do so. He helped Meitner escape to Holland in 1938, and the Nazis opposed Debye’s chairmanship of the DPG because they considered him too friendly towards Jews. In 1940 Debye sailed to the US to give a series of prestigious lectures at Cornell – where he then stayed until his death in 1966. He aided the US war effort enthusiastically, especially through his work on polymers and synthetic rubber.
‘There were already enough arguments for Debye’s ‘rehabilitation’ before this article’, says Homburg, who calls Risbens’ book ‘heavily flawed’. But now Reiding adds a new narrative to the defence.
Debye, he says, was a friend of Paul Rosbaud, an Austrian working at the KWIP in Berlin, who was recruited by the British secret service to supply scientific information including details of the development of the V1 and V2 rockets and the German attempts to develop an atomic bomb. Rosbaud, who loathed the Nazis, remained in Berlin throughout the war, although even now information about his activities under the codename ‘Griffin’ remain classified.
Because of his consultancy with the academic Berlin publisher Springer Verlag, Rosbaud was very well connected in German science and knew Debye since at least 1930. He too played a key role in getting Meitner out of Germany, and Debye maintained the relationship with Rosbaud after the war. ‘The close friendship between Rosbaud and Debye makes it almost unquestionable that Debye was an anti-Nazi’, Reiding says.
And he points out that, as testified by other scientists to the FBI in the 1940s, Debye would have been party to some highly sensitive information about the German war technology during his time in Berlin. ‘Therefore’, Reiding says, ‘the hypothesis that Debye was a secret informant for Rosbaud does not appear too bold.’
Although Morris thinks that ‘further evidence would be needed before this case could be proved beyond doubt’, he adds that ‘I feel that there was a rush to judgement that not only failed to take into account all the aspects of Debye’s complex life but also failed to give full weight to the ambiguous nature of life under Nazi rule.’
Others question whether the new details add much to the story. ‘There seem to be two camps: those who hate Debye and deplore his actions as president of the DPG, and those who think he was a saint’, says Henk Lekkerkerker of the Debye Institute. ‘Both opinions are misleading, and the professional historians paint a more subtle and accurate picture.’
Perhaps ultimately a clue to Debye’s position lies in a letter that he wrote to the physicist Arnold Sommerfeld in December 1939, just before he left Germany for good. His aim, he said, was ‘not to despair and always be ready to grab the Good which whisks by, without granting the Bad any more room than is absolutely necessary. That is a principle of which I have already made much use.’
1. J. Reiding, Ambix 57, 275-300 (2010).
**********************************************
Peter Debye, the Dutch 1936 chemistry Nobel Laureate recently discredited by allegations of being a Nazi sympathizer, could in fact have been an anti-Nazi informer to the Allies during the approach to the Second World War, according to a new analysis of his private correspondence.
In a paper in the journal Ambix, retired chemist Jurrie Reiding in the Netherlands describes archival documents suggesting that Debye might have supplied information to a spy for the British intelligence agency MI6 in Berlin [1].
Although the new evidence is circumstantial, it adds to a mounting case for rehabilitating Debye’s name. When the Nazi links and accusations of anti-Semitism were asserted four years ago, two Dutch universities expunged Debye’s name from a research institute and an annual prize. The new paper ‘is an important and welcome contribution to the debate, which can help in arriving at a more balanced judgement’, says Ernst Homburg, a science historian at the University of Maastricht.
Debye, who worked for most of his pre-war career in Germany, became chairman of the German Physical Society (DPG) in 1937. Four years earlier, a law introduced by Hitler’s Nazi regime demanded the dismissal of all Jewish university professors. Among those who lost their posts was the pioneering nuclear physicist Lise Meitner at the University of Berlin.
In December 1938 the DPG board decided to expel the few remaining Jewish members. Debye sent a letter to members explaining this, citing ’circumstances beyond our control’ and signing off with ‘Heil Hitler!’ ‘Under the circumstances of those days, it was almost impossible not to write such a letter’, says Homburg.
Nonetheless, when this letter was described in an article titled ‘Nobel Laureate with dirty hands’ published in the Dutch newspaper Vrij Nederland in January 2006, in association with a book (in Dutch) called Einstein in Nederland by the journalist Sybe Rispens, the ensuing media controversy caused such alarm that the University of Utrecht removed Debye’s name from the institute for nanomaterials science, while the University of Maastricht in Debye’s home town withdrew its involvement in the annual Debye Prize for scientific research, sponsored by industrial benefactors the Hustinx Foundation.
This caused a storm of protest, not least from the researchers of the former Debye Institute in Utrecht. Chemist Héctor Abruña of Cornell University, where Debye worked after coming to the US in 1940, criticized the ‘rush to judgement’ and said that a university enquiry there found no evidence for the allegations.
As a result the Dutch Ministry of Education commissioned the Dutch Institute for War Documentation (NIOD) to investigate the Debye affair. Its report, released in 2007, softened the accusations to say that Debye had been guilty of ‘opportunism’ under the Nazis, but accused him of ‘keeping the back door open’ by secretly sustaining contacts with Nazi Germany while in the US.
All the same, in 2008 a Dutch government committee advised the universities of Utrecht and Maastricht to continue using Debye’s name, since the evidence of his ‘bad faith’ was equivocal. The Debye Institute at Utrecht was reinstated, and the Maastricht prize is due to be awarded again next year. However, according to historian of chemistry Peter Morris, who edits Ambix, ‘in the Netherlands and to a lesser extent the USA this affair severely damaged Debye’s reputation.’
Critics of the Dutch universities’ initial decision have cited various arguments why Debye should not be judged too harshly or rashly. When he was chosen by the resolutely anti-Nazi Max Planck to be director of the Kaiser Wilhelm Institute of Physics (KWIP) in Berlin – a post that he occupied from 1935 until 1939 – it was precisely because he was non-German and was thought able to resist Nazi interference. Debye insisted that the place be named the Max Planck Institute when it finally opened in 1938. When the Nazis objected, Debye covered the name carved in stone over the entrance with a wooden plank – a pun that worked in German too.
And Debye accepted his Nobel Prize against the explicit wishes of the Nazis, who had commanded all Germans not to do so. He helped Meitner escape to Holland in 1938, and the Nazis opposed Debye’s chairmanship of the DPG because they considered him too friendly towards Jews. In 1940 Debye sailed to the US to give a series of prestigious lectures at Cornell – where he then stayed until his death in 1966. He aided the US war effort enthusiastically, especially through his work on polymers and synthetic rubber.
‘There were already enough arguments for Debye’s “rehabilitation” before this article’, says Homburg, who calls Rispens’s book ‘heavily flawed’. But now Reiding adds a new narrative to the defence.
Debye, he says, was a friend of Paul Rosbaud, an Austrian working at the KWIP in Berlin, who was recruited by the British secret service to supply scientific information including details of the development of the V1 and V2 rockets and the German attempts to develop an atomic bomb. Rosbaud, who loathed the Nazis, remained in Berlin throughout the war, although even now information about his activities under the codename ‘Griffin’ remains classified.
Because of his consultancy with the academic Berlin publisher Springer Verlag, Rosbaud was very well connected in German science and had known Debye since at least 1930. He too played a key role in getting Meitner out of Germany, and Debye maintained the relationship with Rosbaud after the war. ‘The close friendship between Rosbaud and Debye makes it almost unquestionable that Debye was an anti-Nazi’, Reiding says.
And he points out that, as testified by other scientists to the FBI in the 1940s, Debye would have been party to some highly sensitive information about the German war technology during his time in Berlin. ‘Therefore’, Reiding says, ‘the hypothesis that Debye was a secret informant for Rosbaud does not appear too bold.’
Although Morris thinks that ‘further evidence would be needed before this case could be proved beyond doubt’, he adds that ‘I feel that there was a rush to judgement that not only failed to take into account all the aspects of Debye’s complex life but also failed to give full weight to the ambiguous nature of life under Nazi rule.’
Others question whether the new details add much to the story. ‘There seem to be two camps: those who hate Debye and deplore his actions as president of the DPG, and those who think he was a saint’, says Henk Lekkerkerker of the Debye Institute. ‘Both opinions are misleading, and the professional historians paint a more subtle and accurate picture.’
Perhaps ultimately a clue to Debye’s position lies in a letter that he wrote to the physicist Arnold Sommerfeld in December 1939, just before he left Germany for good. His aim, he said, was ‘not to despair and always be ready to grab the Good which whisks by, without granting the Bad any more room than is absolutely necessary. That is a principle of which I have already made much use.’
1. J. Reiding, Ambix 57, 275-300 (2010).
Tuesday, November 30, 2010
Chemists to the rescue?
Here's my Crucible article for the December issue of Chemistry World, which arose when I chaired a recent talk by John Emsley at the RSC.
***************************
Can chemists save the world? In his new book, targeted at the 2011 Year of Chemistry and published by the RSC, John Emsley argues in his characteristically inspirational manner that chemical innovations in areas such as biofuels, food production and clean water treatment can deliver the promise of the book’s title: A Healthy, Wealthy, Sustainable World. Emsley makes no apologies about his crusading, even propagandizing agenda, for he rightly points out that many of the biggest global challenges, from climate change to the end of oil, demand the expertise of chemistry, making it potentially the key science of the twenty-first century.
But Emsley concedes that his survey of the wonderful things that chemists have achieved in sustainable technology – converting rapeseed oil to biodiesel or to plastics feedstocks, say – does not look in depth at the economic picture. It’s a frequent and valid objection to technical innovation that it is all very well, but how much does it cost in comparison to what we can do already? What’s the financial motivation, say, for China to abandon its abundant coal reserves for biofuels?
There is no blanket answer to such economic conundrums, but common to them all is the question of whether one can rely on market mechanisms to generate incentives for a desirable technology, or whether it should be nurtured by governmental or regulatory intervention. Here, as just about everywhere else right now, the issue is how ‘big’ government should be.
In the wake of the financial crisis, market fundamentalists sound less credible asserting that the market knows best, especially when it comes to societal benefits: the recent boom years were not so much generated by market mechanisms as bought on credit. But it seems equally clear that highly managed economies which subsidize unprofitable enterprises are unsustainable and risk stifling innovation. A middle course has been successfully steered by the German government’s investment in photovoltaic (PV) energy generation, where money for research and breaks for commercial companies are coupled to a concerted effort to build a market for solar power through a feed-in tariff: a guaranteed, highly competitive price for energy generated from solar panels and fed into the grid. This stimulus recognizes that new, desirable technologies may need a hand to get off the ground but need eventually to become independent. With government assistance, the German PV industry has created around 50,000 jobs, brought revenues of €5.6 billion in 2009, and made Germany the largest national source of PV power in the world. By 2020, up to 10 percent of Germany’s energy may be solar.
This is one reason why it is unrealistic to dismiss the prospects for an innovative technology on the basis that its (perhaps less desirable) rivals can currently do things more cheaply. There is a financial component to changing attitudes. Encouraging investment in a fledgling innovation can ultimately lower its price both by enabling efficiencies of scale and by supporting research into cost-cutting improvements. That was amply demonstrated by the Human Genome Project (HGP): the international decision that it was a Good Thing created the opportunity for new sequencing technologies that have reduced the cost and increased the speed of decoding an individual’s genome by orders of magnitude. Simply put, it became financially worthwhile for companies such as Illumina (spearheaded by chemists David Walt and Anthony Czarnik) to devise radical new sequencing methods. As a result, the economic hurdle to realizing the potential medical benefits of genome sequencing was lowered.
At the same time, the race between the publicly funded HGP and the rival private effort by Celera Genomics Inc., the company founded by entrepreneur Craig Venter, shows that competition can accelerate innovation. What’s more, through canny marketing the HGP engineered a favourable climate for investment and public endorsement, creating what economist Monika Gisler at ETH in Zurich and her coworkers have called a ‘social bubble’ [1]. They say that ‘governments can take advantage of the social bubble mechanism to catalyze long-term investments by the private sector, which would not otherwise be supported.’ Of course, there is a fine line between supportive publicity and hype. But this is another reminder that promising new technologies, like children, flourish best when they are neither left to fend for themselves nor mollycoddled indefinitely.
1. M. Gisler, D. Sornette & R. Woodward, preprint http://arxiv.org/abs/1003.2882 (2010).
Monday, November 29, 2010
Flight of fantasy
The chorus of disapproval that greeted Howard Flight’s remark about how cuts in child benefits will encourage ‘breeding’ among the lower social classes (or, as Flight called them, ‘those on benefits’) has left the impression that such comments are now to be judged in a historical vacuum, purely on the basis of whether or not they accord with a current consensus on ‘appropriateness’, or what some would sneeringly call political correctness. This solipsistic perspective is dangerously shallow.
The media coverage has largely ignored the obvious connection between Flight’s comment and the argument for eugenics originally advanced by Darwin’s cousin Francis Galton in the late nineteenth century and pursued by intellectuals on both the left and the right for a considerable part of the twentieth. Galton voiced explicitly what Flight had at least the restraint (or the nous) only to imply: given the chance, the inferior stock among the lower classes will breed like rabbits and thereby corrupt the species. Galton worried about the ‘yearly output by unfit parents of weakly children who are constitutionally incapable of growing up into serviceable citizens, and who are a serious encumbrance to the nation.’ If the harshness of their circumstances were to be alleviated by welfare, he said, then natural selection would no longer constrain the proliferation of ‘bad genes’ throughout society. In a welfare state, the gene pool of humankind would therefore degenerate.
Some eugenicists felt that the answer was to encourage the genetically superior echelons of society to breed more: educated, middle-class women (who were beginning to appreciate that there might be more to life than endless child-rearing) had a national duty to produce offspring. Some biologists, such as Julian Huxley and J.B.S. Haldane, welcomed the prospect of ectogenesis – gestation of fetuses in artificial wombs – so that it might liberate ‘good’ mothers from that onerous obligation (presumably nannies could take over once the child was ‘born’). Even conservatives who regarded such technologies with distaste felt compelled to agree that they offered the best prospect for maintaining the vitality of the species.
This approach was called ‘positive eugenics’: redressing the imbalance by propagating good genes. It is one that Flight apparently endorses, in his concern that we should not discourage the middle classes from breeding by taking away their cash perks. But the other option, also advocated by Galton, was negative eugenics: preventing breeding among the undesirables. In the many US states that introduced forced-sterilization programmes in the early twentieth century (and which ultimately sterilized around 60,000 people), this meant the mentally unstable or impaired (‘idiots and imbeciles’), as well as perhaps the ‘habitually’ unemployed, criminals and drunkards. In Nazi Germany it came also to mean those whose ‘inferiority’ was a matter of race. (There was no lack of racism in the US programmes either.)
Liberal eugenicists such as Haldane and Huxley were rather more nuanced than Flight. They argued that eugenic policies made sense only on a level playing field: while social inequalities held individuals back, there was no guarantee that ‘defective’ genes would be targeted. But once that levelling was effected, what Huxley referred to chillingly as ‘nests of defective germ plasm’ should be shown no mercy. As he put it, “The lowest strata, allegedly less well endowed genetically, are reproducing relatively too fast. Therefore birth-control methods must be taught them; they must not have too easy access to relief or hospital treatment lest the removal of the last check on natural selection should make it too easy for children to be reproduced or to survive; long unemployment should be a ground for sterilization, or at least relief should be contingent upon no further children being brought into the world.” Flight was at least socially aware enough to pull his punches in comparison to this.
Although it was mostly the taint of Nazism that put paid to eugenics (not to mention the emergence of the concept of human rights), the scientific case was eventually revealed to be spurious too, not least because there is no good reason to think that complex traits such as intelligence and sociability have isolable genetic origins that can be refined by selective breeding.
Yet the survival nonetheless of Galton’s ideas among the likes of Flight and, in previous decades, Sir Keith Joseph, should not be mistaken for a failure to keep abreast of the science. I should be surprised if Flight has even heard of Galton, and I suspect he would be surprised himself to find his remark associated with a word – eugenics – that now is (wrongly) often considered to be a product of fascist genocidal fantasies. Galton was after all only providing pseudo-scientific justification for the prejudices about breeding that the aristocracy had espoused since Plato’s time, and it is surely here that the origins of Flight’s remark lie. That is why what was evidently for him a casual truism represents more than just a lapse of decorum, sensitivity or political acumen. It implies that David Cameron does not merely have the poor judgement to favour loose cannons, but that he is still heir to a deep-rooted tradition of class-based bigotry.
Friday, November 26, 2010
Funny things that happened on my way to the Forum
This Sunday I appear on the BBC World Service’s ‘ideas’ programme The Forum. In principle I am there to discuss The Music Instinct, but it’s actually a round table discussion about the issues raised by all the guests; my fellows on this occasion are the bio-nanotechnologist Sam Stupp and the polemicist and writer P. J. O’Rourke, whose new book is the characteristically titled Don’t Vote: It Only Encourages the Bastards. I have followed Sam’s work for nigh on two decades: he designs peptides that self-assemble into nanostructures which can act as biodegradable scaffolds for tissue regeneration. It is very neat, and I relished the opportunity to see Sam again. O’Rourke embodies the gentlemanly, amusing Republican whose spine-chilling views on such things as gun laws and the Tea Party are moderated by such charm and worldliness (he is no friend of US xenophobes) that you feel churlish to take issue. I was simply happy to establish that his opposition to Big Government applies only to nations and not to his own home. He is also rather funny, as right-leaning polemicists often are when they are not swivel-eyed. In any event, the programme deserves to be better known – rarely does one get the chance to discuss ideas at such leisure in the broadcast media, even on the beloved BBC.
PS: I just got an update with a direct link to the site for this programme. It includes mugshots, but I can't help that now. Gone are the days when it didn't matter how you looked on the radio.
Monday, November 15, 2010
Beyond the edge of the table
Here’s my Crucible column for the November Chemistry World. It gets a bit heavy-duty towards the end – not often now (happily) that I have to go and read (and pretend to understand) textbooks about quantum electrodynamics. But by happy coincidence, I was introduced recently to the numerology (and Pauli’s enthusiasm for it) by a talk at the Royal Institution by Arthur I. Miller, which I had the pleasure of chairing.
***********************************************************
Does the Periodic Table run out? Folk legend asserts that Richard Feynman closed the curtains on the elements after the hypothetical element 137, inelegantly named untriseptium, or more appealingly dubbed feynmanium in his honour.
As physicists (and numerologists) will know, that is no arbitrary cutoff. 137 is an auspicious number – so much so that Feynman himself is said to have recommended that physicists display it prominently in their offices as a reminder of how much they don’t know. Wolfgang Pauli, whose exclusion principle explained the structure of the Periodic Table, was obsessed with the number 137, and discussed its significance over fine wine with his friend and former psychoanalyst Carl Jung – a remarkable relationship explored in Arthur I. Miller’s recent book Deciphering the Cosmic Number (W. W. Norton, 2009). When Pauli was taken ill in Zürich with pancreatic cancer in 1958 and was put in hospital room number 137, he was convinced his time had come – and he was right. For Carl Jung 137 was significant as the number associated with the Jewish mystical tradition called the Kabbalah, as pointed out to physicist Victor Weisskopf by the eminent Jewish scholar Gershom Scholem.
Numerology was not confined to mystics, however, for the ‘explanation’ of the cosmic significance of 137 offered by the astronomer Arthur Eddington was not much more than that. Yet Eddington, Pauli and Feynman were captivated by 137 for the same reason that prompted Feynman to suggest it was where the elements end. For the inverse, 1/137, is almost precisely the value of the so-called fine-structure constant (α), the dimensionless quantity that defines the strength of the electromagnetic interaction – it is in effect the ratio of the square of the electron’s charge to the product of the speed of light and the reduced Planck’s constant.
Why 137? ‘Nobody knows’, Feynman admitted, adding that ‘it’s one of the greatest damn mysteries of physics: a magic number that comes to us with no understanding by man. You might say the hand of God wrote that number, and we don’t know how He pushed his pencil.’ It’s one of the constants that must be added to fundamental physics by hand. Werner Heisenberg was convinced that the problems then plaguing quantum theory would not go away until 137 was ‘explained’. But neither he nor Pauli nor anyone else has cracked the problem. The fact that the denominator of the fine-structure constant is not exactly 137, but around 137.036, doesn’t diminish the puzzle, and now this constant is at the centre of arguments about ‘fine-tuning’ of the universe: if it were just 4 per cent different, atoms (and we) could not exist.
But was Feynman right about untriseptium? His argument hinged on the fact that α features in the solution of the Dirac equation for the ground-state energy of an atom’s 1s electrons. In effect, when the atomic number Z is equal to or greater than 1/α, the energy becomes imaginary, or in other words, oscillatory – there is no longer a bound state. This doesn’t in itself actually mean that there can be no atoms with Z>137, but rather, there can be no neutral atoms.
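Feynman’s point-charge argument is easy to reproduce. The Dirac equation gives the 1s ground-state energy of a one-electron, point-nucleus atom as E = mc²√(1 − (Zα)²), which turns imaginary once Zα exceeds 1, i.e. for Z above about 137. A minimal sketch (my own illustration, using the CODATA value of α, not anything from the column):

```python
import math

ALPHA = 1 / 137.035999  # fine-structure constant (CODATA value)

def dirac_1s_energy(Z):
    """1s energy of a point-nucleus, one-electron Dirac atom,
    in units of the electron rest energy m*c^2.
    Returns None when (Z*alpha)^2 > 1, i.e. the energy is imaginary
    and there is no bound state in this idealized model."""
    x = 1 - (Z * ALPHA) ** 2
    return math.sqrt(x) if x >= 0 else None

for Z in (1, 100, 137, 138):
    E = dirac_1s_energy(Z)
    if E is None:
        print(Z, "-> no bound state (energy imaginary)")
    else:
        print(Z, "->", round(E, 4))
```

Running this shows the energy dropping steadily with Z until, between Z = 137 and Z = 138, the bound state vanishes altogether — which is exactly where the point-nucleus idealization, as the next paragraph explains, breaks down.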
However, Feynman’s argument was predicated on a Bohr-type atom in which the nucleus is a point charge. A more accurate prediction of the limiting Z has to take the nucleus’s finite size into account, and the full calculation changes the picture. Now the energy of the 1s orbital doesn’t fall to zero until around Z=150; but actually that is in itself relatively trivial. Even though the bound-state energy becomes negative at larger Z, the 1s electrons remain localized around the nucleus.
But when Z reaches around 173, things get complicated [1]. The bound-state energy then ‘dives’ into what is called the negative continuum: a vacuum ‘sea’ of negative-energy electrons predicted by the Dirac equation. Then the 1s states mix with those in the continuum to create a bound ‘resonance’ state – but the atom remains stable. If the atom’s 1s shell is already ionized, however, containing a single hole, then the consequences are more bizarre: the intense electric field of the nucleus is predicted to pull an electron spontaneously out of the negative continuum to fill it [2]. In other words, an electron-positron pair is created de novo, and the electron plugs the gap in the 1s shell while the positron is emitted.
This behaviour was predicted in the 1970s by Burkhard Fricke of the University of Kassel, working with nuclear physicist Walter Greiner and others [1]. Experiments were conducted during that and the following decade using ‘pseudo-atoms’ – diatomic molecules of two heavy nuclei created in ion collisions – to see if analogous positron emission could be observed from the innermost molecular rather than atomic orbitals. It never was, however, and exactly what would happen for Z>173 remains unresolved.
All the same, it seems that Feynman’s argument does not after all prohibit elements above 137, or even above 173. ‘The Periodic System will not end at 137; in fact it will never end!’, says Greiner triumphantly. Whatever mysteries are posed by the spooky 137, this is apparently not one of them.
1. B. Fricke, W. Greiner & J. T. Waber, Theor. Chim. Acta 21, 235-260 (1971).
2. W. Greiner & J. Reinhardt, Quantum Electrodynamics 4th edn (Springer, Berlin, 2009).
***********************************************************
Some like it hot
I have been slack with my postings over the past couple of weeks, so here comes the catching up. First, a Muse for Nature News on a curious paper in PNAS on the origin of life, which seemed to have a corollary not explored by the authors… (I can’t link to the PNAS paper, as it’s not yet been put online, and in the meantime languishes in that peculiar limbo that PNAS commands.)
Heat may have been necessary to ensure that the first prebiotic reactions didn’t take an eternity. If so, this could add weight to the suggestion that water is essential for life in the cosmos.
Should we be surprised to be here? Some scientists maintain that the origin of life is absurdly improbable – Nobel laureate biologist George Wald baldly stated in 1954 that ‘one has only to contemplate the magnitude of [the] task to concede that the spontaneous generation of a living organism is impossible’ [1]. Yet others look at the size of the cosmos and conclude that even such extremely low-probability events are inevitable.
The apparent fine-tuning of physical laws and fundamental constants to enable life’s existence certainly presents a profound puzzle, which the anthropic principle answers only through the profligate hypothesis of multiple universes of which we have the fortune to occupy one that is habitable. But even if we take the laws of nature as we find them, it is hard to know whether or not we should feel fortunate to exist.
One might reasonably argue that the question has little meaning while we still have only a few hundred worlds to compare, about most of which we know next to nothing (not even whether there is, or was, life on our nearest neighbour). But one piece of empirical evidence we do have seems to challenge the notion that the origin of terrestrial life was a piece of extraordinarily good fortune: the geological record implies that life began in a blink, almost the instant the oceans were formed. It is as if it was just waiting to happen – as indeed some have suggested [2]. While Darwinian evolution needed billions of years to find a route from microbe to man, it seems that going from mineral to microbe needs barely a moment.
According to a paper in the Proceedings of the National Academy of Sciences USA by Richard Wolfenden and colleagues at the University of North Carolina, that may be largely a question of chemical kinetics [3]. Just about all the key biochemical processes in living organisms are speeded up by enzyme catalysis; otherwise they would happen too slowly or indiscriminately to make metabolism and life feasible. Some key processes, such as reactions involved in biosynthesis of nucleic acids, happen at a glacial pace without enzymes. How, then, did the earliest living systems bootstrap themselves to the point where they could sustain and reproduce themselves with enzymatic assistance?
The researchers think that temperature was the key. They point out not only that reactions speed up with temperature more than is commonly appreciated, but that the slowest reactions speed up the most: a change from 25 °C to 100 °C, for example, increases the rate of some prebiotically relevant reactions 10-million-fold.
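As a back-of-envelope check (my own illustration, not a calculation from the paper), the Arrhenius equation k = A·exp(−Ea/RT) tells us what activation energy a 10-million-fold enhancement over that temperature range implies:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def arrhenius_speedup(Ea, T1, T2):
    """Rate enhancement k(T2)/k(T1) for a reaction with
    activation energy Ea (J/mol), assuming a constant prefactor A."""
    return math.exp(Ea / R * (1 / T1 - 1 / T2))

# What Ea gives a 10-million-fold speedup going from 25 C to 100 C?
T1, T2 = 298.15, 373.15
Ea = R * math.log(1e7) / (1 / T1 - 1 / T2)
print(f"Ea = {Ea / 1000:.0f} kJ/mol")                     # ≈ 199 kJ/mol
print(f"speedup: {arrhenius_speedup(Ea, T1, T2):.2e}")    # recovers 1.00e+07
```

An activation energy of roughly 200 kJ/mol is indeed characteristic of very slow, high-barrier reactions — precisely the sort that, in the scenario below, a hot early Earth would have rescued from glacial timescales.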
There’s reason to believe that life may have started in hot water, for example around submarine volcanic vents, where there are abundant supplies of energy, inorganic nutrients and simple molecular building blocks. Some of the earliest branches in the phylogenetic tree of life are occupied by thermophilic organisms, which thrive in hot conditions. A hot, aqueous origin of life is probably now the leading candidate for this mysterious event.
This alone, then, could reduce the timescales needed for a primitive biochemistry to get going from millions of years to just tens of years. What’s more, say Wolfenden and colleagues, some of the best non-enzyme catalysts of slow metabolic reactions, which might have served as prebiotic proto-enzymes, become more effective as the temperature is lowered. If that’s what happened on the early Earth, then once catalysis took over from simple temperature-induced acceleration, it would not have suffered as the environment cooled or as life spread to cooler regions.
If this scenario is right, it could constrain the kinds of worlds that can support life. We know that watery worlds can do this; but might other simple liquids act as solvents for different biochemistries? The candidates generally have lower freezing points than water: the liquid hydrocarbons of Saturn’s moon Titan, ammonia (on Jupiter, say), formamide (HCONH2) or water-ammonia mixtures. One can enumerate reasons why in some respects these ‘cold’ liquids might be better solvents for life than water [4]. But if the rates of prebiotic reactions were a limiting factor in life’s origin, it may be that colder seas would never move things along fast enough.
Hotter may not be better either: quite aside from the difficulty of imagining plausible biochemistries in molten silicates, complex molecules would tend more readily to fall apart in extreme heat both because bonds snap more easily and because entropy favours disintegration over union. All of which could lend credence to the suggestion of biochemist Lawrence Henderson in 1913 that water is peculiarly biophilic [5]. In the introduction to a 1958 edition of Henderson’s book, Wald wrote ‘we now believe that life… must arise inevitably wherever it can, given enough time.’ But perhaps what it needs is not so much enough time, but enough heat.
References
1. G. Wald, Sci. Am. 191, 44-53 (1954).
2. H. J. Morowitz & E. Smith, Complexity 13, 51-59 (2007).
3. R. B. Stockbridge, C. A. Lewis Jr, Y. Yuan & R. Wolfenden, Proc. Natl Acad. Sci. USA doi:10.1073/pnas.1013647107.
4. S. A. Benner, in Water and Life (eds R. M. Lynden-Bell, S. Conway Morris, J. D. Barrow, J. L. Finney & C. L. Harper Jr), Chapter 10 (CRC Press, Boca Raton, 2010).
5. L. J. Henderson, The Fitness of the Environment (Macmillan, New York, 1913).
Wednesday, October 27, 2010
Beanbag robotics
Here’s a neat idea that I’ve written up for my Material Witness column in the November issue of Nature Materials.
It’s a commonplace observation in robotic engineering that some of the hardest tasks for robots are the ones we do without thinking: balancing upright, say, or catching a ball. Even the simple feat of picking up objects, when considered as a problem in control systems engineering, becomes a formidable challenge. How should we position the fingers on approach, where should we grip the object, how much pressure should we apply? Answering these questions generally requires exquisite feedback between vision, motor control, and tactile sensing, not to mention (in our case) a fair degree of intuition and training.
The ingenuity that has gone into solving these problems in robotics is exhilarating, as exemplified by the very recent reports in this journal of pressure-sensing ‘smart skin’ [1,2]. But these solutions tend to be predicated on the assumption that a robotic hand will follow the human prototype in having several gripping fingers. The widespread use of this design in the animal world testifies to its virtues, but there’s no escaping the demands it makes on actuation, sensing and feedback.
Now Eric Brown of the University of Chicago and his coworkers have described a new design for a robotic gripper that dispenses altogether with these difficulties by replacing active control with passive adaptability. Their device has no fingers at all, but instead uses a soft mass that moulds itself to the shape of the object to be gripped [3]. The crucial aspect of the design is that, once configured in this way simply by pressing onto the object, the gripper undergoes a transition from soft to hard, becoming a rigid body encasing enough of the object to hold it with, in general, an appreciable force.
That is achieved by filling the body of the gripper – an elastic latex bag – with granular material, such as tiny glass spheres or, in one prototype, ground coffee. Rigidification of the conformable grainy mass is then induced by evacuating the air between the grains, causing slight compaction. This is sufficient to trigger a jamming transition: the grains enter a collective state of immobility, like that in a blocked funnel, which, as Brown’s coauthor Heinrich Jaeger explains in another preprint [4], is a non-equilibrium state directly analogous to a glass. Indeed, such a packing-induced transition between solidity and fluidity is familiar to anyone who has ever opened a vacuum-packed packet of coffee.
Once rigid, the gripper holds an object by a combination of three mechanisms: friction, suction caused by deformation of the jammed bag as it lifts, and geometrical ‘wrap-around’ interlocking. The resultant gripping force depends on the geometry of the object, but a whole variety of forms, from steel springs to raw eggs, can be securely held. What is more, the device works in the wet, and can grip several different objects at once while retaining their orientation. Much as in the case of walking robots [5], it shows how smart use of passive control can greatly simplify the engineering problem.
References
1. Takei, K. et al., Nat. Mater. 9, 821-826 (2010).
2. Mannsfeld, S. C. B. et al., Nat. Mater. 9, 859-864 (2010).
3. Brown, E. et al., preprint http://arxiv.org/abs/1009.4444.
4. Jaeger, H. & Liu, A. J., preprint http://arxiv.org/abs/1009.4874.
5. Collins, S. H., Wisse, M., Ruina, A. & Tedrake, R., Science 307, 1082-1085 (2005).
Tuesday, October 26, 2010
Prospects for the Science Book Prize
I’ve just put up a more expansive comment on the Prospect blog about the demise of the Science Book Prize. Sob.
Friday, October 22, 2010
Under the bridge
I was recently sent this striking photo of a pattern in melting ice by Georg Warning in Konstanz. He asked if I’d seen anything like it in my research for The Self-Made Tapestry, having noticed an apparent similarity to my picture there of Marangoni convection. That venerable tome has now been updated as Nature’s Patterns, in which I include a discussion of ice erosion patterns called penitentes, found in the Andes. Penitentes are much more strongly peaked, but it sounds to me as though the early stages of growth might resemble something like this. In the third book of the trilogy (Branches) I say the following:
The snowfields of the Andes experience a kind of erosion process that creates one of nature’s strangest spectacles. The high glaciers here can become moulded into a forest of ice spires, typically between 1 and 4 metres high, called penitentes because of their resemblance to a throng of white-hooded monks. Charles Darwin saw these eerie formations in 1835 en route from Chile to Argentina. ‘In the valleys there were several broad fields of perpetual snow’, he wrote in The Voyage of the Beagle. ‘These frozen masses, during the process of thawing, had in some parts been converted into pinnacles or columns, which, as they were high and close together, made it difficult for the cargo mules to pass. On one of these columns of ice, a frozen horse was sticking as on a pedestal, but with its hind legs straight up in the air. The animal, I suppose, must have fallen with its head downward into a hole, when the snow was continuous, and afterwards the surrounding parts must have been removed by the thaw.’
Darwin remarked that the locals believed them to be formed by wind erosion. But the process is more complicated than that, representing a classic case of pattern formation by self-amplifying feedback. The air at these great heights is so dry that sunlight falling on the ice transforms it straight into water vapour rather than melting it into liquid water. A small dimple that forms in the smooth ice surface by evaporation acts as a kind of lens that focuses the sun’s rays into the centre, and so it is excavated more quickly than the surrounding ice. It’s a little like diffusion-limited aggregation or dendritic growth in reverse: a ‘fingering’ instability penetrates into the ice rather than pushing outwards from the surface.
The process can be accelerated by a fine coating of dirt on the snow surface. As the troughs deepen they expose clean snow that is prone to further evaporation, whereas dirt in the old snow at the peaks covers the ice crystals like a cap and insulates them. You might expect that, on the contrary, snow or ice will melt faster when dirty than when clean, because the darker material will absorb more sunlight. But whether a layer of dirt acts primarily as an insulator or an absorber depends on how thick it is.
That last comment about dirt seems to establish the link, since evidently dirt in the ice traces out the ridges in this case. The underside of this ice bridge is presumably never exposed to the direct rays of the sun, but all the same there is probably some analogous process at play here.
Thursday, October 21, 2010
None shall have prizes
Hurrah for Nick Lane, whose Life Ascending won the Royal Society Science Book Prize last night. If anyone there was in doubt that Nick’s book deserved the award, it became crystal clear during the short readings by each author before the announcement (a first for this prize) that his tight, elegant and vivid prose put him ahead of the others. Shame on me for not mentioning Nick’s book in my round-up of the year's science books in the Sunday Times last year.
But the ceremony seemed to me curiously muted, which perhaps reflects the fact that it may be the last: the Royal Society has said it cannot continue funding the prize without a sponsor, and has been unable to find one. This is tragic and baffling. The financial cost can’t be onerous: the glitzy award ceremony was ditched some years back, and there can’t be many other costs except for the modest prize money itself. Besides, as Georgina Ferry said to me recently apropos the also (more or less) defunct Association of British Science Writers Awards, it’s not about the money anyway: the winners would be just as pleased (well, almost) with the recognition alone. As well as the big literary prizes, just about every genre of fiction and non-fiction has its awards – it would be sad indeed if science writers did not, not least because this sends out the message that no one cares much about what they do. Yes, I know we writers are insecure, and that prizes are in any case mostly capricious and invidious beauty contests – but now that the science book prize looks set to vanish, it is clearer to me than ever that what I cared about was not the thought of winning it but the mere knowledge that it was there. And as I tried, in clumsy terms, to say to the BBC, this award was a way of getting a conversation going about how and why science is communicated, and about the roles of science in society. Are the big pharma or IT companies not so keen, even in these straitened times, to see that conversation happen that they can’t find a bit of spare cash?
Tuesday, October 05, 2010
Music on the brain
There was a nice conference on ‘music and the brain’ here in London last weekend, and I have a report on it on Nature News. Here’s the longer version.
The emotions teeming inside the works of the Romantic composers may have neurological explanations, as a recent meeting explored.
It’s not hard to understand why Robert Schumann should have been selected as the focus of a meeting called 'The Musical Brain', which took place last weekend in London [1]. Not only is this the 200th anniversary of the German composer’s birth, but his particular ‘musical brain’ gives neuroscientists plenty to think about.
For one thing, Schumann suffered from the neurological condition called focal dystonia – a loss of muscle control that afflicts an estimated 1 in 100 professional musicians and ended Schumann’s hopes to be a concert pianist. And he seems also to have struggled with severe bipolar disorder, which apparently dictated the rhythm of his creativity and left him confined to an asylum for the last two years of his life.
Focal dystonia is sometimes called ‘musician’s cramp’, but it is not primarily a muscular problem: it begins in the brain [2]. As neuroscientist Jessica Grahn of Cambridge University explained, it stems from the way intense musical practice can over-inflate the mental representation of the relevant part of the body (usually the fingers, although it can affect lip control in brass players). Once the neural representations of the fingers overlap, they can no longer be controlled independently.
This typically manifests itself as a stiffening or curling-up of some fingers. The American pianist Leon Fleisher lost the use of his right hand in this way in 1963, and was restricted for decades to the repertoire for left hand only (much of it written for the pianist Paul Wittgenstein who lost his right arm in World War I). Although dystonia is a consequence of over-practice (or as Fleisher says, inappropriate practice techniques), there may also be a genetic predisposition to it – it is more common, for example, among men. It’s precisely because it is a neural rather than a muscle problem that dystonia is so hard to treat, and indeed there is still no genuine cure.
Schumann succumbed to this excruciating condition in his right middle finger at the age of 21 [3]. He used a home-made contraption to stretch the finger, but it may have done more harm than good. He even composed an extremely difficult piece, his Toccata Opus 7, that avoids the use of the middle finger entirely (hear it here). ‘I was hoping to convince someone to play it at the meeting’, says Grahn, ‘but it’s a bear, so no luck.’
With his performing career stalled, Schumann focused on composing – which, according to neuroscientist Eckart Altenmüller, a specialist on focal dystonia, was for us ‘a blessing, because it allowed his creative talent to be developed to masterful perfection’ [3]. But that was probably little consolation to poor Schumann, particularly as things got far worse for him. Towards the end of his life, he heard voices and was tormented by visions of angels and demons. Fearful that he might harm his wife Clara, in 1854 he attempted to drown himself in the Rhine, only to be rescued by boatmen. That was when he voluntarily entered the asylum where he stayed until his death.
Not everyone agrees that Schumann was bipolar: a recent biographer, John Worthen, argues that he exhibited no serious mental disturbance until the end of his life, when his psychological disintegration could have been caused by tertiary syphilis [4]. Alternatively, it has been argued that Schumann’s final ‘madness’ looks like a case of mercury poisoning, caused by the mercury medication then used to treat syphilis. But psychiatrist (and concert pianist) Richard Kogan has argued that Schumann’s well documented spells of wild creativity and sleeplessness interspersed with periods of lethargy look like a classic case of bipolar disorder.
If so, he is by no means unique among composers in wrestling with mental illness: Mozart, Beethoven, Tchaikovsky and Leonard Bernstein all seem to have done so. All of which raises the question whether we can hear the emotional turmoil in what they wrote. It’s not hard to imagine so: music critic Stephen Johnson, who introduced the life and work of Schumann at the meeting (and also has bipolar disorder), says of Schumann’s fiendish Toccata that ‘it seems exuberant, it seems it’s flying and it’s very exciting – but it’s breathless, it’s on the edge of something frightening.’
It’s not obvious, however, that we should infer a composer’s state of mind from the music. The German composer Paul Hindemith felt that, if we believed that, the leaps of emotional tone classical compositions often exhibit would compel us to be diagnosing mental disorder all the time, while even the febrile Tchaikovsky doubted that composers express their mood at the actual moment of composition. Take Mozart’s wickedly playful A Musical Joke (K.522): it was apparently the first piece he composed after his father died.
But nonetheless there can be no doubt that music does express emotion – indeed, it is one of the most powerful emotional vehicles in all of human creativity, which seems to be one reason why it can be so effective in therapeutic contexts. It was an interest in the use of music in learning and therapy, says music psychologist Katie Overy of Edinburgh University, ‘that forced me to get into the emotional aspects’.
While acknowledging that musical expression is multi-faceted, she argues that current neurological studies suggest that the activation of mirror neurons – ‘empathy circuits’ that fire both when we watch another person perform an action and when we perform it ourselves – offers a clue about how music works [5].
It may be, she says, that when we hear music, we can ‘read’ it as we would read indicators of emotional state in another person’s vocal or physical gestures. ‘Happy’ music is typically up-tempo and high-pitched, while ‘calm’ or ‘sad’ music tends to be soft, slow and low-pitched [6], because of the way these acoustic qualities mimic the actions and voices of people in those emotional states – an observation that seems to hold across cultures, as Stefan Koelsch of Sussex University, another speaker at the meeting, and his coworkers have shown recently [7].
‘Music has the capacity to tap into these qualities and expand on them’, says Overy. Pianist Ian Brown illustrated during her talk how, for example, musical expressivity involves the mimicry of singing with legato (smoothly connected notes) and speech-like phrasing. The composer and performer can then add to this effect by deploying culture-specific structures (such as major/minor keys; see here) or unexpected rhythms and harmonies: Koelsch showed that musical ‘surprises’ can elicit the same neurological signals as other types of surprise [8].
In this respect, then, support may be emerging for the suggestion of philosopher Susanne Langer that music mimics the dynamics of emotion itself – or, as psychologist Carroll Pratt put it in 1931, that ‘music sounds the way emotions feel’.
References
1. The Musical Brain: Arts, Science and the Mind, St John’s Smith Square, London, 2-3 October 2010.
2. E. Altenmüller & H.-C. Jabusch, J. Hand Therapy 22, 144-155 (2009).
3. E. Altenmüller, in J. Bogousslavsky & F. Boller (eds), Neurological Disorders in Famous Artists (Karger, Basel, 2005).
4. J. Worthen, Robert Schumann: Life and Death of a Musician (Yale University Press, New Haven, 2007).
5. I. Molnar-Szakacs & K. Overy, SCAN 1, 235-241 (2006).
6. L. L. Balkwill & W. F. Thompson, Music Perception 17, 43-64 (1999).
7. T. Fritz et al., Curr. Biol. 19, 1-4 (2009).
8. S. Koelsch, T. Fritz & G. Schlaug, NeuroReport 19, 1815-1819 (2008).
Monday, October 04, 2010
The Corrections
Somehow I suspect that Jonathan Franzen doesn’t need me to feel his pain. But all the same, I do. He has just demanded the shredding of something like 80,000 copies of the UK edition of his book Freedom because the wrong version of the proofs was used for the final printing, containing lots of little typos and omissions of corrections. Several reviewers have admitted that they’d never have noticed the difference, but that’s not the point. It’s not so much about perfectionism as a kind of pride. I have never, like Franzen, taken nine years to write a book, and I don’t have the ability, and probably not the inclination, to choose words as carefully and precisely as he evidently does. But all the same, I know that errors introduced in the production process feel as though a two-year-old has just scribbled over your pages – like mindless or wilful destruction. I know this is unfair – no one in the production process is trying to do other than perform their job well – but that’s how it feels. What is particularly galling is that, unless you’re Franzen, once these errors have happened, you’re stuck with them forever. It arouses that childhood feeling of a terrible injustice that you are utterly powerless to rectify. And it happens in the swanky hardback version, the version that is meant (unlike the paperback) to be an object of beauty. There are one or two pages of my previous books that I still mustn’t look at for fear that I’ll start fuming all over again.
There have been times when I have been driven to conclude that, if you leave typesetters the slightest opening for introducing a mistake, they’ll seize it. Many times I have said to myself that I would in future always insist on seeing the final, final version of the proofs before they go off for printing, only to feel, when the time came, that this would seem just too much like the neurotic author – and then to regret not doing so. It does amaze me that typesetters will interpret letters in handwritten proof corrections in such a way as to turn a perfectly obvious and ordinary word into gibberish – sometimes you can’t help feeling they are just having a laugh. And publishers often seem to feel no need to double-check corrections, or so it seems. Oh, I’m sure typesetters must be confronted with some real nightmares sometimes – pages covered in wild scribbles connected by a maze of looping arrows. I have occasionally done them no favours myself. But there just don’t seem to be enough checks built into the publishing process, which seems bizarre given how tough it is to get a book published and how cautious publishers have become about commissioning.
Tuesday, September 28, 2010
Return to Chartres
About 18 months ago I went to Chartres for the filming of a documentary about the Gothic cathedrals for Nova. The documentary is now finished, and airs in October in the US. I’ve no idea how it ended up, but there is an outtake available here which bodes well. I’m glad they managed to find a use for this footage, since it wasn’t easy taking it: up in the galleries we had to keep persuading the organist to postpone his practice, while out on the front steps we had to placate the resident alcoholics and grab takes between rain squalls. I’d frozen my gonads off by the end of it. That, no doubt, is all very thirteenth-century.
Friday, September 24, 2010
The prospect for October
Here’s the full-cream version of my Lab Report for Prospect in October.
The IPCC is in a bind. There are good arguments for reforming the way it operates, not least improving the efficiency and transparency of its review process. A recent independent assessment by the InterAcademy Council, representing all the world’s major science academies, agreed with that but concluded that the IPCC’s scientific conclusions are reliable and that it has generally worked well on a shoestring. But should its chair, Rajendra Pachauri, stay? Pachauri has been pilloried for errors that led to unjustified forecasts about melting of the Himalayan glaciers – a bad mistake, but negligible in the grand scheme. He has also been unjustly smeared over alleged conflicts of interest. There is nothing here to warrant resignation.
But Pachauri’s leadership during the IPCC’s tribulations of the past year has not been inspiring, and more to the point, all leaders grow stale eventually. A change could bring fresh vigour and restore public confidence. Yet such is the aura of distrust fomented by the smear campaign that it would now be all but impossible for Pachauri to step down without being seen to validate climate sceptics’ criticisms. We are now in a mirror-image reality in which some consider Bjorn Lomborg’s U-turn on the threat of climate change more principled than Pachauri’s steadfast advocacy of the science. A reformed IPCC would be welcome, but there will be no winners.
Although it’s perhaps no surprise that the restoration of federal funding for stem-cell research in the United States under the Obama administration is not plain sailing, no one could have foreseen the oddness of the latest, potentially devastating obstacle. The injunction issued by a district court judge in Columbia against such funding stems from a case brought not by Christian ‘pro-life’ groups, who object to the destruction of human embryos in harvesting new stem-cell lines, but by two stem-cell scientists. James Sherley and Theresa Deisher work on adult stem cells and oppose research on embryonic cells, saying that the adult-cell work is both scientifically and ethically superior.
Outsiders to the US legal system will be baffled that a district judge can, by reinterpreting the meaning of a long-standing constraint on embryonic stem-cell work, force the National Institutes of Health instantly to freeze all funding, plunging work in progress into limbo and ensuring funding chaos months or years down the line. But there it is: Chief Judge Royce Lamberth has decided that the 1996 Dickey-Wicker Amendment to NIH budget legislation, prohibiting funding for research involving the creation and destruction of embryos, in fact must prohibit all embryonic stem-cell work whether or not it destroys or creates embryos. The injunction has been appealed by the US Department of Justice.
The amendment itself is probably here to stay, since it impinges also on abortion, but the current Obama policy had left room for the use, with donor consent, of embryos from fertility clinics that would otherwise be destroyed.
Sherley is a complicated character with an agenda that is hard to read. But the fact that a maverick case in a district court can wreck an entire nation’s research effort at the forefront of medical science is chilling.
Because every month now seems to bring a new complete genome sequence – now mouse-ear cress, now the panda – it might have been tempting at first to greet the announcement of the wheat genome with a touch of ennui. But no longer. Drought and flood have devastated wheat yields in Russia and China. Russia, one of the world’s biggest producers, has now imposed an export ban that has sent wheat prices soaring, threatening the food security of millions of people. The riots in Mozambique over bread prices may be just a taste of what is to come.
This is why the wheat genome sequence is one of the most important so far, and why public access to the data granted by the researchers, led by a team at Liverpool University, is so valuable and commendable. The genetic information should point to shortcuts for breeding of new, hardier varieties, as well as identifying specific genes that might be engineered to improve resistance to drought and disease.
Why, then, was the wheat genome not sequenced sooner? The answer is sobering: the genome is not only larger than that of most crops, but is five times larger than the human genome. And some scientists have cautioned that the British work offers just a first draft: the International Wheat Genome Sequence Consortium says that there is still a lot of work to be done in sorting and ordering the raw data.
The much-vaunted medical benefits of sequencing the human genome itself have just received some vindication from the results of clinical trials of the anti-cancer drug PLX4032. The dramatic potential of the drug for shrinking skin-cancer tumours was reported in August, and is confirmed by a recent paper in Nature. But the real excitement stems from the approach: the drug was developed to target a specific carcinogenic mutation of a gene called BRAF, involved in cell growth. The problem is that there are several dangerous mutations of BRAF alone, and thousands of other genetic mutations that also cause cancer. But the new results show that targeting a particular mutation can be highly effective, hitting only those cancer cells that possess it instead of employing the scattershot attack of current cancer chemotherapies. If many mutant-specific drugs come online, rapid gene profiling of patients could enable them to be given precisely the right treatment, without the debilitating side-effects. That, however, will require the development of an awful lot of new drugs. [See my Prospect article on the problems with Big Pharma.]
Friday, September 17, 2010
Grand designs?
My review of Hawking’s new book has now been published, although you’re unlikely to stumble across it unless you live in Abu Dhabi. Since it is published pretty much verbatim, and seems to be freely accessible, I won’t post it again here.
Friday, September 10, 2010
God, the universe, and selling books
I have a comment on the Prospect blog about the way the media has been hyperventilating (see here and here (Graham Farmelo being characteristically astute) and here) about Stephen Hawking. Here is how it started out. [Incidentally, I can't figure out why my last paragraphs are reverting to Roman typeface. Sorry for this distraction.]
It’s a harsh reality of journalistic life that you will sometimes have to write up ‘news’ that is neither new nor significant, simply because your editor knows that everyone else will do so. That is the generous interpretation of the blanket media coverage of Stephen Hawking’s pronouncement that God is no longer needed to create the universe.
Hawking has form in this arena, having previously been accorded oracular status when he uttered some comment about a Theory of Everything permitting us to ‘know the Mind of God’, the kind of idle metaphor that only someone lacking any serious interest in the interface of science and religion would employ. Hawking clearly had not read Francis Bacon’s Advancement of Learning, which wisely declares that ‘if any man think, by his inquiries after material things, to discover the nature or will of God, he is indeed spoiled by vain philosophy.’ Although it is unlikely that Bacon’s pieties were those of a closet atheist minding his back, he did at least have the good taste thus to dispense with God at the outset.
Let’s not be too harsh on Hawking: the man is one of the best physicists in the world. The problem is that, in the public view, this statement probably seems as absurd as saying that Messi is a good striker: a lame way of acknowledging incomparable genius. Most people will be astonished to hear that Hawking is not rated by his peers among the top ten physicists even of the 20th century, let alone of all time. They probably imagine he has so far been denied a Nobel prize out of sheer jealousy. Hawking is extremely smart, but so are others, and he is a long way from being Einstein’s successor.
More importantly, Hawking has no reputation among scientists as a deep thinker. There is nothing especially profound in what he has said to date about the social and philosophical implications of science in general and cosmology in particular. There is far more wisdom in the views of Martin Rees, John Barrow or Phil Anderson, not to mention the old favourites Einstein, Bohr and Feynman. Hawking’s latest remarks on the redundancy of God have little depth, as Paul Davies showed easily enough in the Guardian: if you have any kind of law-like regularity in the universe, the door is always open for those who like to attribute it to God. And Mary Warnock (no religious apologist) points out – or reminds us that Hume pointed out – that the Biblical God is not simply or even primarily a God who made the universe. It’s a sterile debate, as Bacon already saw.
This makes it ridiculous, then, that Hawking’s announcement in his new book The Grand Design (I’m currently reviewing this, and will post the review shortly) has been greeted as though it is the final judgement of science on the Biblical Creation: Hawking Has Spoken. Even atheists must feel some sympathy for the likes of Rowan Williams having to comment on such a shallow assertion, as though Hawking is supposed to have set the foundations of their faith quaking. Hawking is speaking about the God of Boyle and Newton, not the God of contemporary theology. (This is not to deny that millions still believe in this anachronistic, childish vision of God, who waved his fingers and made the world, but just to say that it is a bit silly to pander to it.)
So why does Hawking get awarded this status by the idolatrous press? It’s time to stop being squeamish and take the bull by the horns. The Cult of Hawking is the Cult of the Great Mind in the Useless Body. It is attributable in part to a simple, ghoulish fascination with the man’s physical disability, but more so (and more troublingly) to the unspoken astonishment that a man with such severe bodily impairment can be intelligent. It speaks volumes about our persistent prejudices about disability.
Tuesday, September 07, 2010
Happy now?
Here’s the pre-edited version of my latest Muse for Nature News.
**********
Does money make you happy? It depends what you mean by happy.
You want to be happy? Here’s how: be highly educated, female, wealthy, not middle-aged (tell me about it), married and self-employed. These are among the most salient characteristics of people who describe themselves as being the most happy. Misery, meanwhile, comes from unemployment, low income, divorce and poor health.
Not rocket science, is it? Nevertheless, the booming discipline of ‘happiness studies’ continues to excite controversy. What is cause and effect, for example? Are people happier when they marry, or do happy people marry?
And what exactly do we mean by happiness: that we laugh a lot, feel optimistic and secure in our lives, are serenely calm or deliriously hedonistic? In a recent Gallup poll of national happiness, the USA came fifth, and yet at the same time came 89th from ‘best’ (out of 151) in terms of ‘worry’ and had the fifth highest stress levels. How to make sense of that? Does happiness compensate for stress, or are they ineluctably conjoined?
Besides, is happiness a desirable goal? That might seem obvious (it was to the authors of the US Declaration of Independence) – and it surely seems a better measure of human wealth than conventional ‘well-being’ economic indices such as GDP. But what if a happy nation is a selfish or profligate one? And who’s to say that the inhabitants of Aldous Huxley’s Brave New World would not, blissed out by the drug soma, have rated high on the happiness scale?
These dilemmas have deep roots. Jeremy Bentham’s utilitarian political philosophy in the nineteenth century sought to arrange for the maximum happiness for the greatest number of people, according to a so-called ‘hedonistic calculus’: a principle, however, rendered indeterminate by what has been called the ‘fallacy of double optima’, with no unique optimum.
One of the most contested issues is the relationship between happiness and income. Everyone agrees that abject poverty is miserable, but how does the relationship play out above that unfortunate state? While being female or married are all-or-nothing factors, income is quantitative: if being wealthy makes you happy, does being more wealthy make you more happy?
Since most of us are, by definition, not relatively wealthy in our society, we probably feel a glow of self-righteous satisfaction from studies suggesting there is a ‘wealth threshold’ above which happiness no longer increases [1][2]. That fits with intuition: the super-rich do not strike us as a particularly joyful bunch. (In the UK we like to wheel on the Royal Family as the prime exhibit, disregarding the fact that less representative members of society you will never find.)
But now Nobel laureate economist Daniel Kahneman and his colleague Angus Deaton at Princeton University have thrown a cat among the pigeons. In a new paper in The Proceedings of the National Academy of Sciences USA [3] they use the US data from the recent Gallup survey to argue that income does continue to impact on our evaluation of life satisfaction as we enter the realm of the rich.
Does this validate the anonymous quip that those who say money can’t buy happiness don’t know where to shop? Not exactly. Kahneman and Deaton say that previous discussions have been muddied by a failure to distinguish a sense of emotional well-being from our life evaluation. The first refers to daily experience: how much we laugh, how relaxed we feel as we go about our life. The second is a more objective overview: are we content with our family, job, house, insurance, credit rating? It is not hard to imagine the head of a big corporation feeling good about all this while never cracking a grin.
The Gallup poll surveyed more than 700,000 US residents, although Kahneman and Deaton jettison about a quarter of the responses because they appear unreliable. From the rest, they deduce that income is more closely correlated with life evaluation than with emotional well-being, and that this correlation persists for all income levels, at least up to around $160,000 per annum. While reported well-being also generally increases with income, this relationship plateaus at an income of around $75,000.
For all their ambiguities, happiness studies are closely monitored by politicians and policy makers, not least because policies that make people happy seem likely to win votes. What will they make of these findings? Is it better to promote good life evaluation, or emotional well-being?
Kahneman and Deaton refrain from taking a position – and the richness and subtlety of their data advise against glib answers. As they imply, any society should wish to improve the lot of people who have poor emotional health and are gloomy about their prospects. But their results, while complicating the previous picture, surely suggest that income (and dare one therefore add, taxation levels?) should not be regarded as a relevant happiness dial for the comfortably off. While some might be determined to extract the conclusion that, as the New York Times once put it [4], ‘maybe money does buy happiness after all’, there is a strong case here that better education, secure health provision, lowering of stress, and the nurturing of social and familial relationships offer a far greater dividend of smiles.
References
1. Easterlin, R. A. in Nations and Households in Economic Growth: Essays in Honor of Moses Abramovitz (eds P. A. David & M. W. Reder) 89-124 (Academic Press, New York, 1974).
2. Layard, R. Happiness: Lessons From a New Science (Penguin, New York, 2005).
3. Kahneman, D. & Deaton, A. Proc. Natl Acad. Sci. USA doi: 10.1073/pnas.1011492107 (2010).
4. Leonhardt, D. New York Times 16 April (2008).
Wednesday, September 01, 2010
The prospect for September
A bit of overmatter from my Prospect Lab Report this month: the top story below blew up shortly before the issue went to press, so the last two stories were shelved. Here, as ever, is the unexpurgated version.
It would be nice to be able to report that the much trumpeted ‘end of antibiotics’ is just a slice of media alarmism. But it isn’t. The danger that just about all our existing antibiotics will soon be powerless against resistant bacteria, as claimed in The Lancet Infectious Diseases, is all too real. A paper in the journal reports the emergence and spread of strains of common pathogens, such as E. coli and the pneumonia bug K. pneumoniae, containing a gene that confers resistance against even current last-resort antibiotics called carbapenems. Such bacteria, Chris Walsh of Harvard Medical School confirms, “are on the brink of being impossible to treat with existing antibiotics.” “This is a very serious problem”, agrees Gerry Wright, a specialist in antibiotic resistance and discovery at McMaster University in Ontario. Without antibiotics, even routine surgery could cause fatal infections.
Antibiotic resistance has been with us ever since penicillin revolutionized medicine. So why the problem now? Partly, it’s simply becoming harder to find new drugs to expand the arsenal. But the difficulties also stem from practices within the pharmaceutical industry. “This is a very grim time in antibacterial drug development”, says Wright. “The reasons are complex, but the fact that many pharmaceutical companies have moved to a focus on chronic diseases is one.”
Wright is one of several specialists who have been clamouring for years about the danger. In 2004, Carl Nathan of Cornell University’s Weill Medical College decried the way companies look for profitable blockbuster antibiotics. These are general-purpose drugs for chronic infections, and their widespread use quickly elicits resistance. But if their use is restrained, profits fall and funding and expertise leaches away. This, along with regulatory hurdles, the debilitating effects of a spate of big pharma mergers, and myopic focus on hitting tried-and-tested biochemical targets in the pathogens, has now almost dried up the antibiotic development pipeline. Nathan called for an overhaul in the way new antibiotics are sought and brought to market, including a vigorous not-for-profit pharmaceutical sector.
Something certainly needs to change: this is a global problem for which the market may not offer any solution. “Multidrug resistant bacteria will only continue to spread”, says Wright. “There is no chance that the problem will go away.”
*****
The UK coalition government’s plan to dismantle the Human Fertilisation and Embryology Authority in its cull of ‘health quangos’ is nothing short of vandalism.
The Health Protection Agency, also on the hit list, supplies vital advice about infectious diseases to the government, public and medical profession. But that demands rather specific expertise which could at least conceivably be transferred intact within the civil service. The HFEA is different.
The HFEA was set up in 1991, after much governmental procrastination in the wake of the first IVF birth (1978) and the subsequent Warnock Report (1984) on embryo research, and its responsibilities ballooned as developments in embryology and assisted conception accelerated. The authority’s recent wrestling with the ethics of human-animal hybrid embryos and stem-cell research seems a long way from treatments for infertility, but there is an inextricable link between them, historically and scientifically. This is one reason why the possible plan floated by Health Secretary Andrew Lansley to parcel out the HFEA’s work to three other bodies is naïve and potentially dangerous. Decisions about these delicate matters at the forefront of reproductive and biomedical technology require a comprehensive overview of the context, and ever more so as time goes by.
The real tragedy is that the HFEA did its job so well, as attested by the fact that it managed to upset both religious (and secular) conservatives, for perceived liberalism, and scientists, for alleged restrictiveness (despite the UK having one of the most permissive embryo research frameworks in the world). The HFEA was genuinely independent, refusing to kowtow to government, scientists, IVF clinics, religious groups, or public opinion. Doubtless some of its decisions could be criticized, but they were always taken with sober, informed consideration. It was a bulwark against the hazards of both a laissez-faire free market in infertility treatment and knee-jerk reactionary prohibition. It will be a miracle if the same acumen can be assembled from the scattered remains.
*****
The announcement of an antiviral vaginal gel that can reduce HIV infection by around 50 percent is good news, however qualified. The current clinical trial, conducted by the Durban-based Centre for the AIDS Programme of Research in South Africa, is modest in scale and awaits replication, along with more data on safety and a better understanding of why it doesn’t always succeed. But the great virtue of this strategy is that it gives some autonomy to women, who can reduce their chance of contracting the virus when male sexual partners refuse to use a condom. In South Africa, a third of all women between 20 and 34 are thought to be HIV-positive, and they account for around 60% of all new infections.
The gel contains an antiviral drug which interferes with a key enzyme involved in viral replication, unlike previous efforts which have sought either to inhibit the entry of viruses into cells or to kill the viruses (or infected cells) directly. Testing on 889 HIV-negative women over two years showed that regular use could reduce the chance of infection by 54%. The gel should be very cheap per dose and has few side-effects. The question now is how to balance the urgency of need against time-consuming confirmation and in-depth clinical testing.
There was more promising news with the announcement that two ‘therapeutic vaccines’ for HIV – which aim to prevent transmission from infected people rather than preventing infection in the first place – have at last shown some success in boosting immune systems debilitated by HIV. The vaccines use pieces of RNA from the virus to stimulate an immune response.
Many AIDS researchers had concluded that therapeutic vaccines would not work, and even now the response to the new trials, which report only a modest suppression of the virus, is somewhat muted. Some fear the strategy might backfire by boosting evasive viral mutations.
*****
It’s a good time to be an oil specialist: lucrative contracts beckon both from BP and from the US government as they prepare for the obligatory Natural Resource Damage Assessment. But there are strings attached in either case: you probably won’t be able to publish your research for confidentiality reasons. Some academics have already declined offers for this reason. There is of course nothing very unusual about gag rules for work contracted by a private company or for government-backed research with legal implications. But it could mean that, in the absence of significant independent funding for such research, a detailed understanding of the effects of the Deepwater Horizon spill will never be made public.
On the other hand, oil clean-up technology could be improved by the carrot of a $1.4 million prize dangled by the X Prize Foundation, a Californian organization that aims to stimulate “radical breakthroughs for the benefit of humanity”. The Foundation has previously offered a $10 million award for the development of a privately funded, manned spacecraft, which was claimed by Scaled Composites, the company now working on Richard Branson’s Virgin Galactic commercial spaceflight programme. Entries for the oil prize are already being prepared. It’s good that the Foundation has noticed there are better ways to spend its money.