Saturday, January 29, 2011

'Frankenstein plays God' shock horror



When I wrote recently in New Humanist, apropos of my new book Unnatural, that “It is all too easy for self-appointed moralists who warn that reproductive technologies will lead to Frankenstein monsters and Brave New Worlds – whether they are the Daily Mail, the religiously motivated bioethicists who determined George W Bush’s biomedical policies, or anti-biotechnology crusaders – to tap into familiar, legendary nightmares that foreclose a grown-up debate about how, why and when to regulate the technical possibilities” (the piece was accompanied by the wonderful image above by Martin Rowson), I have to admit that I wasn’t thinking very much about the possibility that the Daily Mail would review Unnatural.

But if I had been, I needn’t have worried, for it seems that folks at the Mail have ruthless mental filters that transform words into precisely what they want to hear. And so it is that Christopher Hudson’s extensive review in yesterday’s Mail is amazingly favourable, calling Unnatural a “fascinating and disturbing book”.

Never mind that I pick apart the idle journalese of “playing God” – Unnatural is apparently “the story of all the Frankensteins who wanted to play God by creating mankind.” Never mind that my point about Brave New World is that it is not “eerily prophetic” at all. Never mind that the central point of the book (there’s a clue in the title) is to challenge accepted notions of “unnaturalness” – one of the key problems for human cloning is apparently “How can its unnaturalness be overcome?” Hudson says that my book “demonstrates [that cloning] could eventually destroy what it means to us to be a human being.” Well, no; I set out to demonstrate that, whatever cloning might do, it will not in fact be that. I would like to imagine that a lot of Mail readers will be in for a surprise if they buy the book on the back of this review (and please don’t let me stop you); but who knows, maybe they’ll convince themselves into drawing the same conclusions.

“From Frankenstein to clones, how our hunger to play God could be the death of us”, says the standfirst to the review. Now if only they had run it a bit earlier, that would have supplied the perfect headline for the tabloid that Martin’s Creature grasps. 

Thursday, January 27, 2011

Artificial hydrogen poses heavy challenge to quantum theory


Another piece for Nature’s online news, and while this is pretty hardcore, it is also a gorgeously bold experiment.

*****************************************************************

Analogues of hydrogen made with exotic particles test quantum chemistry to its limits.

Scientists have made new ultralight and ultraheavy forms of the element hydrogen, and investigated their chemical properties.

Donald Fleming of the University of British Columbia in Vancouver, Canada, and his coworkers have created artificial analogues of hydrogen that have masses of a little over one tenth and four times that of ordinary hydrogen. These pseudo-hydrogens both contain short-lived subatomic particles called muons, superheavy versions of the electron.

The researchers looked at how these new forms of hydrogen behave in a chemical reaction in which a lone hydrogen atom plucks another out of a two-atom hydrogen molecule – just about the simplest chemical reaction conceivable. They find that both the weedy and the bloated hydrogen atoms behave just as quantum theory predicts they should [1] – which is itself surprising.

The experiment is a ‘tour de force’, says Paul Percival of Simon Fraser University in Burnaby, Canada, a specialist in muonium chemistry.

‘I would never attempt such a difficult task myself’, Percival admits, ‘and when I first saw the proposal I was very doubtful that anything of value could be gained from the herculean effort.  Don Fleming proved me wrong. I doubt if anyone else could have achieved these results.’

A normal hydrogen atom contains a single, negatively charged electron orbiting a single positively charged proton in the nucleus. About 0.015 percent of natural hydrogen consists of the heavy isotope deuterium, in which the atoms also contain an electrically neutral neutron in the nucleus. A third isotope, tritium, has two neutrons and is produced in some nuclear reactions, but it is too dangerously radioactive for use in such experiments.

Because the chemical behaviour of atoms depends on the number of electrons they have, the three hydrogen isotopes are chemically almost identical. But the greater mass of the heavy isotopes means that they vibrate at different frequencies, and quantum theory suggests that this will produce a small difference in the rate of their chemical reactions, such as the one examined by Fleming and colleagues.
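The mass effect at work here can be sketched with the textbook harmonic-oscillator relation, in which a bond's vibrational frequency scales as the square root of stiffness over reduced mass. The numbers and function names below are my own illustration, not drawn from the paper:

```python
import math

# Harmonic-oscillator sketch: vibrational frequency goes as sqrt(k / mu),
# where the force constant k (the bond's stiffness) is essentially the same
# for all isotopes, and mu is the reduced mass of the vibrating pair.
def reduced_mass(m1, m2):
    return m1 * m2 / (m1 + m2)

m_H, m_D = 1.008, 2.014          # atomic masses in u
mu_H2 = reduced_mass(m_H, m_H)   # ~0.504 u
mu_D2 = reduced_mass(m_D, m_D)   # ~1.007 u

# With k fixed, the frequency ratio depends only on the masses
ratio = math.sqrt(mu_H2 / mu_D2)
print(f"D2 vibrates at {ratio:.3f} times the H2 frequency")  # ~0.707
```

Because the zero-point energies of the reacting bonds depend on these frequencies, the heavier isotope ends up reacting at a slightly different rate — the effect the calculations in the paper set out to predict.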

If lighter and heavier versions of hydrogen could be made, that theory could be subjected to more rigorous testing. Fleming and colleagues did this using muons produced by collisions in the Canadian particle accelerator TRIUMF in Vancouver.

Muons are related to electrons, but are more massive. “A muon is an overgrown electron – an electron on steroids – with a mass about 200 times that of an electron”, explains Richard Zare, a physical chemist at Stanford University. “But unlike the free electron the free muon falls apart, with a mean lifetime of about 2.2 microseconds.” This meant that the researchers had to work fast to study their pseudo-hydrogen.

To make the ultralight form, they replaced the proton with a positively charged muon, which has just 11 percent of the mass of a proton. And to make ultraheavy hydrogen, they replaced one of the electrons in a helium atom with a negative muon.

Helium has two electrons, two protons and two neutrons. But because the negative muon is so much more massive than the electron it replaces, it orbits much more tightly around the nucleus, and so in effect the atom becomes a kind of composite nucleus – the existing two-proton nucleus plus the muon – orbited by the remaining electron. The resulting atom has a mass a little over four times that of hydrogen.
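As a rough check of the two masses quoted above, one can simply tally the constituent particles. The masses below are standard textbook values in atomic mass units; the variable names are mine:

```python
# Back-of-envelope check of the two pseudo-hydrogen masses (in unified
# atomic mass units, u).
m_e   = 0.000549   # electron
m_mu  = 0.1134     # muon (~207 electron masses)
m_H   = 1.008      # ordinary hydrogen atom
m_He4 = 4.0026     # helium-4 atom (2 protons, 2 neutrons, 2 electrons)

# Muonium: a positive muon standing in for the proton, orbited by an electron
muonium = m_mu + m_e
# Muonic helium: one of helium's electrons swapped for a negative muon
muonic_He = m_He4 - m_e + m_mu

print(f"muonium   ~ {muonium / m_H:.2f} x hydrogen")    # ~0.11
print(f"muonic He ~ {muonic_He / m_H:.2f} x hydrogen")  # ~4.08
```

This recovers the "a little over one tenth" and "four times" figures given earlier in the article.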

Fleming and colleagues found that the reaction rates calculated from quantum theory were close to those measured experimentally. “This gives confidence in similar theoretical methods applied to more complex systems”, says Fleming.

The good agreement wasn’t necessarily to be expected, since the calculations rely on the so-called Born-Oppenheimer approximation, which assumes that the electrons adapt their trajectories instantly to any movement of the nuclei. This is generally true for electrons, which are nearly 2000 times lighter than protons. But it wasn’t obvious that it would hold up for muons, which have about a tenth of the proton’s mass.

“It surprises me at first blush that the theoretical treatments hold up so well”, says Zare. “The Born-Oppenheimer approximation is based on the small ratio of the mass of the electron to that of the mass of the nuclei. Yet suddenly the mass of the electron is increased by two-hundred-fold and all seems to be well.”

Because the muon has such a short lifetime, extending such studies to more chemically complex systems is even more challenging. However, Fleming and his colleagues propose now to look at the ‘hydrogen’ exchange reaction between the superheavy ‘hydrogen’ and methane (CH4).

References

1. Fleming, D. G. et al. Science 331, 448-450 (2011).

Monday, January 24, 2011

How words get the message across


Here is the pre-edited version of my latest news article for Nature online, with a bit of extra stuff appended for which there was no room.
***********************************************************

Languages are adapted to deliver information efficiently and smoothly.

Longer words tend to carry more information, according to research by a team of cognitive scientists at the Massachusetts Institute of Technology.

It’s a suggestion that might sound intuitively obvious, until you start to think about it. Why, then, the difference in length between ‘now’ and ‘immediately’? For many years, linguists have tended to believe that word length depended primarily on how often the word is used – a relationship discovered in the 1930s by the Harvard linguist George Kingsley Zipf [1].

Zipf believed that this link between word length and frequency stemmed from an impulse to minimize the amount of time and effort needed for speaking and writing, since it means we use more short words than long ones. But Steven Piantadosi and colleagues say that, to convey a given amount of information, it is more efficient to shorten the least informative – and therefore the most predictable – words, rather than the most frequent ones.

Zipf’s relationship is roughly correct, as implied by how much more often ‘a’, ‘the’ and ‘is’ are used in English than, say, ‘extraordinarily’. And this relationship of length to use seems to hold up in many languages. Because written and spoken length are generally similar, it applies to both speech and text.

But after analysing word use in 11 different European languages, Piantadosi and colleagues found that word length was more closely correlated with their information content than with their usage frequency. They describe their results in the Proceedings of the National Academy of Sciences USA [2].

“This is a landmark study”, says linguist Roger Levy of the University of California at San Diego. “Our understanding of the relationship between word frequency and length has remained relatively static since Zipf’s discoveries”, he says, and he feels that this new study may now supply “the largest leap forward in 75 years in our understanding of how principles of communicative efficiency govern the evolution of natural language lexicons.”

Measuring the information content of a word isn’t easy, especially because it can vary depending on the context. The more predictable a word is, the less informative it is. The word ‘nine’ in ‘A stitch in time saves nine’ contains less information than it does in the phrase ‘The word that you will hear is nine’, because in the first case it is highly predictable.

The MIT group devised a method for estimating the information content of words in digitized texts by looking at how it is correlated with – and thus, predictable from – the preceding words. For just a single preceding word, Piantadosi explains that “we count up how often all pairs of words occur together in sequence, such as ‘the man’, ‘the boy’, ‘a man’, ‘a tree’ and so on. Then we use this count to estimate the probability of a word conditioned on the previous word – or more generally, the probability of any word conditioned on any preceding sequence of a given number of words.” According to information theory, the information content is then proportional to the negative logarithm of this probability.
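A toy version of the bigram estimate Piantadosi describes might look like this in Python, with an invented miniature corpus standing in for the digitized texts (nothing here reproduces the team's actual code or data):

```python
import math
from collections import Counter

# A toy bigram model: count adjacent word pairs, turn the counts into
# conditional probabilities, then take -log2 to get the information content
# (surprisal) of a word in context, measured in bits.
corpus = "the man saw the man and the boy saw a tree near a man and a man".split()

pairs = Counter(zip(corpus, corpus[1:]))   # counts of (previous word, word)
firsts = Counter(corpus[:-1])              # counts of each word as a 'previous' word

def surprisal(prev, word):
    """Information content -log2 P(word | prev), estimated from the counts."""
    p = pairs[(prev, word)] / firsts[prev]
    return -math.log2(p)

# 'man' is common after 'the' in this corpus, so it is predictable and
# carries few bits; 'tree' after 'a' is rarer, so it carries more.
print(f"'man' after 'the': {surprisal('the', 'man'):.2f} bits")
print(f"'tree' after 'a':  {surprisal('a', 'tree'):.2f} bits")
```

The real study conditions on longer preceding sequences than a single word, but the principle — rarer in context means more informative — is the same.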

However, physicist Damián Zanette of the Centro Atómico Bariloche in Argentina, who has studied Zipf-type relationships in linguistics, is not persuaded that this method accurately captures the real information content of a word in context. This, he says, is typically determined by a span of several hundred surrounding words, not just a few [3].

Piantadosi and colleagues suggest that the relationship of word length to information content might not only make it more efficient to convey information linguistically but also make language cognition a smoother ride for the reader or listener. If shorter words carry less information, then the density of information throughout a phrase or sentence will be smoothed out, so that it is delivered at a roughly steady rate rather than in lumps. In this way, the results suggest how the lexical structure of language might aid communication.

Surprising though it may seem, some linguists have suggested previously that communication might not in fact be the primary purpose of language – Noam Chomsky, for example, has claimed that it is about establishing social relationships. Yet according to cognitive scientist Florian Jaeger of the University of Rochester in New York, these new results “suggest that communication is a sufficiently important aspect of language to shape it over time”.


References

1. Zipf, G. The Psychobiology of Language (Routledge, London, 1936).
2. Piantadosi, S. T., Tily, H. & Gibson, E. Proc. Natl Acad. Sci. USA 10.1073/pnas.1012551108 (2011).
3. Montemurro, M. A. & Zanette, D. H. Adv. Complex Syst. 13, 135-153 (2010).


Some further comments from Steven Piantadosi in response to my questions:

PB: In terms of the possible reasons for your central finding: are you suggesting that shorter words carry less information largely so that information tends to be rather evenly distributed through both text and (because of the relationship of orthographic to phonetic length) speech, i.e. the short, 'rapid-fire' words don't carry a lot of info and so don't impose a sudden high demand on cognitive processing?

SP: Yes, that's probably the most likely theory for what's going on. There are quite a few papers in psycholinguistics showing these kinds of effects (references 7,8,9,10,12 in the paper). In Levy & Jaeger, for instance, people insert optional syntactic elements like "that" in locations where there would otherwise be a peak in information content – inserting another word helps keep information per unit time lower.

PB: In this respect, what do the findings imply for the long-standing idea that language is a compromise between the needs of the speaker and those of the listener? It rather seems the balance here is in favour of the listener, who gets a smooth rather than lumpy informational stream, whereas the speaker has to do rather more speaking than if length depended primarily on frequency. Or does your idea also optimize the total amount (time) of speaking needed to convey a given amount of information, and so benefit the speaker too?

SP: This is a really interesting issue. It could be caused by speakers thinking about what listeners would want, or it could just reflect intrinsic properties of language production systems, or both. Speakers have more trouble accessing low frequency (probably also high information content) words, so I wouldn't say that this necessarily has to come from speakers designing speech for listeners. It's true that speakers have to do more speaking, but that also means they have more time to plan and produce their utterances. It also helps listeners by giving them more time to process. I don't think we know who it's really for, yet.

PB: Finally, more for my own curiosity than anything, I can't help wondering if anything of this sort works for Chinese. Obviously one tends to lose the phonetic/orthographic link there - and while commonly used words do sometimes have simpler written characters, this is not always so. Do you nonetheless expect to see any kind of relationship between information content and the number of strokes in the characters? Does any such thing then survive in speech patterns?

SP: Ah that's interesting. I'm not sure I would necessarily predict effects in Chinese orthography per se, but it would be interesting to look – it would be a neat case for seeing if there are actually influences on the writing system. In the current work, we used orthography largely as a proxy for phonetic length. Chinese has very many monosyllabic words so it's not clear that word length has much variance to be explained there. That raises the interesting question of why Chinese is like that. It may be that information content is modulated in other ways in Chinese, but I don't know.

Thursday, January 20, 2011

Unnatural events


Seems a timely point to mention that my new book Unnatural is about to appear – it’s officially released at the start of February. I have a forthcoming Opinion piece in New Scientist on the topic (5 Feb issue), and have just recorded an item about it for the Guardian books podcast. I have several talks on this (and other things) coming up in the next few months, and will put a list on my web site.

Thursday, January 13, 2011

For geeks only

That means you.

First, for anyone interested in the regulation of synthetic biology, there is a set of guidelines issued by the International Risk Governance Council in Geneva, in the writing of which I played a part.

Second, here is a little news item about lead-acid batteries with a fun bottom line (I know, it sounds unlikely).

Friday, January 07, 2011

What is a bond?

My piece on the chemical bond is now published in Nature. I hope it attracts more comment – already I’m pleased to see remarks from the IUPAC team who are redefining the hydrogen bond (I had no room to talk about this in any detail, or to supply the link), and also some comment on Bader’s perspective, to which again I could only allude in the briefest of terms – it deserves more space.

... ah, Julie's post about the inaccessibility behind Nature's firewall makes me feel bad, so here's the whole piece after all, before final editing so with a few more refs and details included:

******************************

Not so long ago the chemistry student’s standard text on the theory of chemical bonding was Charles Coulson’s Valence (1952). Absent from it was Coulson’s real view of the sticks that generations of students have drawn to link atoms into molecules. ‘A chemical bond is not a real thing: it does not exist: no one has ever seen it, no one ever can. It is a figment of imagination which we have invented,’ he wrote [1].

There is a good reason for postponing this awkward truth. The bond is literally the glue that makes the entire discipline cohere, and so to consider it an objective reality is necessary for any kind of chemical discourse. Chemistry is in fact riddled with such convenient (but contested [2]) fictions, such as electronegativity, oxidation state, tautomerism and acidity.

Disputes about the correct description of bonding have ruffled chemists’ feathers since the concept of molecular structure first emerged in the mid-nineteenth century. Now they are proliferating, as new theoretical and experimental techniques present new ways to probe and quantify chemical bonds [3]. Traditional measures such as crystallographic atomic distances and dissociation energies have been supplemented by spectroscopic techniques for determining vibrational frequencies, shifts in the electronic environment of the atom, magnetic interactions between atoms, measurements of force constants, and a host of quantum-chemical tools for calculating such aspects as electron distributions, electron localization and orbital overlap.

The nature of the chemical bond is now further complicated by the introduction of the dynamical dimension. Molecules have traditionally been regarded, if not as static, then as having platonic architectural frameworks which are merely shaken and rotated by thermal motions. The bonds get stretched and bent, but they still have an equilibrium length and strength that seems to justify their depiction as lines and stalks. Now, thanks to ultrafast spectroscopies, we are no longer restricted to these time-average values to characterize either structure or reactivity. What you ‘measure’ in a bond depends also on when you measure it.

Some chemists argue that in consequence the existence (or not) of a bond depends on how the problem is probed; others are committed to absolute criteria [4]. This difference of opinion goes to the heart of what chemistry is about: can all be reduced to quantum physics or are fuzzy heuristics essential? More pressingly, the issue of how best to describe a chemical bonding pattern has tangible implications for a wide range of problems in chemistry, from molecules in which atoms are coerced out of their usual bonding geometry [5] to the symmetric hydrogen bond (where the hydrogen is shared equally between two atoms) [6,7] and new variations on old themes such as aromaticity (special patterns of ‘smeared-out’ bonding like that in benzene) [8].

Just about every area of chemistry harbours its own bonding conundrums, almost any of which illustrate that we have a far from exhaustive understanding of the ways in which quantum rules will permit atoms to unite – and that in consequence our chemical inventiveness suffers from a limited view of the possibilities.

Carving up electrons

We can all agree on one thing: chemical bonding has something to do with electrons. Two atoms stick together because of the arrangement of electrons around their nuclei. In the nineteenth century it was commonly thought that this attraction was electrostatic: that atoms in molecules are positively or negatively ionized. That left the puzzle of how identical atoms can form diatomic molecules such as H2 and O2. American chemist G. N. Lewis proposed that bonding can instead result from the sharing of electrons to create filled shells of eight, visualized as the corners of a cube [9].

In the 1920s and 30s Linus Pauling showed how this interaction could be formulated in the language of quantum mechanics as the overlap of electron wavefunctions [10]. In essence, if two atomic orbitals each containing a single electron can overlap, a bond is formed. Pauling generalized earlier work on the quantum description of hydrogen to write an approximate equation for the wavefunction created by orbital overlap. This became known as the valence-bond (VB) description.

But an approximation is all it is. At the same time, Robert Mulliken and Friedrich Hund proposed another way to write an approximate wavefunction, which led to an alternative way to formulate bonds: not as overlaps between specific orbitals on separate atoms but as electron orbitals that extend over many atoms, called molecular orbitals (MOs). The relative merits of the VB and MO descriptions were debated furiously for several decades, with no love lost between the protagonists: Mulliken’s much-repeated maxim ‘I believe the chemical bond is not so simple as some people seem to think’ was possibly a jibe at Pauling. By the 1960s, for all Pauling’s salesmanship, it was generally agreed that MO theory was more convenient for most purposes. But the debate is not over [11], and Roald Hoffmann of Cornell University insists that ‘discarding any one of the two theories undermines the intellectual heritage of chemistry’.

Both options are imperfect, because they insist on writing the electronic wavefunction as some combination of one-electron wavefunctions. That’s also the basis of the so-called Hartree-Fock method for calculating the ground-state wavefunction and energy of a molecular system – a method that became practical in the 1950s, when computers made it possible to solve the equations numerically. But separating the wavefunction into one-electron components is a fiction, since the distribution of one electron depends upon the distributions of the others. The difference between the true ground-state energy and that calculated using the Hartree-Fock approach is called the correlation energy. More recent computational methods can capture most of the correlation energy – but none can give an exact solution. As a result, describing the quantum chemical bond remains a matter of taste: all descriptions are, in effect, approximate ways of carving up the electron distribution.

If that were the limit of the bond’s ambiguity, there would be little to argue about. It is not. There is, for example, the matter of when to regard two atoms as being bonded at all. Pauling’s somewhat tautological definition gave the game away: ‘there is a chemical bond between two atoms or groups of atoms in case that the forces acting between them are sufficient to lead to the formation of an aggregate with sufficient stability to make it convenient for the chemist to consider it as an independent molecular species’ [1]. Pauling himself admitted that although his definition will in general exclude the weak van der Waals (‘induced dipole’) attraction between entities, occasionally – as in the association of two oxygen molecules into the O4 cluster – even this force can be strong enough to be regarded as a chemical bond.

It’s no use either suggesting (as Coulson did) that a bond exists whenever the combined energy of the objects is lower than that when they are separated by an infinite distance. This is essentially always the case, at least for electrically neutral species. Even two helium atoms experience mutual van der Waals attraction, which is after all why helium is a liquid at very low temperature, but they are not generally thought to be chemically bonded as a result.

Besides, the ‘bonded or not’ question becomes context-dependent once atoms are embedded in a molecule, where they may be brought into proximity merely by geometric factors, and where there is inevitably some arbitrariness in assigning them an individual electronic configuration. The resulting ambiguities were illustrated recently when three experts on inorganic compounds failed to agree about whether two sulphur atoms in an organometallic compound are linked by a bond [12]. The argument involved different interpretations of quantum-chemistry calculations, tussles over the best criteria for identifying a bond, and evidence of precedent from comparable compounds.

All this is merely a reminder that the molecule is ultimately a set of nuclei embedded in a continuous electron cloud that stabilizes a particular configuration, which balls and sticks can sometimes idealize and sometimes not. This doesn’t mean that disputes about the nature of the chemical bond are simply semantic. It matters, for example, whether we regard a very strong multiple bond as quintuple or sextuple, even if this is a categorization that only textbooks, and not nature, recognize.

Besides, how we choose to talk about bonds can determine our ability to rationalize real chemical behaviour. For example, the different descriptions of the bonds in what are now called non-classical ions of hydrocarbons – whose relative merits were furiously debated in the 1950s and 60s – have direct implications for the way these species react. Whether to consider the bonding non-classical, in the sense that it involved electrons spread over more than two atomic nuclei, or tautomeric, involving rapid fluctuations between conventional two-atom bonds, was not just a question of convention. It had immediate consequences for organic chemistry [13].

Perhaps one might seek a distinction between bonded and not-bonded in terms of how the force between two atoms varies with their separation? Yes, there is an exponential fall-off for a covalent bond like that in H2, and a power-law decay for van der Waals attraction. But the lack of any clear distinction between these two extremes has been emphasized in the past two decades by the phenomenon of aurophilicity [14,15]. Organometallic compounds containing gold with only a few chemical groups attached tend to aggregate, forming dimers or linear chains. In aurophilic bonds, the basic interaction has the same origin as the van der Waals force: the electron clouds ‘feel’ each other’s movements, so that random fluctuations of one induce mirror-image fluctuations of the other. But that interaction is modified here by relativistic effects: the changes in electron energies resulting from their high speeds in orbitals close to gold’s highly charged, massive nuclei [15,16]. Aurophilic bonds have therefore been described as a ‘super van der Waals’ interaction. Does that make them true bonds? It’s chemically meaningful to treat them that way (they’ll even serve for cementing new ‘designer’ molecular crystals [17]), but perhaps at the cost of relinquishing potential distinctions.
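The contrast between the two fall-off laws mentioned above can be made concrete with a schematic calculation. The functional forms are the standard ones, but the constants here are arbitrary illustrative choices, not fitted to any real pair of atoms:

```python
import math

# Schematic comparison of the two limiting distance dependences: a covalent
# bond's attraction dying off exponentially with separation r, versus the
# van der Waals (dispersion) attraction dying off as a power law (~1/r^6).
# Decay constant and prefactors are arbitrary, chosen only to show the shapes.
def covalent(r, a=2.0):
    return math.exp(-a * r)

def van_der_waals(r):
    return 1.0 / r**6

for r in (1.0, 2.0, 4.0, 8.0):
    print(f"r = {r}: covalent ~ {covalent(r):.2e}, vdW ~ {van_der_waals(r):.2e}")
```

However the constants are chosen, the exponential eventually falls far below the power law, which is why the two regimes are in principle distinguishable at large separation — the point being that intermediate cases like aurophilicity blur exactly this distinction.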

In uniting ‘closed-shell’ atoms, aurophilicity has sometimes been compared to hydrogen bonding, which is of comparable strength. Hydrogen bonds have traditionally been rationalized in electrostatic terms: positively polarized hydrogen atoms drawn towards regions of high electron density, due for example to ‘lone pairs’. But the bond has some covalent, electron-sharing character too, as is clear from its directional nature (it tends to have a 180° bond angle). Quantifying that is not at all straightforward, however, and has only very recently been done experimentally [18], prompting a task group of the International Union of Pure and Applied Chemistry to propose a new definition of the hydrogen bond (open for comment until this March) to replace the older electrostatic picture [19]. It’s an indication of how new methodology can restructure thinking about apparently familiar – and vitally important – modes of bonding. Even then, the IUPAC report warns that ‘there will be borderline cases for which the interpretation of the evidence might be subjective’: an explicit admission that categorizing bonds must remain an art, informed but not wholly determined by scientific criteria.

Moving target

How dynamics colours the notion of a chemical bond is an increasingly subtle matter. Atomic motions make even a ‘simple’ molecule complex; any movement of one nucleus demands that the entire electron cloud adjusts. So a jiggle of one group of nuclei can make it easier to cleave off another.

This complication never used to matter much in chemistry. The movements were too rapid to be observable, much less exploitable. But ultrashort pulsed lasers have moved the goal posts. For example, we can pump energy into a vibrational mode to weaken a specific bond, enabling selective molecular surgery [20]. We can ask about the chemical behaviour of a molecule at a particular moment in its dynamical evolution: even a strong bond is weakened when a vibration stretches it beyond its average, equilibrium length, so in ultrafast chemistry it may no longer be meaningful to characterize bonds simply as strong or weak. As Fleming Crim of the University of Wisconsin-Madison puts it, ‘a bond is an entity described by quantum mechanics but not a fixed ‘entity’ in that it will behave differently depending on how we perturb and interrogate it.’ The trajectory of a chemical reaction must then be considered not as a simple making and breaking of bonds but as an evolution of atoms on a potential-energy surface. This was always implicit in classical drawings of transition states as molecular groupings containing dashed lines, a kind of ‘almost bond’ in the process of breaking or forming. Now that is explicitly revealed as a mere caricature of a complicated dynamical process in space and time.

Underlying most of these discussions is an unspoken assumption that it is meaningful to speak, if not of a ‘bond’ as an unchanging entity, then at least of an instantaneous bound state for a particular configuration of nuclei. This assumes that the electrons can adjust more or less instantly to any change in the nuclear positions: the so-called Born-Oppenheimer approximation. Because electrons are so much lighter than nucleons, this assumption is usually justified. But some clear breakdowns of the approximation are now well documented [21]. They are best known in solid-state systems [22], and in fact superconductivity is one of the consequences, resulting from a coupling of electron and nuclear motions. Such things may also happen in molecules, particularly in the photochemistry of polyatomic molecules, which have a large number of electronic states close in energy [23]; they have also been observed for simple diatomic molecules in strong electric fields [24]. As a result, the molecular degrees of freedom may become interdependent in strange ways: rotation of the molecule, for example, can excite vibration. In such situations, the very notion of an electronic state begins to crumble [21].

Embrace the fuzziness

These advances in dynamical control of quantum states amount to nothing less than a new vision of chemistry. The static picture of molecules with specific shapes and bond strengths is replaced by one of a bag of atoms in motion, which can be moulded and coaxed into behaviours quite different from those of the equilibrium species. It does not demand that we abandon old ideas about chemical bonds, nor does it truly challenge the ability of quantum theory to describe atoms and their unions. But it recommends that we view these bonds as degrees of attraction that wax and wane – or as cartoon representations of a molecule’s perpetual tour of its free-energy landscape. At a meeting in 1970, Coulson asserted that the simple notion of a chemical bond had already become lost, and that it seemed ‘something bigger’ was needed to replace it. ‘Whether that ‘something bigger’… will come to us or not is a subject, not for this Symposium, but for another one to be held in another 50 years time’, he said [25]. That moment is almost upon us.

But we needn’t fret that the ‘rules’ of bonding are up for grabs — quite the converse. While there may be some parts of science fortunate enough to be exhaustively explained by a single, comprehensive theory, this isn’t likely to be a general attribute. We are typically faced with several theories, some overlapping, some conflicting, some just different expressions of the same thing. Our choice of theoretical framework might be determined not so much by the traditional criterion of consistency with experiment as by more subjective reasons. According to Roald Hoffmann of Cornell University, these preferences often have an aesthetic component, depending on factors such as simplicity, utility for ‘telling a story’ about chemical behaviour, the social needs of the community, and the question of whether a description is productive.

As Hoffmann says, ‘any rigorous definition of a chemical bond is bound to be impoverishing’. So his advice to ‘have fun with the fuzzy richness of the idea’ seems well worth heeding.


References

1. Coulson, C. A. The Spirit of Applied Mathematics 20-21 (Clarendon Press, Oxford, 1953).
2. Jansen, M. & Wedig, U. Angew. Chem. Int. Ed. 47, 10026-10029 (2008).
3. J. Comput. Chem. special issue, 28, 1-466 (2007).
4. Cortés-Guzmán, F. & Bader, R. F. W. Coord. Chem. Rev. 249, 633 (2005).
5. Merino, G., Méndez-Rojas, M. A., Vela, A. & Heine, T. J. Comput. Chem. 28, 362-372 (2007).
6. Jensen, S. J. K. & Csizmadia, I. G. Chem. Phys. Lett. 319, 220-222 (2000).
7. Benoit, M., Marx, D. & Parrinello, M. Nature 392, 258-261 (1998).
8. Abersfelder, K., White, A. J. P., Rzepa, H. S. & Scheschkewitz, D. Science 327, 564-566 (2010).
9. Lewis, G. N. J. Am. Chem. Soc. 38, 762 (1916).
10. Pauling, L. The Nature of the Chemical Bond (Cornell University Press, Ithaca, 1939).
11. Hoffmann, R., Shaik, S. & Hiberty, P. C. Acc. Chem. Res. 36, 750-756 (2003).
12. Alvarez, S., Hoffmann, R. & Mealli, C. Chem. Eur. J. 15, 8358-8373 (2009).
13. Brown, H. C. The Nonclassical Ion Problem (Springer, Berlin, 1977).
14. Schmidbaur, H. Gold Bull. 13, 3-10 (2000).
15. Pyykkö, P. Chem. Soc. Rev. 37, 1967-1997 (2008).
16. Schmidbaur, H., Cronje, S., Djordjevic, B. & Schuster, O. Chem. Phys. 311, 151-161 (2005).
17. Katz, M. J., Sakai, K. & Leznoff, D. B. Chem. Soc. Rev. 37, 1884-1895 (2008).
18. Isaacs, E. D. et al. Phys. Rev. Lett. 82, 600-603 (1999).
19. Arunan, E. et al., ‘Definition of the hydrogen bond’, recommendation submitted by IUPAC task group 2004-026-2-100, October 2010. See http://media.iupac.org/reports/provisional/abstract11/arunan_310311.html
20. Crim, F. F. Science 249, 1387 (1990).
21. Sukumar, N. Found. Chem. 11, 7-20 (2009).
22. Pisana, S. et al. Nature Mater. 6, 198-201 (2007).
23. Worth, G. A. & Cederbaum, L. S. Annu. Rev. Phys. Chem. 55, 127-158 (2004).
24. Sindelka, M., Moiseyev, N. & Cederbaum, L. S. Preprint at http://www.arxiv.org/abs/1008.0741.
25. Coulson, C. A. Pure Appl. Chem. 24, 257-287 (1970).

Water mess

I could say a lot about this murky business, but won’t. Michael Banks has done a good job of presenting the facts here, as far as I (as one of the organizing committee) can tell. None of us knows quite what is going to come of it all, except that it seems unlikely that the Nobel decision will be changed. It seems to set a troubling precedent. But if nothing else, it seems to confirm how woefully vulnerable water research is to outbreaks of a pathological nature.

Sunday, January 02, 2011

The Year of Chemistry (but some physics and biology too)

I seem to have ended 2010 with a little cluster of articles here and there. In Physics World I have a feature on single-molecule sequencing of DNA using nanopores – an exciting area that I’m now convinced is going to pay off some time soon, and which will demonstrate that advances in understanding of biology still frequently hinge on the technical capability that physics and chemistry supply. Oddly, the December issue of Physics World seems still not to be in circulation or live online, but there’s a preview of the piece here. In Nature I have a couple of pieces to mark the Year of Chemistry in 2011 – an In Retrospect perspective on Linus Pauling’s classic text The Nature of the Chemical Bond and, as the main course to that hors d’oeuvre, an article on changing views of the chemical bond. The first of these is the first item below (the long version, with material that was rightly cut for the published version); the second is too long for that, but will appear in this week’s issue of Nature. I have a follow-up on the Peter Debye story below as my Crucible column in the January Chemistry World; that’s the second item below. And finally, I have a piece in New Humanist that trails my next book Unnatural, coming out in February, which picks up on the forthcoming production of Frankenstein at the National Theatre, directed by Danny Boyle. I’m greatly looking forward to that performance, and hope to be reviewing it for Nature. The NH piece is graced by one of Martin Rowson’s fabulous illustrations – worth the cover price for this alone.

And Happy New Year to everyone.

***********************

Linus Pauling’s The Nature of the Chemical Bond has, like Newton’s Principia or Darwin’s Origin of Species, the kind of legendary status that is commonly deemed to obviate any obligation to read it. Every chemist learns of its transformative role in uniting the prevailing view of molecules as assemblies of atoms with the new quantum-mechanical picture of atomic wavefunctions. But the book is long, by chemists’ standards mathematical, and anyway we now know that there are more versatile and useful approaches to the quantum bond than Pauling’s.

Yet Pauling’s book remains a good primer on the basic facts of chemical bonding – impressive for a book almost 70 years old. That’s not to say that the book should be more widely read – there are naturally better and more relevant treatments of the subject now, and The Nature of the Chemical Bond does not benefit from the elegant prose of Darwin’s works – but it is still bracing to do so. The best preparation is to look first at what more or less contemporary textbooks have to say about bonding. To take two random examples: Inorganic Chemistry (Macmillan, 1922), by the eminent T. Martin Lowry, professor of physical chemistry at Cambridge, barely gets beyond John Dalton’s symbolic ‘ball’ molecules and Berzelius’s Law of Multiple Proportions (elements combine in simple ratios); Outlines of Physical Chemistry (16th edn, Methuen, 1930) by George Senter of Birkbeck College, a student of Wilhelm Ostwald and Nernst, doesn’t even mention the chemical bond but speaks in terms of affinities. They are products of the nineteenth century.

It’s true that this is not entirely representative, for by then the problem of how to describe the chemical bond was already being framed in terms of atomic physics. The English chemist Edward Frankland introduced the term in 1866, but regarded it not as a physical connection, as implied by the then-common practice of drawing lines between elemental symbols, but as a kind of force akin to that which binds the solar system. Berzelius suspected that this force was electrostatic: the attraction of oppositely charged ions. That view seemed favoured by J. J. Thomson’s discovery of the electron in 1897, since ions could result from an exchange of electrons between nuclei.

But Gilbert Lewis, another Nernst protégé at the University of California at Berkeley, argued that bonding results instead from sharing, not exchange, of electrons. More precisely, this gives rise to what Irving Langmuir later called a covalent bond, as opposed to the ionic bond that comes from electron exchange. In 1916 Lewis outlined the view that atoms are stabilized by having a full ‘octet’ of electrons, visualized as the corners of a cube, and that this might come about by sharing vertices or edges of the cubes. Langmuir popularized (in Lewis’s view, appropriated) this model, which seemed vindicated when Niels Bohr explained how the octets arise from quantum theory, as discrete electron shells.

Yet this remained a rudimentary grafting of quantum theory onto the notions that chemists used to rationalize molecular formulae. Pauling, a supremely gifted young man from a poor family in Oregon who won a scholarship to the prestigious California Institute of Technology in 1922, was convinced that chemical bonding needed instead to be understood from quantum first principles. He wasn’t (as sometimes implied) alone in that – in particular, Richard Tolman at Caltech held the same view. Pauling had a golden opportunity to develop the notion, however, when in 1926 a Guggenheim scholarship allowed him to come to Europe to visit the architects of quantum theory: Bohr at Copenhagen, Arnold Sommerfeld at Munich and Erwin Schrödinger at Zurich. He also met Fritz London and Walter Heitler, who in 1927 published their quantum-mechanical description of the hydrogen molecule. Here they found an approximate way to write the wavefunction of the molecule which, when inserted into the Schrödinger equation, allowed them to calculate the binding energy, in reasonable agreement with experiment.

Pauling expanded this treatment to the molecular hydrogen ion H2+, and generalized it into a description called the valence-bond model. He considered that if the wavefunction that offers the lowest energy turns out to be one that is a combination of the wavefunctions of two or more structures, the molecule can be considered to ‘resonate’ between the structures. The molecule is then stabilized by ‘resonance energy’. “It is found that there are many substances whose properties cannot be accounted for by means of a single electronic structure of the valence-bond type, but which can be fitted into the scheme of classical valence theory by the consideration of resonance among two or more such structures.” For example, the H2+ ion can be considered a resonance between HA+·HB and HA·HB+: the electron resonates between the two nuclei.
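The arithmetic of resonance stabilization can be seen in a toy two-state calculation: couple two equivalent ‘structures’ of energy E0 and the symmetric combination drops below either structure alone by the coupling strength. A minimal sketch with numpy, in which the values of E0 and t are arbitrary illustrative numbers, not real H2+ parameters:

```python
import numpy as np

# Two equivalent 'structures' with energy E0, coupled by an off-diagonal
# matrix element -t (arbitrary illustrative numbers, not real H2+ values)
E0, t = -13.6, 2.0
H = np.array([[E0, -t],
              [-t, E0]])

# eigh returns eigenvalues of a symmetric matrix in ascending order
eigvals, eigvecs = np.linalg.eigh(H)
ground = eigvals[0]

print(round(ground, 6))       # -15.6: energy of the symmetric combination
print(round(E0 - ground, 6))  # 2.0: the 'resonance energy' t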

Pauling also showed in a paper of 1928 how the bonding in molecules such as those of four-valent carbon can be explained in terms of the concept of ‘hybridization’, in which atomic electron orbitals (here the so-called 2s and three 2p orbitals) are ‘mixed’ into hybrid orbitals with a new geometric distribution in space: for carbon, they give rise to four sp3 orbitals which create a tetrahedral covalent bonding arrangement. These ideas were published in a series of papers in 1931 in the Journal of the American Chemical Society that formed the core of The Nature of the Chemical Bond. The book remained in print through three editions, the last appearing in 1960. The scope of the book is breathtaking: it brings multiple bonds, ionic, metallic and hydrogen bonds all within the framework, and explains how the ideas fit with observations of bond lengths and ionic sizes in X-ray crystallography, the technique that Pauling studied from the outset at Caltech and which eventually led to his seminal work in the 1950s on the structure of proteins and nucleic acids.
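The tetrahedral arrangement of the four sp3 hybrids is easy to verify numerically: they point along alternating body diagonals of a cube, and the angle between any pair is arccos(−1/3). A small sketch (geometry only, not an orbital calculation):

```python
import numpy as np

# The four sp3 hybrids point along alternating body diagonals of a cube
dirs = np.array([[ 1.0,  1.0,  1.0],
                 [ 1.0, -1.0, -1.0],
                 [-1.0,  1.0, -1.0],
                 [-1.0, -1.0,  1.0]])
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # normalize each direction

# Angle between any two hybrids: arccos(-1/3), the tetrahedral angle
angle = np.degrees(np.arccos(dirs[0] @ dirs[1]))
print(round(angle, 2))  # 109.47, the H-C-H bond angle of methane
```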

Pauling acknowledges in his book that it is a bit arbitrary to divide up the bonding into particular, resonating configurations of nuclei and electrons; but he says we do that all the time. “The description of the propane molecule as involving carbon-carbon single bonds and carbon-hydrogen single bonds is arbitrary; the concepts themselves are idealizations.” The wavefunction is all that really matters.

It is one thing to say it, however, and quite another to accept this arbitrariness in the face of an alternative. In the late 1920s, Robert Mulliken at the University of Chicago and Friedrich Hund in Göttingen devised a different quantum description of chemical bonding which approximated the electron wavefunctions in another way, giving rise to ‘molecular orbitals’ in which electrons were considered to be distributed over several nuclei. This model gave a rather simpler picture for explaining molecular electronic spectra: the quantum energy levels of electrons. What is more, it could offer a single description of some molecules for which the valence-bond approach needed to invoke resonance between a great many discrete structures. This was especially true for aromatic molecules such as benzene: the VB model needed something like 48 separate structures for naphthalene, and, in the case of ferrocene described in the 3rd (1960) edition of The Nature of the Chemical Bond, no fewer than 560. Evidently, while neither the MO nor VB models could lay claim to being more fundamental or ‘correct’, the former had significant advantages from a practical point of view. This was suspected even when Pauling’s book first appeared – some reviewers criticised him for not mentioning the rival theory, while one suspected that the VB method might triumph purely because of Pauling’s superior presentational skills. Pauling himself never accepted that MO theory was generally more useful, although it was the consensus among chemists by the 1970s.

The significance of The Nature of the Chemical Bond was not so much that it pioneered the quantum-mechanical view of bonding – London and Heitler had done that – but that it made this a chemical theory, a description that chemists could appreciate rather than an abstract physical account of wavefunctions. It recognized that, for a mathematical model of physical phenomena to be useful, it needs to accommodate itself to the intuitions and heuristics that scientists need in order to talk coherently about the problem. Emerging from the forefront of physics, this was nevertheless fundamentally a book for chemists.

**********************

In Kurt Vonnegut’s 1961 novel Mother Night, an American writer named Howard Campbell is brought to trial for his crimes as a Nazi propagandist during the Second World War. The apolitical Campbell decided to remain in Germany after Hitler came to power in 1933, where he is persuaded to make English radio broadcasts of Nazi propaganda. But he has also been enlisted by an operative of the US War Department to lace his broadcasts with intelligence messages coded in coughs and pauses. This role is never made public, and Campbell is constantly threatened with exposure of his ‘Nazism’ while trying to lead an anonymous life post-war in New York.

It would be unwise to stretch too far any parallels with the life of Peter Debye, the Dutch physical chemist who won the 1936 chemistry Nobel for his work on molecular structure and dipole moments. But Mother Night came to my mind after hearing the latest suggestion that Debye, who has been reviled in the past for alleged collaboration with the pre-war Nazi regime, might have been passing on information about German war technology to a spy for the British secret service in Berlin.

The evidence for that, outlined in a paper by retired chemist Jurrie Reiding after consulting Debye’s archival documents in America, is extremely circumstantial [1]. Debye was a lifelong friend of Paul Rosbaud, an Austrian chemist who hated the Nazis and spied for the Allies during the war under the codename ‘Griffin’. Reiding says that such a friendship would be inconceivable if Debye was a Nazi sympathizer. But there are no more than vague hints about whether Debye was actually one of Rosbaud’s informants.

Debye’s links with Nazism were asserted in a 2006 book Einstein in Nederland by the Dutch journalist Sybe Rispens, and were outlined in an article ‘Nobel Laureate with dirty hands’ published in a Dutch periodical in connection with the book. Here Rispens explained (as already known to historians) that Debye, as president of the German Physical Society (DPG), had signed a letter in 1938 expelling Jews from the society. Panicked by the media exposé, the University of Utrecht removed Debye’s name from its institute for nanomaterials science, while the University of Maastricht withdrew from an annual research prize named after Debye.

A follow-up report on the matter commissioned by the Netherlands Institute for War Documentation (NIOD) changed the accusation of collaboration to one of ‘opportunism’, and the decisions of both universities have now been reversed. But Debye’s name remained tainted in the Netherlands, despite protestation from many scientists both in Europe and in the US, where Debye worked at Cornell University after leaving Germany in 1940.

There’s good reason to think that Debye was no friend of the Nazis. He collected his Nobel prize against their expressed wishes, and they thought him far too friendly to the Jews in his role as DPG president. Indeed, he even – with Rosbaud’s assistance – helped the Jewish nuclear physicist Lise Meitner flee Germany.

And yet why did he stay in Germany so long, when others left? Roald Hoffmann at Cornell has argued that this inevitably taints Debye’s reputation. ‘In the period 1933-39’, he says, ‘Debye took on positions of administration and leadership in German science, aware that such positions would involve collaboration with the Nazi regime. The oppressive, undemocratic, and obsessively anti-Semitic nature of that regime was clear. Debye chose to stay and, through his assumption of prominent state positions within a scientific system that was part of the state, supported the substance and the image of the Nazi regime.’

Clearly Debye’s story is not one of heroic self-sacrifice; the issue is rather where mild resistance blends into passive collusion. Cornelis Gorter, a physicist at Leiden University who knew Debye well, said that (like Howard Campbell) ‘he was not at all a Nazi sympathizer but was apolitical.’ Yet it seems that, also like Campbell, his deeds can tell quite different stories when viewed from different perspectives. The accusation of opportunism in the NIOD report came largely because, having occupied positions of power in Nazi Germany, Debye went on to serve the US war effort enthusiastically, for example through his work on synthetic rubber. That could suggest ingratiating collaboration with any ruling power, but it also fits the picture of Debye striving to limit Nazi abuses before finally fleeing to oppose them more openly.

This situation is reminiscent also of the controversy about Werner Heisenberg, memorably explored in Michael Frayn’s play Copenhagen. Did Heisenberg actively drag his heels to thwart the Nazi efforts to make an atomic bomb, or did he simply get the physics wrong? Did he even know his motives himself? And if not, how can we hope to?

A clue to Debye’s position may lie in a letter he wrote to the physicist Arnold Sommerfeld just before he left Germany for good. His aim, he said, was ‘not to despair and always be ready to grab the Good which whisks by, without granting the Bad any more room than is absolutely necessary. That is a principle of which I have already made much use.’

But maybe the real moral is the one that Vonnegut adduced for Mother Night: ‘We are what we pretend to be, so we must be careful about what we pretend to be.’

1. Reiding, J. Ambix 57, 275-300 (2010).

Thursday, December 16, 2010

All the world's words

Here's the pre-edited (but mostly identical) version of my story for Nature news on an intriguing paper in Science on data-mining of Google Books. There's the danger that in the wrong hands this kind of thing could end up supplanting textual and historical analysis with lexical statistics. But there's clearly a wealth of interesting stuff to be gleaned this way. And I thoroughly approve of a paper that is not afraid to show a sense of humour.

*********************************************************

The digitization of books by Google Books has provoked controversy over issues of copyright and book sales, but for linguists and cultural historians it could offer an unprecedented treasure trove. In a paper in Science [1], researchers at Harvard University and the Google Books team in Mountain View, California, herald a new discipline, called culturomics, which mines this literary bounty for insights into trends in what cultures can and will talk about through the written word.

Among the findings described by the collaboration, led by biologist Jean-Baptiste Michel at Harvard, are the size of the English language (around one million words in 2000), the typical ‘fame trajectories’ of well-known people, and the literary signatures of censorship such as that imposed by the German Nazi government.

‘The possibilities with such a new database, and the ability to analyze it in real time are really exciting’, says linguist Sheila Embleton of York University in Canada. She concurs with the authors’ claim that culturomics offers ‘a new type of evidence in the humanities.’

‘Quantitative analysis of this kind can reveal patterns of language usage and of the salience of a subject matter to a degree that would be impossible by other means’, agrees historian Patricia Hudson of Cardiff University in Wales.

‘The really great aspect of all this is using huge databases, but they will have to be used in careful ways, especially considering alternative explanations and teasing out the differences in alternatives from the database,’ says Royal Skousen, a linguist at Brigham Young University in Provo, Utah. But he is not won over by the term ‘culturomics’: ‘It smacks too much of ‘freakonomics’, and both terms smack of amateur sociology.’

Using statistical and computational techniques to analyse vast quantities of data in historical and linguistic research is nothing new in itself – the fields called quantitative history and quantitative linguistics are well established. But it is the sheer volume of the database created by Google Books that sets the new work apart.

So far, Google has digitized over 15 million books, representing about 12 percent of all those ever published. Michel and his colleagues performed their analyses on just a third of this sample, selected on the basis of the good quality of the digitization via optical character recognition and reliable information about the provenance, such as the date and place of publication.

The resulting data set contained over 500 billion words, mostly in English. This is far more than any single person could read: a fast reader would, without breaks for food and sleep, need 80 years to finish the books for the year 2000 alone.
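The ‘80 years’ figure can be checked with back-of-envelope arithmetic; the reading speed of 200 words per minute used below is my assumption, since the article does not state one:

```python
# Back-of-envelope check of the '80 years without breaks' claim,
# assuming a fast reader at 200 words per minute (my assumption)
words_per_min = 200
mins_per_year = 60 * 24 * 365

# Implied size of the year-2000 slice of the corpus
implied_words_2000 = words_per_min * mins_per_year * 80
print(implied_words_2000)  # 8409600000 -- roughly 8.4 billion words

# For scale: the full 500-billion-word corpus at the same nonstop pace
total_words = 500e9
print(round(total_words / (words_per_min * mins_per_year)))  # 4756 years
```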

Not all isolated strings of characters in texts are real words – some are common numbers, others abbreviations or typos. In fact, 51 percent of the character strings in 1900, and 31 percent in 2000, were ‘non-words’. ‘I really have trouble believing that’, admits Embleton. ‘If it’s true, it would really shake some of my foundational thoughts about English.’

By this count, the English language has grown by over 70 percent during the past 50 years, and around 8,500 new words are being added each year. Moreover, only about half of the words currently in use are apparently documented in standard dictionaries. ‘That high amount of lexical ‘dark matter’ is also very hard to believe, and would also shake some foundations’, says Embleton, adding ‘I’d love to see the data.’
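The two growth figures are mutually consistent, as a line of arithmetic shows (an illustration using the rounded numbers quoted above, not the paper's exact data):

```python
# Consistency check on the rounded figures quoted above:
# ~1 million words in 2000, ~8,500 new words per year for 50 years
words_2000 = 1_000_000
new_per_year = 8_500

words_1950 = words_2000 - new_per_year * 50   # 575,000
growth = (words_2000 - words_1950) / words_1950
print(round(growth * 100))  # 74 -- i.e. 'over 70 percent' growth
```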

In principle she can, because the researchers have made their database public. This will allow others to explore the huge number of potential questions it suggests, not just about word use but about cultural history. Michel and colleagues offer two such examples, concerned with fame and censorship.

They say that actors reach their peak of fame, as recorded in references to names, around the age of 30, while writers take a decade longer but achieve a higher peak. ‘Science is a poor route to fame’, they say. Physicists and biologists who achieve fame do so only late in life, while ‘even at their peak, mathematicians tend not to be appreciated by the public.’

Nation-specific subsets of the data can show how references to ideas, events or people drop out of sight due to state suppression. For example, the Jewish artist Marc Chagall virtually disappears from German writings in 1936-1944 (while remaining prominent in the English language), and ‘Trotsky’ and ‘Tiananmen Square’ similarly vanish in Russian and Chinese works respectively. The authors also look at trends in references to feminism, God, diet and evolution.
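The method here is essentially a comparison of word-frequency time series between language sub-corpora. A toy sketch shows the shape of such a ‘suppression signal’; all the numbers are invented for illustration and are not the paper’s data:

```python
# Toy illustration of a 'censorship signature': compare a name's usage
# frequency across two language sub-corpora over time.
# All values are invented for illustration, not the paper's data.
years   = [1930, 1934, 1938, 1942, 1946, 1950]
english = [0.8, 0.9, 1.0, 1.1, 1.2, 1.3]    # mentions per million words
german  = [0.7, 0.8, 0.1, 0.05, 0.6, 0.9]

for y, e, g in zip(years, english, german):
    ratio = g / e
    flag = '  <-- suppressed?' if ratio < 0.3 else ''
    print(f'{y}: German/English ratio = {ratio:.2f}{flag}')
```

With these invented numbers, only the wartime years 1938 and 1942 fall below the (arbitrary) threshold, mimicking the disappearance of a suppressed name from one sub-corpus while it persists in another.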

‘The ability, via modern technology, to look at just so much at once really opens horizons’, says Embleton. However, Hudson cautions that making effective use of such a resource will require skill and judgement, not just number-crunching.

‘How this quantitative evidence is generated – in response to what questions – and how it is interpreted are the most important factors in forming conclusions’, she says. ‘Quantitative evidence of this kind must always address suitably framed general questions, and be employed alongside qualitative evidence and reasoning, or it will not be worth a great deal.’

Reference
1. Michel, J.-B. et al. Science doi:10.1126/science.1199644.

Thursday, December 09, 2010

Debye's dirty hands?

I have written a news story for Nature on new findings about the life of Peter Debye, who has been accused recently of colluding with the Nazis in the run-up to the Second World War. It’s very rich material (even if the new ‘revelations’ are rather indirect and add only a speculative element to the tale); I have written a piece on this for Chemistry World too, but had better wait for that to appear before posting it here. This pre-edited version is not as well structured as the final story, but contains more of the details and anecdotes, so here it is anyway. This is clearly an issue on which feelings run high, so I look forward (I think) to the feedback.

**********************************************

Peter Debye, the Dutch 1936 chemistry Nobel Laureate recently discredited by allegations of being a Nazi sympathizer, could in fact have been an anti-Nazi informer to the Allies during the approach to the Second World War, according to a new analysis of his private correspondence.

In a paper in the journal Ambix, retired chemist Jurrie Reiding in the Netherlands describes archival documents suggesting that Debye might have supplied information to a spy for the British intelligence agency MI6 in Berlin [1].

Although the new evidence is circumstantial, it adds to a mounting case for rehabilitating Debye’s name. When the Nazi links and accusations of anti-Semitism were asserted four years ago, two Dutch universities expunged Debye’s name from a research institute and an annual prize. The new paper ‘is an important and welcome contribution to the debate, which can help in arriving at a more balanced judgement’, says Ernst Homburg, a science historian at the University of Maastricht.

Debye, who worked for most of his pre-war career in Germany, became chairman of the German Physical Society (DPG) in 1937. Four years earlier, a law introduced by Hitler’s Nazi regime demanded the dismissal of all Jewish university professors. Among those who lost their posts was the pioneering nuclear physicist Lise Meitner at the University of Berlin.

In December 1938 the DPG board decided to expel the few remaining Jewish members. Debye sent a letter to members explaining this, citing ‘circumstances beyond our control’ and signing off with ‘Heil Hitler!’ ‘Under the circumstances of those days, it was almost impossible not to write such a letter’, says Homburg.

Nonetheless, when this letter was described in an article titled ‘Nobel Laureate with dirty hands’, published in the Dutch newspaper Vrij Nederland in January 2006 in association with a book (in Dutch) called Einstein in Nederland by the journalist Sybe Rispens, it provoked a media controversy. The alarm was such that the University of Utrecht removed Debye’s name from the institute for nanomaterials science, while the University of Maastricht, in Debye’s home town, withdrew its involvement in the annual Debye Prize for scientific research, sponsored by industrial benefactors the Hustinx Foundation.

This caused a storm of protest, not least from the researchers of the former Debye Institute in Utrecht. Chemist Héctor Abruña of Cornell University, where Debye worked after coming to the US in 1940, criticized the ‘rush to judgement’ and said that a university enquiry there found no evidence for the allegations.

As a result the Dutch Ministry of Education commissioned the Dutch Institute for War Documentation (NIOD) to investigate the Debye affair. Its report, released in 2007, softened the accusations to say that Debye had been guilty of ‘opportunism’ under the Nazis, but accused him of ‘keeping the back door open’ by secretly sustaining contacts with Nazi Germany while in the US.

All the same, in 2008 the Dutch government committee advised the universities of Utrecht and Maastricht to continue using Debye’s name, since the evidence of his ‘bad faith’ was equivocal. The Debye Institute at Utrecht was reinstated, and the Maastricht prize is due to be awarded again next year. However, according to historian of chemistry Peter Morris, who edits Ambix, ‘in the Netherlands and to a lesser extent the USA this affair severely damaged Debye’s reputation.’

Critics of the Dutch universities’ initial decision have cited various arguments why Debye should not be judged too harshly or rashly. When he was chosen by the resolutely anti-Nazi Max Planck to be director of the Kaiser Wilhelm Institute of Physics (KWIP) in Berlin – a post that he occupied from 1935 until 1939 – it was precisely because he was non-German and was thought able to resist Nazi interference. Debye insisted that the place be named the Max Planck Institute when it finally opened in 1938. When the Nazis objected, Debye covered the name carved in stone over the entrance with a wooden plank – a pun that worked in German too.

And Debye accepted his Nobel Prize against the explicit wishes of the Nazis, who had commanded all Germans not to do so. He helped Meitner escape to Holland in 1938, and the Nazis opposed Debye’s chairmanship of the DPG because they considered him too friendly towards Jews. In 1940 Debye sailed to the US to give a series of prestigious lectures at Cornell – where he then stayed until his death in 1966. He aided the US war effort enthusiastically, especially through his work on polymers and synthetic rubber.

‘There were already enough arguments for Debye’s ‘rehabilitation’ before this article’, says Homburg, who calls Rispens’ book ‘heavily flawed’. But now Reiding adds a new narrative to the defence.

Debye, he says, was a friend of Paul Rosbaud, an Austrian working at the KWIP in Berlin, who was recruited by the British secret service to supply scientific information including details of the development of the V1 and V2 rockets and the German attempts to develop an atomic bomb. Rosbaud, who loathed the Nazis, remained in Berlin throughout the war, although even now information about his activities under the codename ‘Griffin’ remains classified.

Because of his consultancy with the academic Berlin publisher Springer Verlag, Rosbaud was very well connected in German science and had known Debye since at least 1930. He too played a key role in getting Meitner out of Germany, and Debye maintained the relationship with Rosbaud after the war. ‘The close friendship between Rosbaud and Debye makes it almost unquestionable that Debye was an anti-Nazi’, Reiding says.

And he points out that, as testified by other scientists to the FBI in the 1940s, Debye would have been party to some highly sensitive information about the German war technology during his time in Berlin. ‘Therefore’, Reiding says, ‘the hypothesis that Debye was a secret informant for Rosbaud does not appear too bold.’

Although Morris thinks that ‘further evidence would be needed before this case could be proved beyond doubt’, he adds that ‘I feel that there was a rush to judgement that not only failed to take into account all the aspects of Debye’s complex life but also failed to give full weight to the ambiguous nature of life under Nazi rule.’

Others question whether the new details add much to the story. ‘There seem to be two camps: those who hate Debye and deplore his actions as president of the DPG, and those who think he was a saint’, says Henk Lekkerkerker of the Debye Institute. ‘Both opinions are misleading, and the professional historians paint a more subtle and accurate picture.’

Perhaps ultimately a clue to Debye’s position lies in a letter that he wrote to the physicist Arnold Sommerfeld in December 1939, just before he left Germany for good. His aim, he said, was ‘not to despair and always be ready to grab the Good which whisks by, without granting the Bad any more room than is absolutely necessary. That is a principle of which I have already made much use.’

1. J. Reiding, Ambix 57, 275-300 (2010).

Tuesday, November 30, 2010

Chemists to the rescue?

Here's my Crucible article for the December issue of Chemistry World, which arose when I chaired a recent talk by John Emsley at the RSC.

***************************

Can chemists save the world? In his new book, targeted at the 2011 Year of Chemistry and published by the RSC, John Emsley argues in his characteristically inspirational manner that chemical innovations in areas such as biofuels, food production and clean water treatment can deliver the promise of the book’s title: A Healthy, Wealthy, Sustainable World. Emsley makes no apologies about his crusading, even propagandizing agenda, for he rightly points out that many of the biggest global challenges, from climate change to the end of oil, demand the expertise of chemistry, making it potentially the key science of the twenty-first century.

But Emsley concedes that his survey of the wonderful things that chemists have achieved in sustainable technology – converting rapeseed oil to biodiesel or to plastics feedstocks, say – does not look in depth at the economic picture. It is a frequent and valid objection to technical innovation: it may be all very well, but how much does it cost in comparison with what we can do already? What’s the financial motivation, say, for China to abandon its abundant coal reserves for biofuels?

There is no blanket answer to such economic conundrums, but common to them all is the question of whether one can rely on market mechanisms to generate incentives for a desirable technology, or whether it should be nurtured by governmental or regulatory intervention. Here, as just about everywhere else right now, the issue is how ‘big’ government should be.

In the wake of the financial crisis, market fundamentalists sound less credible asserting that the market knows best, especially when it comes to societal benefits: the recent boom years were not so much generated by market mechanisms as bought on credit. But it seems equally clear that highly managed economies which subsidize unprofitable enterprises are unsustainable and risk stifling innovation. A middle course has been successfully steered by the German government’s investment in photovoltaic (PV) energy generation, where money for research and breaks for commercial companies are coupled to a concerted effort to build a market for solar power through a feed-in tariff: a guaranteed, highly competitive price for energy generated from solar panels and fed into the grid. This stimulus recognizes that new, desirable technologies may need a hand to get off the ground but need eventually to become independent. With government assistance, the German PV industry has created around 50,000 jobs, brought revenues of €5.6 billion in 2009, and made Germany the largest national source of PV power in the world. By 2020, up to 10 percent of Germany’s energy may be solar.

This is one reason why it is unrealistic to dismiss the prospects for an innovative technology on the basis that its (perhaps less desirable) rivals can currently do things more cheaply. There is a financial component to changing attitudes. Encouraging investment in a fledgling innovation can ultimately lower its price both by enabling efficiencies of scale and by supporting research into cost-cutting improvements. That was amply demonstrated by the Human Genome Project (HGP): the international decision that it was a Good Thing created the opportunity for new sequencing technologies that have reduced the cost and increased the speed of decoding an individual’s genome by orders of magnitude. Simply put, it became financially worthwhile for companies such as Illumina (spearheaded by chemists David Walt and Anthony Czarnik) to devise radical new sequencing methods. As a result, the economic hurdle to realizing the potential medical benefits of genome sequencing was lowered.

At the same time, the race between the publicly funded HGP and a private enterprise by Celera Genomics Inc., the company founded by entrepreneur Craig Venter, shows that competition can accelerate innovation. What’s more, through canny marketing the HGP engineered a favourable climate for investment and public endorsement, creating what economist Monika Gisler at ETH in Zurich and her coworkers have called a ‘social bubble’ [1]. They say that ‘governments can take advantage of the social bubble mechanism to catalyze long-term investments by the private sector, which would not otherwise be supported.’ Of course, there is a fine line between supportive publicity and hype. But this is another reminder that promising new technologies, like children, flourish best when they are neither left to fend for themselves nor mollycoddled indefinitely.

1. M. Gisler, D. Sornette & R. Woodward, preprint http://arxiv.org/abs/1003.2882 (2010).

Monday, November 29, 2010

Flight of fantasy

The chorus of disapproval that greeted Howard Flight’s remark about how cuts in child benefits will encourage ‘breeding’ among the lower social classes (or as Flight called them, ‘those on benefits’) has left the impression that such comments are now to be judged in a historical vacuum, purely on the basis of whether or not they accord with a current consensus on ‘appropriateness’, or what some would sneeringly call political correctness. This solipsistic perspective is dangerously shallow.

The media coverage has largely ignored the obvious connection between Flight’s comment and the argument for eugenics originally advanced by Darwin’s cousin Francis Galton in the late nineteenth century and pursued by intellectuals on both the left and the right for a considerable part of the twentieth. Galton voiced explicitly what Flight had at least the restraint (or the nous) only to imply: given the chance, the inferior stock among the lower classes will breed like rabbits and thereby corrupt the species. Galton worried about the ‘yearly output by unfit parents of weakly children who are constitutionally incapable of growing up into serviceable citizens, and who are a serious encumbrance to the nation.’ If the harshness of their circumstances were to be alleviated by welfare, he said, then natural selection would no longer constrain the proliferation of ‘bad genes’ throughout society. In a welfare state, the gene pool of humankind would therefore degenerate.

Some eugenicists felt that the answer was to encourage the genetically superior echelons of society to breed more: educated, middle-class women (who were beginning to appreciate that there might be more to life than endless child-rearing) had a national duty to produce offspring. Some biologists, such as Julian Huxley and J.B.S. Haldane, welcomed the prospect of ectogenesis – gestation of fetuses in artificial wombs – so that it might liberate ‘good’ mothers from that onerous obligation (presumably nannies could take over once the child was ‘born’). Even conservatives who regarded such technologies with distaste felt compelled to agree that they offered the best prospect for maintaining the vitality of the species.

This approach was called ‘positive eugenics’: redressing the imbalance by propagating good genes. It is one that Flight apparently endorses, in his concern that we should not discourage the middle classes from breeding by taking away their cash perks. But the other option, also advocated by Galton, was negative eugenics: preventing breeding among the undesirables. In the many US states that introduced forced-sterilization programmes in the early twentieth century (and which ultimately sterilized around 60,000 people), this meant the mentally unstable or impaired (‘idiots and imbeciles’), as well as perhaps the ‘habitually’ unemployed, criminals and drunkards. In Nazi Germany it came also to mean those whose ‘inferiority’ was a matter of race. (There was no lack of racism in the US programmes either.)

Liberal eugenicists such as Haldane and Huxley were rather more nuanced than Flight. They argued that eugenic policies made sense only on a level playing field: while social inequalities held individuals back, there was no guarantee that ‘defective’ genes would be targeted. But once that levelling was effected, what Huxley referred to chillingly as ‘nests of defective germ plasm’ should be shown no mercy. As he put it, “The lowest strata, allegedly less well endowed genetically, are reproducing relatively too fast. Therefore birth-control methods must be taught them; they must not have too easy access to relief or hospital treatment lest the removal of the last check on natural selection should make it too easy for children to be reproduced or to survive; long unemployment should be a ground for sterilization, or at least relief should be contingent upon no further children being brought into the world.” Flight was at least socially aware enough to pull his punches in comparison to this.

Although it was mostly the taint of Nazism that put paid to eugenics (not to mention the emergence of the concept of human rights), the scientific case was eventually revealed to be spurious too, not least because there is no good reason to think that complex traits such as intelligence and sociability have isolable genetic origins that can be refined by selective breeding.

Yet the survival nonetheless of Galton’s ideas among the likes of Flight and, in previous decades, Sir Keith Joseph, should not be mistaken for a failure to keep abreast of the science. I should be surprised if Flight has even heard of Galton, and I suspect he would be surprised himself to find his remark associated with a word – eugenics – that now is (wrongly) often considered to be a product of fascist genocidal fantasies. Galton was after all only providing pseudo-scientific justification for the prejudices about breeding that the aristocracy had espoused since Plato’s time, and it is surely here that the origins of Flight’s remark lie. That is why what was evidently for him a casual truism represents more than just a lapse of decorum, sensitivity or political acumen. It implies that David Cameron does not merely have the poor judgement to favour loose cannons, but that he is still heir to a deep-rooted tradition of class-based bigotry.

Friday, November 26, 2010

Funny things that happened on my way to the Forum

This Sunday I appear on the BBC World Service’s ‘ideas’ programme The Forum. In principle I am there to discuss The Music Instinct, but it’s actually a round table discussion about the issues raised by all the guests; my fellows on this occasion are the bio-nanotechnologist Sam Stupp and the polemicist and writer P. J. O’Rourke, whose new book is the characteristically titled Don’t Vote: It Only Encourages the Bastards. I have followed Sam’s work for nigh on two decades: he designs peptides that self-assemble into nanostructures which can act as biodegradable scaffolds for tissue regeneration. It is very neat, and I relished the opportunity to see Sam again. O’Rourke embodies the gentlemanly, amusing Republican whose spine-chilling views on such things as gun laws and the Tea Party are moderated by such charm and worldliness (he is no friend of US xenophobes) that you feel churlish to take issue. I was simply happy to establish that his opposition to Big Government applies only to nations and not to his own home. He is also rather funny, as right-leaning polemicists often are when they are not swivel-eyed. In any event, the programme deserves to be better known – rarely does one get the chance to discuss ideas at such leisure in the broadcast media, even on the beloved BBC.

PS: I just got an update with a direct link to the site for this programme. It includes mugshots, but I can't help that now. Gone are the days when it didn't matter how you looked on the radio.

Monday, November 15, 2010

Beyond the edge of the table

Here’s my Crucible column for the November Chemistry World. It gets a bit heavy-duty towards the end – not often now (happily) that I have to go and read (and pretend to understand) textbooks about quantum electrodynamics. But by happy coincidence, I was introduced recently to the numerology (and Pauli’s enthusiasm for it) by a talk at the Royal Institution by Arthur I. Miller, which I had the pleasure of chairing.

***********************************************************
Does the Periodic Table run out? Folk legend asserts that Richard Feynman closed the curtains on the elements after the hypothetical element 137, inelegantly named untriseptium, or more appealingly dubbed feynmanium in his honour.

As physicists (and numerologists) will know, that is no arbitrary cutoff. 137 is an auspicious number – so much so that Feynman himself is said to have recommended that physicists display it prominently in their offices as a reminder of how much they don’t know. Wolfgang Pauli, whose exclusion principle explained the structure of the Periodic Table, was obsessed with the number 137, and discussed its significance over fine wine with his friend and former psychoanalyst Carl Jung – a remarkable relationship explored in Arthur I. Miller’s recent book Deciphering the Cosmic Number (W. W. Norton, 2009). When Pauli was taken ill in Zürich with pancreatic cancer in 1958 and was put in hospital room number 137, he was convinced his time had come – and he was right. For Carl Jung 137 was significant as the number associated with the Jewish mystical tradition called the Cabbalah, as pointed out to physicist Victor Weisskopf by the eminent Jewish scholar Gershom Scholem.

Numerology was not confined to mystics, however, for the ‘explanation’ of the cosmic significance of 137 offered by the astronomer Arthur Eddington was not much more than that. Yet Eddington, Pauli and Feynman were captivated by 137 for the same reason that prompted Feynman to suggest it was where the elements end. For the inverse, 1/137, is almost precisely the value of the so-called fine-structure constant (α), the dimensionless quantity that defines the strength of the electromagnetic interaction – it is in effect the ratio of the square of the electron’s charge to the product of the speed of light and the reduced Planck’s constant.
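For anyone who wants to see where the magic number comes from, here is a quick back-of-envelope check (my own illustration, not from any of the sources above) using the standard CODATA values of the constants; in SI units the ratio picks up a factor of 4πε0:

```python
# Sketch: compute the fine-structure constant alpha from CODATA values.
# In SI units alpha = e^2 / (4*pi*eps0*hbar*c); in Gaussian units the
# 4*pi*eps0 factor is absorbed, giving the "charge squared over hbar*c" form.
import math

e = 1.602176634e-19      # elementary charge, C (exact since 2019)
hbar = 1.054571817e-34   # reduced Planck constant, J s
c = 299792458.0          # speed of light, m/s (exact)
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(alpha)      # ~0.0072974
print(1 / alpha)  # ~137.036 -- close to, but not exactly, 137
```

The output makes Pauli’s obsession concrete: the inverse comes out at about 137.036, tantalizingly near a whole number but not quite there.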

Why 137? ‘Nobody knows’, Feynman admitted, adding that ‘it’s one of the greatest damn mysteries of physics: a magic number that comes to us with no understanding by man. You might say the hand of God wrote that number, and we don’t know how He pushed his pencil.’ It’s one of the constants that must be added to fundamental physics by hand. Werner Heisenberg was convinced that the problems then plaguing quantum theory would not go away until 137 was ‘explained’. But neither he nor Pauli nor anyone else has cracked the problem. The fact that the denominator of the fine structure constant is not exactly 137, but around 137.035, doesn’t diminish the puzzle, and now this constant is at the centre of arguments about ‘fine-tuning’ of the universe: if it were just 4 percent different, atoms (and we) could not exist.

But was Feynman right about untriseptium? His argument hinged on the fact that α features in the solution of the Dirac equation for the ground-state energy of an atom’s 1s electrons. In effect, when the atomic number Z is equal to or greater than 1/α, the energy becomes imaginary, or in other words, oscillatory – there is no longer a bound state. This doesn’t in itself actually mean that there can be no atoms with Z>137, but rather, there can be no neutral atoms.
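Feynman’s point-nucleus argument is easy to reproduce numerically. In the Dirac solution the 1s ground-state energy is E = mc²√(1 − (Zα)²), which turns imaginary once Zα exceeds 1. The snippet below (my own sketch of the textbook formula, not code from any source cited here) shows the cutoff landing between Z = 137 and Z = 138:

```python
# Sketch of Feynman's cutoff: the Dirac 1s energy for a point nucleus,
# E = m*c^2 * sqrt(1 - (Z*alpha)^2), loses its real bound-state value
# once Z*alpha > 1, i.e. just above Z = 137.
import cmath

alpha = 1 / 137.035999  # fine-structure constant
mc2 = 510998.95         # electron rest energy, eV

def e_1s(Z):
    """Dirac ground-state energy (eV) for nuclear charge Z, point nucleus."""
    return mc2 * cmath.sqrt(1 - (Z * alpha) ** 2)

print(e_1s(137))  # still purely real: a genuine bound state
print(e_1s(138))  # imaginary part appears: no bound state for a point charge
```

As the article goes on to explain, this cutoff is an artefact of treating the nucleus as a point; a finite-sized nucleus pushes the critical charge up to around Z = 173.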

However, Feynman’s argument was predicated on a Bohr-type atom in which the nucleus is a point charge. A more accurate prediction of the limiting Z has to take the nucleus’s finite size into account, and the full calculation changes the picture. Now the energy of the 1s orbital doesn’t fall to zero until around Z=150; but actually that is in itself relatively trivial. Even though the bound-state energy becomes negative at larger Z, the 1s electrons remain localized around the nucleus.

But when Z reaches around 173, things get complicated [1]. The bound-state energy then ‘dives’ into what is called the negative continuum: a vacuum ‘sea’ of negative-energy electrons predicted by the Dirac equation. Then the 1s states mix with those in the continuum to create a bound ‘resonance’ state – but the atom remains stable. If the atom’s 1s shell is already ionized, however, containing a single hole, then the consequences are more bizarre: the intense electric field of the nucleus is predicted to pull an electron spontaneously out of the negative continuum to fill it [2]. In other words, an electron-positron pair is created de novo, and the electron plugs the gap in the 1s shell while the positron is emitted.

This behaviour was predicted in the 1970s by Burkhard Fricke of the University of Kassel, working with nuclear physicist Walter Greiner and others [1]. Experiments were conducted during that and the following decade using ‘pseudo-atoms’ – diatomic molecules of two heavy nuclei created in ion collisions – to see if analogous positron emission could be observed from the innermost molecular rather than atomic orbitals. It never was, however, and exactly what would happen for Z>173 remains unresolved.

All the same, it seems that Feynman’s argument does not after all prohibit elements above 137, or even above 173. ‘The Periodic System will not end at 137; in fact it will never end!’, says Greiner triumphantly. Whatever mysteries are posed by the spooky 137, this is apparently not one of them.

1. B. Fricke, W. Greiner & J. T. Waber, Theor. Chim. Acta 21, 235-260 (1971).
2. W. Greiner & J. Reinhardt, Quantum Electrodynamics 4th edn (Springer, Berlin, 2009).

Some like it hot

I have been slack with my postings over the past couple of weeks, so here comes the catching up. First, a Muse for Nature News on a curious paper in PNAS on the origin of life, which seemed to have a corollary not explored by the authors… (I can’t link to the PNAS paper, as it’s not yet been put online, and in the meantime languishes in that peculiar limbo that PNAS commands.)

Heat may have been necessary to ensure that the first prebiotic reactions didn’t take an eternity. If so, this could add weight to the suggestion that water is essential for life in the cosmos.

Should we be surprised to be here? Some scientists maintain that the origin of life is absurdly improbable – Nobel laureate biologist George Wald baldly stated in 1954 that ‘one has only to contemplate the magnitude of [the] task to concede that the spontaneous generation of a living organism is impossible’ [1]. Yet others look at the size of the cosmos and conclude that even such extremely low-probability events are inevitable.

The apparent fine-tuning of physical laws and fundamental constants to enable life’s existence certainly presents a profound puzzle, which the anthropic principle answers only through the profligate hypothesis of multiple universes of which we have the fortune to occupy one that is habitable. But even if we take the laws of nature as we find them, it is hard to know whether or not we should feel fortunate to exist.

One might reasonably argue that the question has little meaning while we still have only a few hundred worlds to compare, about most of which we know next to nothing (not even whether there is, or was, life on our nearest neighbour). But one piece of empirical evidence we do have seems to challenge the notion that the origin of terrestrial life was a piece of extraordinarily good fortune: the geological record implies that life began in a blink, almost the instant the oceans were formed. It is as if it was just waiting to happen – as indeed some have suggested [2]. While Darwinian evolution needed billions of years to find a route from microbe to man, it seems that going from mineral to microbe needs barely a moment.

According to a paper in the Proceedings of the National Academy of Sciences USA by Richard Wolfenden and colleagues at the University of North Carolina, that may be largely a question of chemical kinetics [3]. Just about all the key biochemical processes in living organisms are speeded up by enzyme catalysis; otherwise they would happen too slowly or indiscriminately to make metabolism and life feasible. Some key processes, such as reactions involved in the biosynthesis of nucleic acids, happen at a glacial pace without enzymes. So how did the earliest living systems bootstrap themselves to the point where they could sustain and reproduce themselves with enzymatic assistance?

The researchers think that temperature was the key. They point out not only that reactions speed up with temperature more than is commonly appreciated, but that the slowest reactions speed up the most: a change from 25 °C to 100 °C, for example, increases the rate of some prebiotically relevant reactions by 10 million-fold.
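To get a feel for what a 10-million-fold speed-up implies, one can invert the standard Arrhenius rate law, k = A·exp(−Ea/RT). This is my own illustrative arithmetic, not a calculation from the paper, but it shows that such a factor corresponds to an activation energy of roughly 200 kJ/mol, i.e. exactly the kind of slow, high-barrier reaction at issue:

```python
# Sketch: what activation energy Ea makes a reaction run ~10^7 times
# faster at 100 C than at 25 C, assuming simple Arrhenius kinetics
# k = A * exp(-Ea / (R*T))?
import math

R = 8.314                # gas constant, J/(mol K)
T1, T2 = 298.15, 373.15  # 25 C and 100 C, in kelvin

def rate_ratio(Ea):
    """k(T2)/k(T1) for activation energy Ea (J/mol); prefactor A cancels."""
    return math.exp(-Ea / (R * T2)) / math.exp(-Ea / (R * T1))

# Invert rate_ratio(Ea) = 1e7 for Ea:
Ea = R * math.log(1e7) / (1 / T1 - 1 / T2)
print(Ea / 1000)      # ~199 kJ/mol: a high barrier, hence a very slow reaction
print(rate_ratio(Ea)) # ~1e7, recovering the quoted speed-up
```

The same formula also shows why the slowest (highest-barrier) reactions gain the most from heating: the ratio grows exponentially with Ea.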

There’s reason to believe that life may have started in hot water, for example around submarine volcanic vents, where there are abundant supplies of energy, inorganic nutrients and simple molecular building blocks. Some of the earliest branches in the phylogenetic tree of life are occupied by thermophilic organisms, which thrive in hot conditions. A hot, aqueous origin of life is probably now the leading candidate for this mysterious event.

This alone, then, could reduce the timescale needed for a primitive biochemistry to get going from millions of years to mere tens of years. What’s more, say Wolfenden and colleagues, some of the best non-enzyme catalysts of slow metabolic reactions, which might have served as prebiotic proto-enzymes, become more effective as the temperature is lowered. If that’s what happened on the early Earth, then once catalysis took over from simple temperature-induced acceleration, it would not have suffered as the environment cooled or as life spread to cooler regions.

If this scenario is right, it could constrain the kinds of worlds that can support life. We know that watery worlds can do this; but might other simple liquids act as solvents for different biochemistries? The candidates generally have lower freezing points than water: the liquid hydrocarbons of Saturn’s moon Titan, ammonia (on Jupiter, say), formamide (HCONH2) or water-ammonia mixtures. One can enumerate reasons why in some respects these ‘cold’ liquids might be better solvents for life than water [4]. But if the rates of prebiotic reactions were a limiting factor in life’s origin, it may be that colder seas could never move things along fast enough.

Hotter may not be better either: quite aside from the difficulty of imagining plausible biochemistries in molten silicates, complex molecules would tend more readily to fall apart in extreme heat both because bonds snap more easily and because entropy favours disintegration over union. All of which could lend credence to the suggestion of biochemist Lawrence Henderson in 1913 that water is peculiarly biophilic [5]. In the introduction to a 1958 edition of Henderson’s book, Wald wrote ‘we now believe that life… must arise inevitably wherever it can, given enough time.’ But perhaps what it needs is not so much enough time, but enough heat.

References
1. G. Wald, Sci. Am. 191, 44-53 (1954).
2. H. J. Morowitz & E. Smith, Complexity 13, 51-59 (2007).
3. R. B. Stockbridge, C. A. Lewis Jr, Y. Yuan & R. Wolfenden, Proc. Natl Acad. Sci. USA doi:10.1073/pnas.1013647107.
4. S. A. Benner, in Water and Life (eds R. M. Lynden-Bell, S. Conway Morris, J. D. Barrow, J. L. Finney & C. L. Harper Jr), Ch. 10 (CRC Press, Boca Raton, 2010).
5. L. J. Henderson, The Fitness of the Environment (Macmillan, New York, 1913).