Anxiety about the e-future – and in particular who it is going to make redundant – seems suddenly to be bursting out of every corner of the media. There was David Byrne in the Guardian Saturday Review recently worrying that Spotify and Pandora are going to eliminate the income stream for new musicians. Will Self, reviewing film critic Mark Kermode’s new book in the same supplement, talked about the ‘Gutenberg minds’ like Kermode (and himself) who are in denial about how “our art forms and our criticisms of those art forms will soon belong only to the academy and the museum” – digital media are not only undermining the role of the professional critic but changing the whole nature of what criticism is. Then we have Dave Eggers’ new novel The Circle, a satire on the Google/Facebook/Amazon/Apple takeover of everything and the tyranny of social media. Meanwhile, Jonathan Franzen rails against the media dumbing-down of serious discourse about anything, anywhere, as attention spans shrink to nano-tweet dimensions.
Well, me, I haven’t a clue. I’m tempted to say that this is all a bit drummed up and Greenfield-esque, and that I don’t see those traits in my kids, but then, they are my kids and cruelly deprived of iPads and iPhones, and in any case are only 3 and 8. To say any such thing is surely to invite my words to come back in ten years’ time and sneer at my naivety. I’ve not the slightest doubt that I’m in any case wedded to all kinds of moribund forms, from the album (of the long-playing variety) to the fully punctuated text to the over-stuffed bookshelf.
But not unrelated to this issue is Philip Hensher’s spat with an academic over his refusal to write an unpaid introduction to an academic text. Hensher’s claim that it is becoming harder for authors to make a living and to have any expectation of getting paid (or paid in any significant way) for much of what they have to do is at least partly a concern from the same stable as Byrne’s – that we now have a culture that is used to getting words, music and images for next to nothing, and there is no money left for the artists.
They’re not wrong. The question of literary festivals is one that many authors are becoming increasingly fed up with, as the Guardian article on Hensher acknowledges. Personally I almost always enjoy literary festivals, and will gladly do them if it’s feasible for my schedule. The Hay Festival, which Guy Walters grumbles about, is one of the best – always fun (if usually muddy), something the family can come to, and a week’s worth of complimentary tickets seems considerable compensation for the lack of a fee. (And yes, six bottles of wine – but at least they’re from Berry Bros, and many literary festivals don’t even offer that.) But I’m also conscious that for middling-to-lower-list writers like me, it is extremely hard to say no to these things even if we wanted to. There’s the fact that publishers would be ‘disappointed’ and probably in the end disgruntled. But more than anything, there’s the sad egotistic fear that failing to appear, or even to be invited, means that you’re slipping closer to the edge of the ‘literary community’. I suspect that this fear, more than anything, is what has allowed literary festivals to proliferate so astonishingly. Well, and the fact that I’m probably not alone in being very easily satisfied (which might be essentially the same as saying that if you’re not a big name, you’re not hard to flatter). Being put up in that lovely country house hotel in Cumbria and given an evening meal has always seemed to me perfectly adequate remuneration for talking at the Words by the Water Festival (ah, so kind of you to ask again, yes I’d love to…).
But the Cambridge professor calling Hensher “priggish and ungracious” for refusing to write for free is another matter. Hensher was in fact far more gracious in response than he had any reason to be. When I am regularly asked to give up a day’s work to travel to give a talk at some academic institution (“we will of course pay your travelling costs”), I generally consider it to be a reflection of the fact that (i) academic departments simply don’t have a budget for paying speakers, and (ii) academics can very easily forget that, whereas they draw their salary while attending conferences and delivering seminars, writers don’t have a salary except for (sometimes) when they write. And so I often go and do it anyway, if I like the folks who have invited me, and/or think it will be interesting. Apart from anything else, it is good to get out and meet people. Same with unpaid writing, of which I could do a fair bit if I agreed to it all: I’ll contribute an article to a special issue or edited volume if I feel it would be interesting to do so, but it is rare indeed that there will be any acknowledgement that, unlike an academic, I’d then be working for free. But for a writer to be called ‘ungracious’ for refusing an ‘invitation’ to do such unpaid work is pretty despicable.
Thursday, October 24, 2013
Tuesday, October 22, 2013
Before small worlds
Here is my latest piece for BBC Future. I have also posted a little comment on the work on a YouTube channel that I am in the process of creating: see here. It’s an experiment, about which I will say more later.
____________________________________________________________
“Everyone on this planet is separated by only six other people”, claims a character in John Guare’s 1990 play Six Degrees of Separation, which provided us with the defining image of our social networks. “It’s a small world”, we say when we meet someone at a party who turns out to share a mutual friend. And it really is: the average number of links connecting you to any other random person might not be exactly six – it depends on how you define links, for one thing – but it is a small number of about that size.
But has it always been this way? It’s tempting to think so. Jazz musicians in the early 20th century were united by barely three degrees of separation. Much further back, scientists in the seventeenth century maintained a dense social network via letters, as did humanist scholars of the Renaissance. But those were specialized groups. Intellectual and aristocratic elites in history might have all known one another, but was it a small world for ordinary folk too, when mail deliveries and road travel were hard and dangerous and many people were illiterate anyway? That’s what networks expert Mark Newman of the University of Michigan at Ann Arbor and his coworkers have set out to establish.
The modern understanding of small-world social networks has come largely from direct experiments. Guare took his idea from experiments conducted in the late 1960s by social scientist Stanley Milgram of Harvard University and his coworkers. In one study they attempted to get letters to a Boston stockbroker by sending them to random people in Omaha, Nebraska, bearing only the addressee’s name and profession and the fact that he worked in Boston. Those who received the letter were asked to forward it to anyone they knew who might be better placed to help it on its way.
Most of the letters didn’t arrive at all. But of those that did, an average of only six journeys were needed to get them there. A much larger-scale re-run of the experiment in 2003 using email forwarding found an almost identical result: the average ‘chain length’ for messages delivered to the target was between 5 and 7 [P. S. Dodds, R. Muhamad & D. J. Watts, Science 301, 827 (2003)].
Needless to say, it’s not possible to conduct such epistolary experiments for former ages. But there are other ways to figure out what human social networks in history looked like. These networks don’t only spread news, information and rumour, but also things that are decidedly less welcome, such as disease. Many diseases are passed between individuals by direct, sometimes intimate contact, and so the spread of an epidemic can reflect the web of human contacts on which it happens.
This is in fact one of the prime motivations for mapping out human contact networks. Epidemiologists now understand that the structure of the network – whether it is a small world or not, say – can have a profound effect on the way a disease spreads. For some types of small world, infectious diseases can pervade the entire population no matter how small the chance of infection is, and can be very hard to root out entirely once this happens. Some computer viruses are like this, lurking indefinitely on a few computers somewhere in the world.
Newman and colleagues admit that networks of physical contact, which spread disease, are not the same as networks of social contact: you can infect people you don’t know. But in earlier times most human interactions were conducted face to face, and in relatively small communities people rarely saw someone who they didn’t recognize.
The fact that diseases spread relatively slowly in the pre-industrial world already suggests that it was not a small world. For example, it took at least three years for the Black Death to spread through Europe, Scandinavia and Russia in the 14th century, beginning in the Levant and the Mediterranean ports.
However, network researchers have discovered that it takes only a very small number of ‘long-distance’ links to turn a ‘large world’ network, such as a grid in which each individual is connected only to their nearby neighbours, into a small world.
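To put a number on that, here is a minimal Python sketch (my own illustration using the networkx library, with arbitrary parameters rather than anything from the study) of how sharply the average separation drops once a small fraction of links in a ring lattice are rewired at random:

```python
# Requires the networkx package (pip install networkx).
import networkx as nx

n, k = 1000, 10   # 1000 people, each initially linked to their 10 nearest neighbours
for p in (0.0, 0.001, 0.01, 0.1):
    G = nx.watts_strogatz_graph(n, k, p, seed=1)   # rewire a fraction p of links at random
    L = nx.average_shortest_path_length(G)
    print(f"rewiring fraction {p}: average separation {L:.1f} steps")
# p = 0 is the pure 'large world' lattice (average separation of order n/(2k) = 50 steps);
# rewiring even ~1% of links typically cuts that severalfold, towards 'six degrees' territory.
```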
Newman and colleagues have used this well documented spread of the Black Death to figure out what the underlying network of physical contacts looked like. The disease was spread both by direct person-to-person transmission of the pathogenic bacterium and by being carried by rats and fleas. But neither rats nor fleas travel far unless carried by humans, for example on the ships that arrived at the European ports. So transmission reflects the nature of human mobility and contact.
The researchers argue that the crucial point is not how quickly or slowly the disease spread, but what the pattern was like. It moved through the Western world rather like an ink blot spreading across a map of Europe: a steady advance of the ‘disease front’. The researchers’ computer simulations and calculations show that this is possible only if the typical path length linking two people in the network is long: if it’s not a small world. If there were enough long-range links to produce a small world, then the pattern would look quite different: not an expanding ‘stain’ but a blotchy spread in which new outbreaks get seeded far from the origin of the infection.
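As a rough illustration of that difference (a toy model of my own in Python, not the researchers’ actual simulations), you can grow an infection outward on a grid of nearest-neighbour contacts, with and without a handful of random long-range ‘travel’ links:

```python
import numpy as np

rng = np.random.default_rng(0)

def spread(n=200, steps=60, p_infect=0.6, n_shortcuts=0):
    """Toy susceptible->infected model on an n x n grid (wrapping at the edges).
    Each step an infected site passes the disease to each of its four neighbours
    with probability p_infect; 'shortcuts' are random pairs of distant sites that
    can also transmit, mimicking long-range travel."""
    infected = np.zeros((n, n), dtype=bool)
    infected[n // 2, n // 2] = True                    # seed one case in the middle
    shortcuts = [(tuple(rng.integers(0, n, 2)), tuple(rng.integers(0, n, 2)))
                 for _ in range(n_shortcuts)]
    for _ in range(steps):
        new = infected.copy()
        for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # nearest-neighbour transmission
            neighbour = np.roll(infected, shift, axis=(0, 1))
            new |= neighbour & (rng.random((n, n)) < p_infect)
        for a, b in shortcuts:                             # long-range transmission
            if infected[a] and rng.random() < p_infect:
                new[b] = True
            if infected[b] and rng.random() < p_infect:
                new[a] = True
        infected = new
    return infected

large_world = spread(n_shortcuts=0)    # compact, ink-blot-like advancing front
small_world = spread(n_shortcuts=20)   # distant secondary outbreaks get seeded as well
print(large_world.sum(), small_world.sum())
```

Plotting the two grids (for instance with matplotlib’s imshow) shows the contrast directly: a single spreading blot in the first case, scattered satellite outbreaks in the second.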
[Figure: A – spreading of an infectious disease in a "large world"; B – spreading in a "small world".]
So if the world was still ‘large’ in the 14th century, when did it become ‘small’? Newman and colleagues hope that other epidemiological data might reveal that, but they guess that it happened with the advent of long-distance transportation in the 19th century, which seems also to have been the time that rapidly spreading epidemics appeared. There’s always a price for progress.
Reference: S. A. Marvel, T. Martin, C. R. Doering, D. Lusseau & M. E. J. Newman, preprint at http://www.arxiv.org/abs/1310.2636.
Thursday, October 10, 2013
Colour in the Making

I have just received delivery of Colour in the Making: From Old Wisdom to New Brilliance, a book published by Black Dog, in which I have an essay on colour technology in the nineteenth century. And I can say without bias that the book is stunning. This is the first time I have seen what else it contains, and it is a gorgeous compendium of information about pigments, colour theory, and colour technology and use in visual art from medieval painting to printing and photography. There are also essays on medieval paints by Mark Clarke and on digital colour mixing by Carinna Parraman. This book is perhaps rather too weighty to be a genuine coffee-table volume, but is a feast for the eyes, and anyone with even a passing interest in colour should get it. I will put my essay up on my website soon.
Friday, October 04, 2013
The name game
My new book Serving the Reich is published on 10 October. Here is one of the little offshoots, a piece for Research Fortnight (which the kind folks there have made available for free) on the perils of naming in science. (Jim, I told you I’d steal that quote.)
___________________________________________________________________
Where would quantum physics be without Planck’s constant, the Schrödinger equation, the Bohr atom or Heisenberg’s uncertainty principle – or, more recently, Feynman diagrams, Bell’s inequality and Hawking radiation? You might not know what all these things are, but you know who discovered them.
Surely it’s right and proper that scientists should get the credit for what they do, after all. Or is it? This is what Einstein had to say on the matter:
“When a man after long years of searching chances on a thought which discloses something of the beauty of this mysterious universe, he should not therefore be personally celebrated. He is already sufficiently paid by his experience of seeking and finding. In science, moreover, the work of the individual is so bound up with that of his scientific predecessors and contemporaries that it appears almost as an impersonal product of his generation.”
Whether by design or fate, Einstein seems to have avoided having his name explicitly attached to his greatest works, the theories of special and general relativity. (The “Einstein coefficient” is an obscure quantity almost no one uses.)
But Einstein was working in the period when this fad for naming equations, units and the other paraphernalia of science after their discoverers had barely begun. The quantum pioneers were in fact among those who started it. The Dutch physicist Peter Debye insisted, against the wishes of Hitler’s government, that the new Kaiser Wilhelm Institute of Physics in Berlin, which he headed from 1935 to 1939, be called the Max Planck Institute. He had Planck’s name carved in stone over the entrance, and after the war the entire Kaiser Wilhelm Gesellschaft – the network of semi-private German research institutes – was renamed the Max Planck Society, the title that it bears today.
But Debye himself now exemplifies the perils of this practice. In 2006 he was accused in a book by a Dutch journalist of having collaborated with the Nazi government during his time in Germany, and of endorsing their anti-Semitic measures. In response, the University of Utrecht was panicked into removing Debye’s name from its Institute for Nanomaterials Science, saying that “recent evidence is not compatible with the example of using Debye’s name”. Likewise, the University of Maastricht in Debye’s home city asked for permission to rename the Debye Prize, a science award sponsored by the philanthropic Hustinx Foundation in Maastricht.
It’s now generally agreed that these accusations were unfair – Debye was no worse than the vast majority of physicists working in Nazi Germany, and certainly bears no more discredit than Max Planck himself, the grand old man of German physics, whose prevarication and obedience to the state prevented him from voicing opposition to measures that he clearly abhorred. (Recognizing this, the Universities of Utrecht and Maastricht have now relented.) Far more culpable was Werner Heisenberg, who allegedly told scientists in the occupied Netherlands in 1943 that “history legitimizes Germany to rule Europe and later the world”. He gave propaganda lectures on behalf of the government during the war, and led the German quest to harness nuclear power. Yet no one has questioned the legitimacy of the German Research Foundation’s Heisenberg Professorships.
Here, then, is one of the pitfalls of science’s obsession with naming: what happens when the person you’re celebrating turns out to have a questionable past? Debye, Planck and Heisenberg are all debatable cases: scarcely anyone in positions of influence in Germany under Hitler emerged without some blemish. But it leaves a bitter taste in the mouth to have to call the influence of electric fields on atomic quantum energy states the Stark effect, after its discoverer the Nobel laureate Johannes Stark – an ardent Nazi and anti-Semite, and one of the most unpleasant scientists who ever lived.
Some might say: get over it. No one should expect that people who do great things are themselves great people, and besides, being a nasty piece of work shouldn’t deprive you of credit for what you discover. Both of these things are true. But nevertheless science seems to impose names on everything it can, from awards to units, to a degree that is unparalleled in other fields: we speak of atonality, cubism, deconstructionism, not Schoenbergism, Picassoism and Derridism. This is so much the opposite of scientists’ insistence, à la Einstein, that it doesn’t matter who made the discovery that it seems worth at least pondering on the matter.
Why does science want to immortalize its greats this way? It is not as though there aren’t alternatives: we can have heliocentrism instead of Copernicanism, the law of constant proportions for Proust’s law, and so on. What’s more, naming a law or feature of nature for what it says or does, and not who saw or said it first, avoids arguments about the latter. We know, for example, that the Copernican system didn’t originate with Copernicus, that George Gabriel Stokes didn’t discover Stokes’ law, that Peter Higgs was not alone in proposing the Higgs particle. Naming laws and ideas for people is probably in part a sublimation of scientists’ obsession with priority. It certainly feeds it.
The stakes are higher, however, when it comes to naming institutions, as Utrecht’s Debye Institute discovered. There’s no natural justice which supports the name you choose to put on your lintel – it’s a more or less arbitrary decision, and if your scientific patron saint suddenly seems less saintly, it doesn’t do your reputation any good. Leen Dorsman, a historian of science and philosophy at Utrecht, was scathing about what he called this “American habit” during the “Debye affair”:
“The motive is not to honour great men, it is a sales argument. The name on the façade of the institute shouts: Look at us, look how important we are, we are affiliated with a genuine Nobel laureate.”
While acknowledging that Debye himself contributed to the tendency in Germany, Dorsman says that it was rare in the egalitarian society of the Netherlands until recently. At Utrecht University itself, he attributes it to a governance crisis that led to the appointment of leaders “who had undergone the influence of new public management ideas.” It is this board, he says, that began naming buildings and institutions in the 1990s as a way to restore the university’s self-confidence.
“My opinion is that you should avoid this”, Dorsman says. “There is always something in someone’s past that you wouldn’t like to be confronted with later on, as with Debye.” He adds that even if there isn’t, naming an institution after a “great scientist” risks allying it with a particular school of thought or direction of research, which could cause ill feeling among employees who don’t share that affiliation.
If nevertheless you feel the need to immortalize your alumni this way, the moral seems to be that you’d better ask first how well you really know them. The imposing Francis Crick Institute for biomedical research under construction in London looks fairly secure in this respect – Crick had his quirks, but he seems to have been a well-liked, upfront and decent fellow. Is anyone, however, now going to take their chances with a James Watson Research Centre? And if not, shouldn’t we think a bit more carefully about why not?
David and Goliath - who do you cheer for?
I have just reviewed Malcolm Gladwell’s new book for Nature. I had my reservations, but on seeing Steven Poole’s acerbic job in today’s New Statesman I do wonder whether in the end I gave this a slightly easy ride. Steven rarely passes up a chance to stick the boot in, but I can’t argue with his rather damning assessment of Gladwell’s argument. Anyway, here’s mine.
___________________________________________________
David and Goliath: Underdogs, Misfits and the Art of Battling Giants
Malcolm Gladwell
Penguin Books
We think of David as the weedy foe of mighty Goliath, but he had the upper hand all along. The Israelite shepherd boy was nimble and could use his deadly weapon without getting close to his opponent. Given the skill of ancient slingers, this was more like fighting pistol against sword. David won because he changed the rules; Goliath, like everyone else, was anticipating hand-to-hand combat.
That biblical story about power and how it is used, misused and misinterpreted is the frame for Malcolm Gladwell’s David and Goliath. “The powerful are not as powerful as they seem”, he argues, “nor the weak as weak.” Weaker sports teams can win by playing unconventionally. The children of rich families are handicapped by complacency. Smaller school classes don’t necessarily produce better results.
Gladwell describes a police chief who cuts crime by buying Thanksgiving turkeys for problem families, the doctor who cured children with a drug cocktail everyone thought to be lethal. The apparent indicators of strength, such as wealth or military superiority, can prove to be weakness; what look like impediments, such as broken homes or dyslexia, can work to one’s advantage. Provincial high-flyers may under-achieve at Harvard because they’re unaccustomed to being surrounded by even more brilliant peers, whereas at a mediocre university they’d have excelled. Even if some of these conclusions seem obvious in retrospect, Gladwell is a consummate story-teller and you feel you would never have articulated the point until he spelt it out.
But don’t we all know of counter-examples? Who is demoralized and who thrives from the intellectual stimulus depends on particular personal attributes and all kinds of other intangibles. More often than not, dyslexia and broken homes are disadvantages. The achievement of a school or university class may depend more on what is taught, and how, and why, than on size. The case of physician Jay Freireich, who developed an unconventional but ultimately successful treatment for childhood leukaemia, is particularly unsettling. If Freireich had good medical reasons for administering untested mixtures of aggressive anti-cancer drugs, they aren’t explained here. Instead, there is simply a description of his bullish determination to try them out come what may, apparently engendered by his grim upbringing. Yet determination alone can – as with Robert Koch’s misguided conviction that the tuberculosis extract tuberculin would cure the disease – equally prove disastrous.
Even the biblical meta-narrative is confusing. So David wasn’t after all the plucky hero overcoming the odds, but more like Indiana Jones defeating the sword-twirling opponent by pulling out his pistol and shooting him? Was that cheating, or just thinking outside the box? There are endless examples of the stronger side winning, whether in sport, business or war, no matter how ingenious their opponents. Mostly, money does buy privilege and success. So why does David win sometimes and sometimes Goliath? Is it even clear which is which (poor Goliath might even have suffered from a vision impairment)?
These complications are becoming clear, for example in criminology. Gladwell is very interested in why some crime-prevention strategies work and others don’t. But while his “winning hearts and minds” case studies are surely a part of the solution, recent results from behavioural economics and game theory suggest that there are no easy answers beyond the fact that some sort of punishment (ideally centralized, not vigilante) is needed for social stability. Some studies suggest that excessive punishment can be counter-productive. Others show that people do not punish simply to guard their own interests, but will impose penalties on others even to their own detriment. Responses to punishment are culturally variable. In other words, punishment is a complex matter, and resists simple prescriptions.
Besides, winning is itself a slippery notion. Gladwell’s sympathies are for the underdog, the oppressed, the marginalized. But occasionally his stories celebrate a very narrow view of what constitutes success: becoming a Hollywood mogul or the president of an investment banking firm – David turned Goliath, with little regard for what makes people genuinely inspiring, happy or worthy.
None of this is a problem of Gladwell’s exposition, which is always intelligent and perceptive. It’s a problem of form. His books, like those of legions of inferior imitators, present a Big Idea. But it’s an idea that only works selectively, and it’s hard for him or anyone else to say why. These human stories are too context-dependent to deliver a take-home message, at least beyond the advice not always to expect the obvious outcome.
Perhaps Gladwell’s approach does not lend itself to book-length exposition. In The Tipping Point he pulled it off, but his follow-ups Blink, about the reliability of the gut response, and Outliers, a previous take on what makes people succeed, similarly had theses that unravelled the more you thought about them. What remains in this case are ten examples of Gladwell’s true forte: the long-form essay, engaging, surprising and smooth as a New York latte.
Who reads the letters?
I often wonder how the letters pages of newspapers and magazines work. For the main articles, most publications use some form of fact-checking. But what can you do about letters in which anyone can make any claim? Does anyone check up on them before publishing? I was struck by a recent letter in New Statesman, for example, which purportedly came from David Cameron’s former schoolteacher. Who could say if it was genuine? (And, while loath to offer the slightest succour to Cameron, is it quite proper for a former teacher to be revealing stuff about his onetime pupils?)
The problem is particularly acute for science. Many a time this or that sound scientific article has been challenged by a letter from an obvious crank. Of course, sometimes factual errors are indeed pointed out this way, but who can tell which is which? I’ve seen letters printed that a newspaper’s science editor would surely have trashed very easily.
This is the case with a letter in the Observer last Sunday from a chap keen to perpetuate the myth that the world’s climate scientists are hiding behind a veil of secrecy. Philip Symmons says that he hasn’t been able to work out for himself if the models currently used for climate projections are actually capable of accurate hindcasts of past climate, since those dastardly folks at the Hadley Centre refuse to let him have the information, even after he has invoked the Freedom of Information Act. What are they afraid of, eh? What are they hiding?
If the Letters editor had asked Robin McKie, I’m sure he would have lost no time in pointing out that this is utter nonsense. The hindcast simulations Symmons is looking for are freely available to all in the last IPCC report (2007 – Figure 9.5). I found that figure after all of five minutes’ checking on the web. And incidentally, the results are extremely striking – without anthropogenic forcings, the hindcasts go badly astray after about 1950, but with them they stay right on track.
It’s clear, then, that Symmons in fact has no interest in actually getting an answer to his question – he just wants to cast aspersions. I can’t figure out why the Observer would let him do that, given how easy it should be to discover that his letter is nonsense. Surely they aren’t still feeling that one needs to present “both sides”?
Friday, September 27, 2013
Space is (quite) cold
Here’s my latest piece for BBC Future.
___________________________________________________________
How cold is it in space? That question is sure to prompt the geeks among us to pipe up with “2.7 kelvin”, which is the temperature produced by the uniform background radiation or ‘afterglow’ from the Big Bang. (Kelvins (K) here are degrees above absolute zero, with a degree on the kelvin scale being the same size as one on the centigrade scale.)
But hang on. Evidently you don’t hit 2.7 K the moment you step outside the Earth’s atmosphere. Heat is streaming from the Sun to warm the Earth, and it will also warm other objects exposed to its rays. Take the Moon, which has virtually no atmosphere to complicate things. On the sunlit side the Moon is hotter than the Sahara – it can top 120 °C. But on the dark side it can drop to around minus 170 °C.
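That daytime figure is roughly what a back-of-envelope energy balance gives (my own estimate, not something from the article): the surface heats up until the black-body radiation it emits matches the sunlight it absorbs.

```python
# sigma * T^4 = (1 - albedo) * S  for a lunar patch facing the Sun head-on
SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0         # solar constant at the Earth-Moon distance, W m^-2
albedo = 0.11      # rough lunar albedo (assumed value)

T = ((1 - albedo) * S / SIGMA) ** 0.25
print(f"{T:.0f} K, i.e. about {T - 273.15:.0f} degrees C")   # ~380 K, ~110 degrees C
```

That lands in the right ballpark of the quoted 120 °C; the exact number depends on the albedo and emissivity you assume.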
So just how cold can it go in our own cosmic neighbourhood? This isn’t an idle question if you’re thinking of sending spacecraft up there (let alone people). It’s particularly pertinent if you’re doing that precisely because space is cold, in order to do experiments in low-temperature physics.
There’s no need for that just to keep the apparatus cold – you only need liquid-helium coolant to get below 4 K in the lab, and some experiments have come to within just a few billionths of a kelvin of absolute zero. But some low-temperature experiments are being planned that also demand zero gravity. You can get that on Earth for a short time in freefall air flights, but for longer than a few seconds you need to go into space.
One such experiment, called MAQRO, hopes to test fundamental features of quantum theory and perhaps to search for subtle effects in a quantum picture of gravity – something that physicists can so far see only in the haziest terms. The scientists behind MAQRO have now worked out whether it will in fact be possible to get cold enough, on a spacecraft carrying the equipment, for the tests to work.
MAQRO was proposed last year by Rainer Kaltenbaek and Markus Aspelmeyer of the University of Vienna and their collaborators [R. Kaltenbaek et al., Experimental Astronomy 34, 123 (2012)]. The experiment would study one of the most profound puzzles in quantum theory: how or why do the rules of quantum physics, which govern fundamental particles like electrons and atoms, give way to the ‘classical’ physics of the everyday world? Why do quantum particles sometimes behave like waves whereas footballs don’t?
No one fully understands this so-called quantum-to-classical transition. But one of the favourite explanations invokes an idea called decoherence, which means that in effect the quantum behaviour of a system gets jumbled and ultimately erased because of the disruptive effects of the environment. These effects become stronger the more particles the system contains, because then there are more options for the environment to interfere. For objects large enough to see, containing countless trillions of atoms, decoherence happens in an instant, washing out quantum effects in favour of classical behaviour.
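A crude way to see why size matters so much (an illustrative toy of my own, not the MAQRO calculation): if each of N particles independently leaks ‘which-path’ information to its surroundings at some rate gamma, the interference term of the collective superposition decays roughly as exp(-N*gamma*t), i.e. N times faster than for a single particle.

```python
import numpy as np

gamma = 1.0                        # per-particle decoherence rate (arbitrary units)
t = np.linspace(0.0, 3.0, 4)       # times at which to evaluate the remaining coherence
for N in (1, 10, 1000):
    coherence = np.exp(-N * gamma * t)   # interference ('off-diagonal') term of the superposition
    print(f"N = {N:>4}: coherence -> {np.round(coherence, 6).tolist()}")
# At N = 1000 the superposition is effectively gone on a timescale over which
# a single particle would barely have begun to decohere.
```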
In this picture, it should be possible to preserve ‘quantum-ness’ in any system, no matter how big, if you could isolate it perfectly from its environment. In principle, even footballs would then show wave-particle duality and could exist in two states, or two places, at once. But some theories, as yet still speculative and untested, insist that something else will prevent this weird behaviour in large, massive objects, perhaps because of effects that would disclose something about a still elusive quantum theory of gravity.
So the stakes for MAQRO could be big. The experimental apparatus itself wouldn’t be too exotic. Kaltenbaek and colleagues propose to use laser beams to place a ‘big’ particle (about a tenth of a micrometre across) in two quantum states at once, called a superposition, and then to probe with the lasers how decoherence destroys this superposition (or not). The apparatus would have to be very cold because, as with most quantum effects, heat would disrupt a delicate superposition. And performing the experiment in zero gravity on a spacecraft could show whether gravity does indeed play a role in the quantum-to-classical transition. Putting it all on a spacecraft would be about as close to perfect isolation from the environment as one can imagine.
But now Kaltenbaek and colleagues, in collaboration with researchers at the leading European space-technology company Astrium Satellites in Friedrichshafen, Germany, have worked out just how cold the apparatus could really get. They imagine sticking a ‘bench’ with all the experimental components on the back of a disk-shaped spacecraft, with the disk, and several further layers of thermal insulation, shielding it from the Sun. So while the main body of the spacecraft would be kept at about 300 K (27 °C), which its operating equipment would require, the bench could be much colder.
But how much? The researchers calculate that, with three concentric thermal shields between the main disk of the spacecraft and the bench, black on their front surface to optimize radiation of heat and gold-plated on the reverse to minimize heating from the shield below, it should be possible to get the temperature of the bench itself down to 27 K. Much of the warming would come through the struts holding the bench and shields to the main disk.
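For a feel for why stacking shields helps, here is an idealized textbook estimate (my own sketch, not the Astrium thermal model): N black, parallel shields sandwiched between a hot plate and cold space cut the radiative heat leak by a factor of N + 1, with the shield temperatures interpolating linearly in T^4. The real design does far better than this, because the shields also radiate from their exposed faces straight to cold space and are gold-coated to suppress the coupling between layers.

```python
SIGMA = 5.67e-8                # Stefan-Boltzmann constant, W m^-2 K^-4
T_hot, T_cold = 300.0, 2.7     # spacecraft body and deep space, in kelvin
N = 3                          # number of intermediate shields

flux_bare = SIGMA * (T_hot**4 - T_cold**4)     # heat leak with no shields, W m^-2
flux_shielded = flux_bare / (N + 1)            # ideal black shields: factor N+1 reduction
shield_T = [(T_hot**4 - i / (N + 1) * (T_hot**4 - T_cold**4)) ** 0.25
            for i in range(1, N + 1)]
print(f"heat leak per unit area: {flux_bare:.0f} -> {flux_shielded:.0f} W/m^2")
print("shield temperatures:", [f"{T:.0f} K" for T in shield_T])   # ~279, 252, 212 K
```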
That’s not really cold enough for the MAQRO experiment to work well. But the test particle itself would be held in free space above the bench, and this would be colder. On its own it could reach 8 K, but with all the other experimental components around it, all radiating heat, it reaches 16 K. This, they calculate, would be enough to test the decoherence rates predicted for all the major theories which currently propose that intrinsic mass (perhaps via gravity) will enforce decoherence in a large object. In other words, MAQRO should be cold enough to spot if these models are wrong.
Could it discriminate between any theories that aren’t ruled out? That’s another matter, which remains to be seen. But simply knowing that size matters in quantum mechanics would be a major finding. The bigger question, of course, is whether anyone will consider MAQRO – a cheap experiment as space science goes – worth a shot.
Reference: G. Hechenblaikner et al., preprint at http://www.arxiv.org/abs/1309.3234
Thursday, September 19, 2013
Fearful symmetry

So the plan is that I’ll be writing a regular (ideally weekly) blog piece for Prospect from now on. Here is the current one, stemming from a gig last night that was a lot of fun.
_________________________________________________________
Roger Penrose makes his own rules. He is one of the most distinguished mathematical physicists in the world, but also (this doesn’t necessarily follow) one of the most inventive thinkers. It was his work on the theory of general relativity in the 1960s, especially on how the gravity of collapsing stars can produce black-hole ‘singularities’ in spacetime, that set Stephen Hawking on a course to rewrite black-hole physics. That research made Penrose’s name in science, but his mind ranges much further. In The Emperor’s New Mind (1989) he proposed that the human mind can handle problems that are formally ‘non-computable’, meaning that any computer trying to solve them by executing a set of logical rules (as all computers do) would chunter away forever without coming to a conclusion. This property of the mind, Penrose said, might stem from the brain’s use of some sort of quantum-mechanical principle, perhaps involving quantum gravity. In collaboration with anaesthetist Stuart Hameroff, he suggested in Shadows of the Mind (1994) what that principle might be, involving quantum behaviour in protein filaments called microtubules in neurons. Neuroscientists scoffed, glazed over, or muttered “Oh, physicists…”
So when I introduced a talk by Penrose this week at the Royal Institution, I commented that he is known for ideas that most others wouldn’t even imagine, let alone dare voice. I didn’t, however, expect to encounter some new ones that evening.
Penrose was speaking about the discovery for which he is perhaps best known among the public: the so-called Penrose tiling, a pair of rhombus-shaped tiles that can be used to tile a flat surface forever without the pattern ever repeating. It turns out that this pattern is peppered with objects that have five- or ten-fold symmetry: like a pentagon, they can be superimposed on themselves when rotated a fifth of a full turn. That is very strange, because fivefold symmetry is known to be rigorously forbidden for any periodic two-dimensional tiling. (Try it with ordinary pentagons and you quickly find that you get lots of gaps.) The Penrose tiling doesn’t have this ‘forbidden symmetry’ in a perfect form, but it almost does.
[Image: a Penrose tiling]
These tilings – there are other shapes that have an equivalent result – are strikingly beautiful, with a mixture of regularity and disorder that is somehow pleasing. This is doubtless why, as Penrose explained, many architects worldwide have made use of them. But they also have a deeper significance. After Penrose described the tiling in the 1970s, the crystallographer Alan Mackay – one of the unsung polymathic savants of British science – showed in 1981 that if you imagine putting atoms at the corners of the tiles and bouncing X-rays off them (the standard technique of X-ray crystallography for deducing the atomic structures of crystals) you can get a pattern of reflections that looks for all the world like that of a perfect crystal with the forbidden five- and tenfold symmetries. Four years later, such a material (a metal alloy) was found in the real world by the Israeli materials scientist Daniel Shechtman and his coworkers. This was dubbed a quasicrystal, and the discovery won Shechtman the Nobel prize in Chemistry in 2011. Penrose tilings can explain how quasicrystals attain their ‘impossible’ structure.
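If you want to play with these patterns yourself, here is a minimal sketch of the standard ‘deflation’ construction, which builds the tiling by repeatedly subdividing golden-ratio (Robinson) triangles that pair up into Penrose’s two tile shapes. This is the textbook recipe rather than anything Penrose showed on his transparencies, and the starting ‘wheel’ of ten triangles is just one convenient seed.

```python
import cmath
import math

GOLDEN = (1 + math.sqrt(5)) / 2

def subdivide(triangles):
    """One deflation step: each triangle is cut into smaller triangles of both kinds."""
    result = []
    for kind, a, b, c in triangles:
        if kind == 0:
            p = a + (b - a) / GOLDEN
            result += [(0, c, p, b), (1, p, c, a)]
        else:
            q = b + (a - b) / GOLDEN
            r = b + (c - b) / GOLDEN
            result += [(1, r, c, a), (1, q, r, b), (0, r, q, a)]
    return result

# Seed: a wheel of ten triangles around the origin, giving the tenfold-symmetric 'sun'.
triangles = []
for i in range(10):
    b = cmath.rect(1, (2 * i - 1) * math.pi / 10)
    c = cmath.rect(1, (2 * i + 1) * math.pi / 10)
    if i % 2 == 0:
        b, c = c, b  # alternate orientation so neighbouring triangles mirror each other
    triangles.append((0, 0j, b, c))

for _ in range(5):
    triangles = subdivide(triangles)

# Each step shrinks the tiles and multiplies their number by roughly the golden ratio squared.
print(len(triangles), "triangles after five deflation steps")
```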
In his talk Penrose explained the richness of these tilings, manipulating transparencies (remember them?) like a prestidigitator in ways that elicited several gasps of delight as new patterns suddenly came into view. But it was in the Q&A session that we got a glimpse of Penrose’s wildly lateral thinking. Assembling a tiling (and thus a quasicrystal) is a very delicate business, because if you add a tile in the wrong place or orientation, somewhere further down the line the pattern fouls up. But how could atoms in a quasicrystal know that they have to come together in a certain way here to avoid a problem right over there? Maybe, Penrose said, they make use of the bizarre quantum-mechanical property called entanglement, which foxed Einstein, in which two particles can affect one another instantaneously over any distance. Crikey.
In Penrose’s mind it all links up: quasicrystals, non-computable problems, the universe… You can use these tiles, he said, to represent the rules of how things interact in a hypothetical universe in which everything is then non-computable: the rules are well defined, but you can never use them to predict what is going to happen until it actually happens.
But my favourite anecdote had Penrose inspecting a new Penrose tiling being laid out on the concourse of some university. Looking it over, he felt uneasy. Eventually he saw why: the builders, seeing an empty space at the edge of the tiling, had stuck another tile there that didn’t respect the proper rules for their assembly. No one else would have noticed, but Penrose saw that what it meant was that “the tiling would go wrong somewhere in the middle of the lawn”. Not that it was ever going to reach that far – but it was a flaw in that hypothetical continuation, that imaginary universe, and for a mathematician that wouldn’t do. The tile had to go.
Tuesday, September 17, 2013
Quantum theory reloaded
I have finally published a long-gestated piece in Nature (501, p154; 12 September) on quantum reconstructions. It has been one of the most interesting features I can remember working on, but was necessarily reduced drastically from the unwieldy first draft. Here (long post alert) is an intermediate version that contains a fair bit more than the final article could accommodate.
__________________________________________________________
Quantum theory works. It allows us to calculate the shapes of molecules, the behaviour of semiconductor devices, the trajectories of light, with stunning accuracy. But nagging inconsistencies, paradoxes and counter-intuitive effects play around the margins: entanglement, collapse of the wave function, the effect of the observer. Can Schrödinger’s cat really be alive and dead at once? Does reality correspond to a superposition of all possible quantum states, as the “many worlds” interpretation insists?
Most users don’t worry too much about these nagging puzzles. In the words of the physicist David Mermin of Cornell University, they “shut up and calculate”. That is, after all, one way of interpreting the famous Copenhagen interpretation of quantum theory developed in the 1920s by Niels Bohr, Werner Heisenberg and their collaborators, which states that the theory tells us all we can meaningfully know about the world and that the apparent weirdness, such as wave-particle duality, is just how things are.
But there have always been some researchers who aren’t content with this. They want to know what quantum theory means – what it really tells us about the world it describes with such precision. Ever since Bohr argued with Einstein, who could not accept his “get over it” attitude to quantum theory’s seeming refusal to assign objective properties, there has been continual and sometimes furious debate over the interpretations or “foundations” of quantum theory. The basic question, says physicist Maximilian Schlosshauer of the University of Portland in Oregon, is this: “What is it about this world that forces us to navigate it with the help of such an abstract entity as quantum theory?”
A small community of physicists and philosophers has now come to suspect that these arguments are doomed to remain unresolved so long as we cling to quantum theory as it currently stands, with its exotic paraphernalia of wavefunctions, superpositions, entangled states and the uncertainty principle. They suspect that we’re stuck with seemingly irreconcilable disputes about interpretation because we don’t really have the right form of the theory in the first place. We’re looking at it from the wrong angle, making its shadow odd, spiky, hard to decode. If we could only find the right perspective, all would be clear.
But to find it, they say, we will have to rebuild quantum theory from scratch: to tear up the work of Bohr, Heisenberg and Schrödinger and start again. This is the project known as quantum reconstruction. “The program of reconstructions starts with some fundamental physical principles – hopefully only a small number of them, and with principles that are physically meaningful and reasonable and that we all can agree on – and then shows the structure of quantum theory emerges as a consequence of these principles”, says Schlosshauer. He adds that this approach, which began in earnest over a decade ago, “has gained a lot of momentum in the past years and has already helped us understand why we have a theory as strange as quantum theory to begin with.”
One hundred years ago the Bohr atom placed the quantum hypothesis advanced by Max Planck and Einstein at the heart of the structure of the physical universe. Attempts to derive the structure of the quantum atom from first principles produced Erwin Schrödinger’s quantum mechanics and the Copenhagen interpretation. Now the time seems ripe for asking if all this was just an ad hoc heuristic tool that is due for replacement with something better. Quantum reconstructionists are a diverse bunch, each with a different view of what the project should entail. But one thing they have in common is that, in seeking to resolve the outstanding foundational ‘problems’ of quantum theory, they respond much as the proverbial Irishman when asked for directions to Dublin: “I wouldn’t start from here.”
That’s at the core of the discontent evinced by one of the key reconstructionists, Christopher Fuchs of the Perimeter Institute for Theoretical Physics in Waterloo, Canada [now moved to Raytheon], at most physicists’ efforts to grapple with quantum foundations. He points out that the fundamental axioms of special relativity can be expressed in a form anyone can understand: in any moving frame, the speed of light stays constant and the laws of physics stay the same. In contrast, efforts to write down the axioms of quantum theory rapidly degenerate into a welter of arcane symbols. Fuchs suspects that, if we find the right axioms, they will be as transparent as those of relativity [1].
“The very best quantum-foundational effort”, he says, “will be the one that can write a story – literally a story, all in plain words – so compelling and so masterful in its imagery that the mathematics of quantum mechanics in all its exact technical detail will fall out as a matter of course.” Fuchs takes inspiration from quantum pioneer John Wheeler, who once claimed that if we really understood the central point of quantum theory, we ought to be able to state it in one simple sentence.
“Despite all the posturing and grimacing over the paradoxes and mysteries, none of them ask in any serious way, ‘Why do we have this theory in the first place?’” says Fuchs. “They see the task as one of patching a leaking boat, not one of seeking the principle that has kept the boat floating this long. My guess is that if we can understand what has kept the theory afloat, we’ll understand that it was never leaky to begin with.”
We can rebuild it
One of the earliest attempts at reconstruction came in 2001, when Lucien Hardy, then at Oxford University, proposed that quantum theory might be derived from a small set of “very reasonable” axioms [2]. These axioms specify how states are characterized by variables or probability measurements, and how these states may be combined and interconverted. Hardy assumes that any state may be specified by the number K of probabilities needed to describe it uniquely, and that there are N ‘pure’ states that can be reliably distinguished in a single measurement. For example, for either a coin toss or a quantum bit (qubit), N = 2. A key (if seemingly innocuous) axiom is that for a composite system we get K and N by multiplying those parameters for each of the components: K_ab = K_a × K_b, say. It follows from this that K and N must be related according to K = N^r, where r = 1, 2, 3… For a classical system each state has a single probability (50 percent for heads, say), so that K = N. But that possibility is ruled out by a so-called ‘continuity axiom’, which describes how states are transformed one to another. For a classical system this happens discontinuously – a head is flipped to a tail – whereas for quantum systems the transformation can be continuous: the two pure states of a qubit can be mixed together in any degree. (That is not, Hardy stresses, the same as assuming a quantum superposition – so ‘quantumness’ isn’t being inserted by fiat.) The simplest relationship consistent with the continuity axiom is therefore K = N^2, which corresponds to a quantum picture.
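As a quick sanity check on that counting (my own illustration, not Hardy’s paper): the quantum rule K = N^2 gives K = 4 for a single qubit and K = 16 for a pair, and, as the snippet below shows, the composition axiom alone is satisfied by every power r, which is why the continuity axiom is needed to pick out r = 2.

```python
def K(N, r):
    """Hardy-style count of probabilities needed to specify a state of an N-level system."""
    return N ** r

for r in (1, 2, 3):
    Na, Nb = 2, 3  # say, a two-level system paired with a three-level one
    print(f"r={r}: K for the composite = {K(Na * Nb, r)}, "
          f"product of the parts = {K(Na, r) * K(Nb, r)}")
# All three lines agree, so composition (K_ab = K_a * K_b) cannot by itself rule out r = 1 or 3.
```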
But as physicist Rafael Sorkin of Syracuse University in New York had previously pointed out [3], there seems to be no fundamental reason why the higher-order theories (requiring N^3, N^4 measurements and so forth) should not also exist and have real effects. For example, Hardy says, the famous double-slit experiment for quantum particles adds a new behaviour (interference) where classical theory would just predict the outcome to be the sum of two single-slit experiments. But whereas quantum theory predicts nothing new on adding a third slit, a higher-order theory would introduce a new effect in that case – an experimental prediction, albeit one that might be very hard to detect.
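That three-slit point is easy to check numerically. Here is a toy calculation of mine with made-up amplitudes: under the ordinary quantum rule (probability is the squared modulus of the summed amplitude), the genuinely three-way interference term always cancels; a ‘higher-order’ theory would leave a residue here.

```python
import numpy as np

# Arbitrary complex amplitudes reaching the detector from each of three slits
a1 = 1.0
a2 = 0.7 * np.exp(1j * 0.4)
a3 = 0.5 * np.exp(1j * 2.1)

def intensity(*amps):
    """Quantum rule: probability is the squared modulus of the summed amplitude."""
    return abs(sum(amps)) ** 2

# Ordinary two-slit interference term: non-zero in general
pairwise = intensity(a1, a2) - intensity(a1) - intensity(a2)

# Sorkin's three-slit combination: identically zero if the quantum rule holds
sorkin = (intensity(a1, a2, a3)
          - intensity(a1, a2) - intensity(a1, a3) - intensity(a2, a3)
          + intensity(a1) + intensity(a2) + intensity(a3))

print(f"two-slit interference term: {pairwise:.3f}")  # non-zero
print(f"three-slit (Sorkin) term:  {sorkin:.1e}")     # zero, up to rounding error
```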
In this way Hardy claims to have begun to set up quantum theory as a general theory of probability, which he thinks could have been derived in principle by nineteenth-century mathematicians without any knowledge of the empirical motivations that led Planck and Einstein to initiate quantum mechanics at the start of the twentieth century.
Indeed, perhaps the most startling aspect of quantum reconstruction is that what seemed to the pioneers of quantum theory such as Planck, Einstein and Bohr to be revolutionary about it – the quantization rather than continuum of energy – may in fact be something of a sideshow. Quantization is not an axiomatic concept in quantum reconstructions, but emerges from them. “The historical development of quantum mechanics may have led us a little astray in our view of what it is all about”, says Schlosshauer. “The whole talk of waves versus particles, quantization and so on has made many people gravitate toward interpretations where wavefunctions represent some kind of actual physical wave property, creating a lot of confusion. Quantum mechanics is not a descriptive theory of nature, and to read it as such is to misunderstand its role.”
The new QBism
Fuchs says that Hardy’s paper “convinced me to pursue the idea that a quantum state is not just like a set of probability distributions, but very literally is a probability distribution itself – a quantification of partial belief, and nothing more.” He says “it hit me over the head like a hammer and has shaped my thinking ever since” – although he admits that Hardy does not draw the same lesson from the work himself.
Fuchs was particularly troubled by the concept of entanglement. According to Schrödinger, who coined the term in the first place, this “is the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought” [4]. In most common expositions of the theory, entanglement is depicted as seeming to permit the kind of instantaneous ‘action at a distance’ Einstein’s theory of relativity forbade. Entangled particles have interdependent states, such that a measurement on one of them is instantaneously ‘felt’ by the other. For example, two photons can be entangled such that they have opposed orientations of polarization (vertical or horizontal). Before a measurement is made on the photons, their polarization is indeterminate: all we know is that these are correlated. But if we measure one photon, collapsing the probabilities into a well-defined outcome, then we automatically and instantaneously determine the other’s polarization too, no matter how far apart the two photons are. In 1935 Einstein and coworkers presented this as a paradox intended to undermine the probabilistic Copenhagen interpretation; but experiments on photons in the 1980s showed that it really happens [5]. Entanglement, far from being a contrived quirk, is the key to quantum information theory and its associated technologies, such as quantum computers and cryptography.
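Here is a minimal numerical sketch of that anti-correlation – a toy two-photon state written out with NumPy, not a description of any particular experiment:

```python
import numpy as np

H = np.array([1.0, 0.0])  # horizontal polarization
V = np.array([0.0, 1.0])  # vertical polarization

# An entangled pair with opposed polarizations: (|HV> - |VH>) / sqrt(2)
psi = (np.kron(H, V) - np.kron(V, H)) / np.sqrt(2)

def joint_probability(a, b):
    """Probability of finding photon 1 in polarization a and photon 2 in polarization b."""
    return abs(np.kron(a, b) @ psi) ** 2

print(joint_probability(H, H))  # 0.0 -- never both horizontal
print(joint_probability(H, V))  # 0.5 -- photon 1 H, photon 2 V
print(joint_probability(V, H))  # 0.5 -- or the other way round: perfectly anti-correlated
```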
But although quantum theory can predict the outcomes of entanglement experiments perfectly adequately, it still seems an odd way for the world to behave. We can write down the equations, but we can’t feel the physics behind them. That’s what prompted Fuchs to call for a fresh approach to quantum foundations [1]. His approach [6, 7] argues that quantum states themselves – the entangled state of two photons, say, or even just the spin state of a single photon – don’t exist as objective realities. Rather, “quantum states represent observers’ personal information, expectations and degrees of belief”, he says.
Fuchs calls this approach quantum Bayesianism or QBism (pronounced “cubism”), because he believes that, as standard Bayesian probability theory assumes, probabilities – including quantum probabilities – “are not real things out in the world; their only existence is in quantifying personal degrees of belief of what might happen.” This view, he says, “allows one to see all quantum measurement events as little ‘moments of creation’, rather than as revealing anything pre-existent.”
This idea that quantum theory is really about what we can and do know has always been somewhat in the picture. Schrödinger’s wavefunctions encode a probability distribution of measurement outcomes: what these measurements on a quantum system might be. In the Copenhagen view, it is meaningless to talk about what we actually will measure until we do it. Likewise, Heisenberg’s uncertainty principle insists that we can’t know everything about every observable property with arbitrarily exact accuracy. In other words, quantum theory seemed to impose limits on our precise knowledge of the state of the world – or perhaps better put, to expose a fundamental indeterminacy in our expectations of what measurement will show us. But Fuchs wants us to accept that this isn’t a question of generalized imprecision of knowledge, but a statement about what a specific individual can see and measure. We’re not just part of the painting: in a sense we are partially responsible for painting it.
Information is the key
The rise of quantum information theory over the past few decades has put a new spin on this consideration. One might say that it has replaced an impression of analog fuzziness (“I can’t see this clearly”) with digital error (“the answer might be this or that, but there’s such-and-such a chance of your prediction being wrong”). It is this focus on information – or rather, knowledge – that characterizes several of the current attempts to rebuild quantum theory from scratch. As physicists Caslav Brukner and Anton Zeilinger of the University of Vienna put it, “quantum physics is an elementary theory of information” [8].
Jeffrey Bub of the University of Maryland agrees: quantum mechanics, he says, is “fundamentally a theory about the representation and manipulation of information, not a theory about the mechanics of nonclassical waves or particles” – as clear a statement as you could wish for of why early quantum theory got distracted by the wrong things. His approach to reconstruction builds on the formal properties of how different sorts of information can be ordered and permuted, which lie at the heart of the uncertainty principle [9].
In the quantum picture, certain pairs of quantities do not commute, which means that it matters in which order they are considered: momentum times position is not the same as position times momentum, rather as kneading and baking dough do not commute when making bread. Bub believes that noncommutativity is what distinguishes quantum from classical mechanics, and that entanglement is one of the consequences. This property, he says, is a feature of the way information is fundamentally structured, and it might emerge from a principle called ‘information causality’ [10], introduced by Marcin Pawlowski of the University of Gdansk and colleagues. This postulate describes how much information one observer (call him Bob) can gain about a data set held by another (Alice). Classically the amount is limited by what Alice communicates to Bob. Quantum correlations such as entanglement can increase this limit, but only within bounds set by the information causality postulate. Pawlowski and colleagues suspect that this postulate might single out precisely what quantum correlations permit about information transfer. If so, they argue, “information causality might be one of the foundational properties of nature” – in other words, an axiom of quantum theory.
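The simplest concrete case of such noncommutativity – my illustration, not Bub’s formalism – is a pair of spin measurements on a single qubit:

```python
import numpy as np

# Spin measurements along x and z (Pauli matrices). Applying them in different orders
# gives different results, which is the mathematical root of the uncertainty principle.
sigma_x = np.array([[0, 1], [1, 0]])
sigma_z = np.array([[1, 0], [0, -1]])

print(sigma_x @ sigma_z)                      # [[ 0 -1], [ 1  0]]
print(sigma_z @ sigma_x)                      # [[ 0  1], [-1  0]]
print(sigma_x @ sigma_z - sigma_z @ sigma_x)  # non-zero commutator: the order matters
```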
Ontic or epistemic?
At the root of the matter is the issue of whether quantum theory pronounces on the nature of reality (a so-called ontic theory) or merely on our allowed knowledge of it (an epistemic theory). Ontic theories, such as the Many Worlds interpretation, take the view that wavefunctions are real entities. The Copenhagen interpretation, on the other hand, is epistemic, insisting that it’s not physically meaningful to look for any layer of reality beneath what we can measure. In this view, says Fuchs, God plays dice and so “the future is not completely determined by the past.” QBism takes this even further: what we see depends on what we look for. “In both Copenhagen and QBism, the wave function is not something ‘out there’”, says Fuchs. “QBism should be seen as a modern variant and refinement of Copenhagen.”
His faith in epistemic approaches to reconstruction is boosted by the work of Robert Spekkens, his colleague at the Perimeter Institute. Spekkens has devised a ‘toy theory’ that restricts the amount of information an observer can have about discrete ontic states of the system: specifically, one’s knowledge about these states can never exceed the amount of knowledge one lacks about them. Spekkens calls this the ‘knowledge balance principle’. It might seem an arbitrary imposition, but he finds that it alone is sufficient to reproduce many (but not all) of the characteristics of quantum theory, such as superposition, entanglement and teleportation [11]. Related ideas involving other kinds of restriction on what can be known about a suite of states also find quantum-like behaviours emerging [12,13].
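To make that concrete, here is a sketch of the simplest piece of Spekkens’ scheme as I understand it – a single ‘toy bit’. The code merely enumerates the states of maximal-but-incomplete knowledge that the knowledge balance principle allows.

```python
from itertools import combinations

# A toy bit has four underlying (ontic) states, labelled 1-4. The knowledge balance
# principle says an observer may know at most which *pair* of states the system is in,
# never the exact one: knowledge can never exceed ignorance.
ontic_states = (1, 2, 3, 4)
epistemic_states = list(combinations(ontic_states, 2))

print(epistemic_states)
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)] -- six states of maximal knowledge,
# mirroring the six 'cardinal' states of a qubit. Disjoint pairs such as (1, 2) and (3, 4)
# are perfectly distinguishable, like |0> and |1>; overlapping pairs such as (1, 3)
# behave like superpositions of them.
```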
Fuchs sees these insights as a necessary corrective to the way quantum information theory has tended to propagate the notion that information is something objective and real – which is to say, ontic. “It is amazing how many people talk about information as if it is simply some new kind of objective quantity in physics, like energy, but measured in bits instead of ergs”, he says. “You’ll often hear information spoken of as if it’s a new fluid that physics has only recently taken note of.” In contrast, he argues, what else can information possibly be except an expression of what we think we know?
“What quantum information gave us was a vast range of phenomena that nominally looked quite novel when they were first found”, Fuchs explains. For example, it seemed that quantum states, unlike classical states, can’t be ‘cloned’ to make identical copies. “But what Rob’s toy model showed was that so much of this vast range wasn’t really novel at all, so long as one understood these to be phenomena of epistemic states, not ontic ones”. Classical epistemic states can’t be cloned any more than quantum states can be, for much the same reason as you can’t be me.
What’s the use?
What’s striking about several of these attempts at quantum reconstruction is that they suggest that our universe is just one of many mathematical possibilities. “It turns out that many principles lead to a whole class of probabilistic theories, and not specifically quantum theory”, says Schlosshauer. “The problem has been to find principles that actually single out quantum theory”. But this is in itself a valuable insight: “a lot of the features we think of as uniquely quantum, like superpositions, interference and entanglement, are actually generic to many probabilistic theories. This allows us to focus on the question of what makes quantum theory unique.”
Hardy says that, after a hiatus following Fuchs’ call to arms and his own five-axiom proposal in the early 2000s, progress in reconstructions really began in 2009. “We’re now poised for some really significant breakthroughs, in a way that we weren’t ten years ago”, he says. While there’s still no consensus on what the basic axioms should look like, he is confident that “we’ll know them when we see them.” He suspects that ultimately the right description will prove to be ontic rather than epistemic: it will remove the human observer from the scene once more and return us to an objective view of reality. But he acknowledges that some, like Fuchs, disagree profoundly.
For Fuchs, the aim of reconstruction is not to rebuild the existing formalism of quantum theory from scratch, but to rewrite it totally. He says that approaches such as QBism are already motivating new experimental proposals, which might for example reveal a new, deep symmetry within quantum mechanics [14]. The existence of this symmetry, Fuchs says, would allow the quantum probability law to be re-expressed as a minor variation of the standard ‘law of total probability’ in probability theory, which relates the probability of an event to the conditional probabilities of all the ways it might come about. “That new view, if it proves valid, could change our understanding of how to build quantum computers and other quantum information kits,” he says.
Quantum reconstruction is gaining support. A recent poll of attitudes among quantum theorists showed that 60% think reconstructions give useful insights, and more than a quarter think they will lead to a new theory deeper than quantum mechanics [15]. That is a rare degree of consensus for matters connected to quantum foundations.
But how can we judge the success of these efforts? “Since the object is simply to reconstruct quantum theory as it stands, we could not prove that a particular reconstruction was correct since the experimental results are the same regardless”, Hardy admits. “However, we could attempt to do experiments that test that the given axioms are true.” For example, one might seek the ‘higher-order’ interference that his approach predicts.
“However, I would say that the real criteria for success are more theoretical”, he adds. “Do we have a better understanding of quantum theory, and do the axioms give us new ideas as to how to go beyond current day physics?” He is hopeful that some of these principles might assist the development of a theory of quantum gravity – but says that in this regard it’s too early to say whether the approach has been successful.
Fuchs agrees that “the question is not one of testing the reconstructions in any kind of experimental way, but rather through any insight the different variations might give for furthering physical theory along. A good reconstruction is one that has some ‘leading power’ for the way a theorist might think.”
Some remain skeptical. “Reconstructing quantum theory from a set of basic principles seems like an idea with the odds greatly against it”, admits Daniel Greenberger of the City College of New York. “But it’s a worthy enterprise” [16]. Yet Schlosshauer argues that “even if no single reconstruction program can actually find a universally accepted set of principles that works, it’s not a wasted effort, because we will have learned so much along the way.”
He is cautiously optimistic. “I believe that once we have a set of simple and physically intuitive principles, and a convincing story to go with them, quantum mechanics will look a lot less mysterious”, he says. “And I think a lot of the outstanding questions will then go away. I’m probably not the only one who would love to be around to witness the discovery of these principles.” Fuchs feels that could be revolutionary. “My guess is, when the answer is in hand, physics will be ready to explore worlds the faulty preconception of quantum states couldn’t dream of.”
References
1. Fuchs, C., http://arxiv.org/abs/quant-ph/0106166 (2001).
2. Hardy, L. E. http://arxiv.org/abs/quant-ph/0101012 (2003).
3. Sorkin, R., http://arxiv.org/pdf/gr-qc/9401003 (1994).
4. Schrödinger, E. Proc. Cambridge Phil. Soc. 31, 555–563 (1935).
5. Aspect, A. et al., Phys. Rev. Lett. 49, 91 (1982).
6. Fuchs, C. http://arxiv.org/pdf/1003.5209 (2010).
7. Fuchs, C. http://arxiv.org/abs/1207.2141 (2012).
8. Brukner, C. & Zeilinger, A. http://arxiv.org/pdf/quant-ph/0212084 (2008).
9. Bub, J. http://arxiv.org/pdf/quant-ph/0408020 (2008).
10. Pawlowski, M. et al., Nature 461, 1101-1104 (2009).
11. Spekkens, R. W. http://arxiv.org/abs/quant-ph/0401052 (2004).
12. Kirkpatrick, K. A. Found. Phys. Lett. 16, 199 (2003).
13. Smolin, J. A. Quantum Inf. Comput. 5, 161 (2005).
14. Renes, J. M., Blume-Kohout, R., Scott, A. J. & Caves, C. M. J. Math. Phys. 45, 2717 (2004).
15. Schlosshauer, M., Kofler, J. & Zeilinger, A. Stud. Hist. Phil. Mod. Phys. 44, 222–230 (2013).
16. In Schlosshauer, M. (ed.), Elegance and Enigma: The Quantum Interviews (Springer, 2011).
Sunday, September 15, 2013
Insects with cogs

Here’s the initial version of my latest news story for Nature.
___________________________________________________
Toothed gears allow young jumping planthoppers to synchronize their legs.
If you’re a young planthopper, leaping a metre in a single bound, you need to push off with both hindlegs perfectly in time or you’ll end up spinning crazily. Researchers in England have discovered that this synchrony is made possible by toothed gears connecting the two legs.
Zoologists Malcolm Burrows and Gregory Sutton of Cambridge University say that this seems to be the first example of rotary motion in nature coupled by toothed gears. They describe their results in Science [1].
Their microscopic images of the hindleg mechanism of the planthopper Issus coleoptratus show that the topmost leg segments, ending in partly circular structures, are connected by a series of tiny intermeshing teeth about 20 micrometres (thousandths of a millimetre) long.
When the insects jump, the two legs rotate together, the cog teeth ensuring that they thrust at exactly the same time. “The gears add an extra level of synchronisation beyond that which can be achieved by the nervous system”, says Burrows.
Planthopper nymphs can take off in just 2 milliseconds, reaching take-off speeds of almost 4 metres per second. For motions this rapid, some mechanical device is needed to keep the legs synchronized and avoid lopsided jumps that lead to spinning along the body axis. The problem doesn’t arise for grasshoppers and fleas: they have legs at the side of the body that push in separate planes rather than counter-rotating in a single plane, and so they can jump one-legged.
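As a rough check on those figures (my own back-of-envelope arithmetic, assuming the acceleration is roughly constant over the 2-millisecond push-off):

# Implied acceleration of a planthopper nymph's jump, from the figures above.
v = 3.9     # take-off speed in m/s ("almost 4 metres per second")
t = 2e-3    # time to take off, in seconds
g = 9.81    # standard gravity, m/s^2

a = v / t   # mean acceleration, assuming it is roughly constant
print(f"mean acceleration: {a:.0f} m/s^2, or about {a/g:.0f} g")
# -> roughly 2,000 m/s^2, i.e. around 200 g, which squares with the
#    "200 or 500 g" mentioned in Steven Vogel's comments below.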
Toothed gears have been used in rotating machinery for millennia: Aristotle and Archimedes described them, and they seem to have been used in ancient China much earlier. But like the wheel, this human invention seemed previously to have little value in the natural world.
Now, however, the gear joins the screw-and-nut as a mechanism whose complex shape has been mastered by evolution. In 2011 Alexander Riedel of the State Museum of Natural History in Karlsruhe, Germany and his colleagues reported a screw-and-nut system in the leg joints of a weevil [2].
Riedel considers this new work “significant and exciting”. It adds to the view that “most of the basic components of engineering have been developed in the natural world”, he says. Perhaps gears are not more common, he adds, because there are different ways to achieve the same goal. Honeybees, for example, “couple the movement of both wings to stabilize their flight by using pegs, not as moving gears but more like a Velcro fastener.”
Curiously, the gears are only found in the young nymph insects. When they undergo their final moult, sloughing off their exoskeleton for the last time to reach full adulthood, the gears disappear and instead the legs are synchronized by simpler frictional contact.
Burrows and Sutton aren’t yet sure why this is so, but it might be because of ease of repair. “If a gear breaks it can’t be replaced in adults”, says Burrows. “But in nymphs a repair can be made at the next of several moults.” He also explains that the larger and more rigid adult bodies might make the frictional method work better.
References
1. Burrows, M. & Sutton, G., Science 341, 1254–1256 (2013).
2. van de Kamp, T., Vagovic, P., Baumbach, T. & Riedel, A., Science 333, 52 (2011).
Some additional comments from biomimetics expert Steven Vogel of Duke University:
Interesting business. I can't think of another case of rotary gears at the moment. The closest thing that has yet come to mind is the zipper-like closure once described in (if I recall right) ctenophore mouths.
So many creatures jump without such an obvious mechanical coupling between paired legs that it can't be too difficult to keep from going awry. In any case, some compensation would often be necessary for irregularity in stiffness and level of substratum, etc. One does wonder about whether proprioceptive feedback can work at the short times that would necessarily be involved.
M. Scherge and S.N. Gorb, in their 2000 book, Biological Micro- and Nanotribology, do quite a thorough job. The upshot seems to be that what Burrows describes may be functionally novel, but from a structural point of view it represents (as is so typical of evolutionary innovations) no spectacular discontinuity. They talk about coxal rather than trochanteral segments, one unit more proximal, of course, for whatever that matters.
Synchronizing legs may be no absolute requirement. After all, surfaces are irregular in level, resilience, and so forth, and the legs never push directly at the center of gravity of the insect. So perfect synchrony won't necessarily give a straight trajectory anyway. And some post-launch adjustment may be possible, either inertially or aerodynamically. (Zero-angular-velocity turns, as righting cats, or tail-swinging, as perhaps in jumping rodents that have convergently evolved long tails with hair tufts at their ends.)
Maybe gears such as these come with an odd disability--they really louse things up if they mesh out of register. Or maybe they're tricky to molt and get back into register.
Filleting the gear bottoms is an interesting fillip. For us that's a relatively recent development, I gather. We've made gears for a long time--the Antikythera mechanism (100 BCE) is a bunch of 'em. Ones that take reasonable torque might be more recent, but are still old - I found some in Agostino Ramelli (1588), unfilleted. And the gears salvaged from a horse ferry (1830) scuttled on Lake Champlain were unfilleted. Odd that no one seemed to have noticed that filleted gears are much, much less prone to getting knocked off, particularly with brittle cast iron.
I take mild issue with Burrows's use of 'force' for acceleration. It's not only incorrect, but it tends to perpetuate the myth that insects are superstrong, instead of recognizing artifacts of scaling. I wrote an essay about the matter a few years ago; it became chapter 2 in "Glimpses of Creatures in Their Physical Worlds" (2009). The upshot is that we expect, and find, that acceleration scales (or bumps into a limit line) inversely with length - from jumping cougars down to shooting spores, five orders of magnitude. That keeps the stress on launch materials roughly constant, assuming roughly constant investment in the relevant guns. 200 or 500 g isn't out of line for their size. Good, but not earthshaking.
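A minimal sketch of the scaling argument Vogel is gesturing at (my reconstruction of the standard estimate, not his own working): if the energy available for a jump scales with muscle mass, E \propto m \propto L^3, then the take-off speed v = \sqrt{2E/m} is roughly independent of body length L. That speed must be reached over a leg-extension distance d \propto L, so

a \approx \frac{v^2}{2d} \propto \frac{1}{L},

while the stress on the launch apparatus,

\sigma \approx \frac{m a}{A} \propto \frac{L^3 \cdot L^{-1}}{L^2} = \text{constant},

stays roughly the same from cougars down to spores – hence hundreds of g in a millimetre-scale jumper being 'good, but not earthshaking'.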
I'm amused to learn of yet another case of something I once commented on (in "Cats' Paws and Catapults") when trying to inject a note of reality into the hype and hope of biomimetics: "The biomechanic usually recognizes nature's use of some neat device only when the engineer has already provided us with a model. Put another way, biomechanics still studies how, where, and why nature does what engineers do."
Friday, September 13, 2013
Storm in a test tube
Here’s the last of the La Recherche pieces on 'controversies': a short profile of Martin Fleischmann and cold fusion.
_____________________________________________________________
It would be unfair to Martin Fleischmann, who died last year aged 85, if he were remembered solely for his work on ‘cold fusion’ – the alleged discovery with his coworker Stanley Pons in 1989 that nuclear fusion of heavy hydrogen (deuterium), and the consequent release of energy, could be achieved with benchtop chemistry. Before making that controversial claim, Fleischmann enjoyed international repute for his work in electrochemistry. But to many scientists, cold fusion – now rejected by all but a handful of ‘true believers’ – cast doubt on his judgement and even his integrity.
Fleischmann was born in 1927 to a family with Jewish heritage in Czechoslovakia, and came to England as a young boy to escape the Nazis. He conducted his most celebrated work at the University of Southampton, where in 1974 he discovered a technique for monitoring chemical processes at surfaces. This and his later work on ultra-small electrodes made him a respected figure in electrochemistry.
After officially retiring, he conducted his work with Pons at the University of Utah in the late 1980s. They claimed that the electrolysis of lithium deuteroxide using palladium electrodes generated more energy than it consumed, presumably because of fusion of deuterium atoms packed densely into the ‘hydrogen sponge’ of the palladium metal. Their announcement of the results in a press conference – before publication of a paper, accompanied by very scanty evidence, and scooping a similar claim by a team at the nearby Brigham Young University – ensured that cold fusion was controversial from the outset. At the April 1989 meeting of the American Chemical Society, Fleischmann and Pons were welcomed like rock stars for apparently having achieved what physicists had been trying to do for decades: to liberate energy by nuclear fusion.
Things quickly fell apart. Genuine fusion should be accompanied by other telltale signatures, such as the formation of helium and the emission of neutrons with a particular energy. Validating the claim also depended on control experiments using ordinary hydrogen, which should show no effect, in place of deuterium. Pons and Fleischmann were evasive when asked whether they had done these checks, or what the results were, and the only paper they published on the subject offered no clarification. Several other groups soon reported ‘excess heat’ and other putative fusion signatures, but the claims were never repeatable, and several exhaustive studies failed to find convincing evidence for fusion. The affair ended badly, amidst lawsuits, recriminations and accusations of fraud.
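To see just how stark the missing-neutron problem was, here is a rough estimate (my own arithmetic, using textbook deuterium-fusion figures; the one-watt power level is chosen purely for illustration):

# Neutron flux that genuine D-D fusion would imply per watt of 'excess heat'.
MeV = 1.602e-13        # joules per MeV
E_avg = 3.65           # MeV released per D-D fusion, averaging the two branches
                       # (D+D -> He-3 + n releases ~3.27 MeV; D+D -> T + p ~4.03 MeV)
power = 1.0            # watts of claimed excess heat (illustrative)

reactions_per_s = power / (E_avg * MeV)
neutrons_per_s = 0.5 * reactions_per_s   # roughly half the fusions emit a 2.45 MeV neutron
print(f"{neutrons_per_s:.1e} neutrons per second per watt")
# -> close to 1e12 neutrons per second: a flux that would have been
#    unmistakable (and hazardous), yet nothing remotely like it was seen.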
Fleischmann always maintained that cold fusion was real, albeit perhaps not quite the phenomenon he’d originally thought. The pattern of marginal and irreproducible effects and ad hoc, shifting explanations fits Irving Langmuir’s template of “pathological science”. But even now, some cling to the alluring dream that cold fusion could be an energy source.
Thursday, September 12, 2013
Remembering the memory
Here’s my second piece for La Recherche’s special issue in August on scientific controversies – this one on the ‘memory of water’.
_____________________________________________________________
So far, “The Memory of Water” has been used as the title of a play, two movies, a collection of poems and a rock song. When the French immunologist Jacques Benveniste proposed in 1988 that water has a memory, he gave birth to a catchphrase with considerable cultural currency.
But Benveniste, who died in 2004, also ignited a scientific controversy that is still simmering a quarter of a century later. While most physicists and chemists consider Benveniste’s original idea – that water can retain a memory of substances it has dissolved, so that they can display chemical effects even when diluted to vanishing point – to be inconsistent with all we know about the properties of liquid water, Benveniste’s former colleagues and a handful of converts still believe there was something in it.
The claim would be provocative under any circumstances. But the dispute is all the fiercer because Benveniste’s ‘memory of water’ seems to offer an explanation for how homeopathy can work. This ‘alternative’ medical treatment, in which putative remedies are diluted until no active ingredients remain, has a huge following worldwide, and is particularly popular in France. But most medical practitioners consider it to be sheer superstition sustained by ignorance and the placebo effect.
Yet while there seems no good reason to believe that water has a ‘memory’, no one is quite sure how to account for the peculiar results Benveniste reported in 1988. This episode illustrates how hard it is for science to deal with deeply unorthodox findings, especially when they bear on wider cultural issues. In such cases an objective assessment of the data might not be sufficient, and perhaps not even possible, and the business of doing science is revealed for the human endeavour that it is, with all its ambiguities, flaws and pitfalls.
Rise and fall
Benveniste did not set out to ‘discover’ anything about water. As the head of Unit 200 of the French national medical research organization INSERM in Clamart on the edge of Paris, he was respected for his work on allergic responses. In 1987 he and his team spotted something strange while investigating the response of a type of human white blood cell, called basophils, to antibodies. Basophils patrol the bloodstream for foreign particles, and are triggered into releasing histamine – a response called degranulation – when they encounter allergy-inducing substances called allergens. Degranulation begins when allergens attach to antibodies called immunoglobulin E (IgE) anchored to the basophil surface. Benveniste’s team were using a ‘fake allergen’ to initiate this process: another antibody called anti-IgE, produced in non-human animals.
The researchers sometimes found that degranulation happened even when the concentration of anti-IgE was too low to be expected to have any effect. Benveniste and colleagues diluted a solution of anti-IgE gradually and monitored the amount of basophil degranulation. Basic chemistry suggests that the activity of anti-IgE should fall smoothly to zero as its concentration falls. But instead, the activity seemed to rise and fall almost rhythmically as the solution got more dilute. Even stranger, it went on behaving that way when the solution was so dilute that not a single anti-IgE molecule should remain.
That made no sense. How can molecules have an effect if they’re not there? Benveniste considered this finding striking enough to submit to Nature.
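A crude illustration of why that is so baffling (the starting concentration and sample volume below are my own illustrative choices, not Benveniste's actual figures):

# How many tenfold dilution steps before no antibody molecules should remain?
N_A = 6.022e23     # Avogadro's number, molecules per mole
c0 = 1e-6          # assumed starting concentration, mol/L (illustrative)
volume = 1e-3      # sample volume in litres (1 mL, illustrative)

molecules = c0 * N_A * volume
steps = 0
while molecules >= 1:
    molecules /= 10    # one tenfold dilution
    steps += 1
print(f"after about {steps} tenfold dilutions, fewer than one molecule is expected per sample")
# -> about 15 steps for these assumptions; the 'high dilutions' reported went
#    far beyond the point where any anti-IgE molecules should have survived.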
The editor of Nature at that time was John Maddox, who often displayed empathy for outsiders and a healthy scepticism of smug scientific consensus. Rather against the wishes of his staff, he insisted on sending the paper for peer review. The referees were puzzled but could find no obvious flaw in Benveniste’s experiments. After they had been replicated in independent laboratories in Canada, Italy and Israel, there seemed to be no option but to publish Benveniste’s paper, which Nature did in June 1988 [E. Davenas et al., Nature 333, 816 (1988)] – accompanied by an editorial from Maddox admitting that “There is no objective explanation of these observations.”
Hope for homeopathy?
The Nature paper caused pandemonium. It was clear at once that Benveniste’s results seemed to be offering scientific validation of homeopathy, the system of medicine introduced in the early nineteenth century by the German physician Samuel Hahnemann, in which the ‘active’ ingredients, already diluted to extinction, are said to get even more potent as they get more dilute.
Advocates swear that some clinical trials support the efficacy of homeopathy, but most medical experts consider there to be no solid evidence that it is effective beyond what would be expected from placebo effects. Even many homeopaths admit that there is no obvious scientific way to account for the effects they claim.
Not, at least, until the memory of water. “Homeopathy finds scientific support”, proclaimed Newsweek after Benveniste’s paper was published.
But how could water do this? The French team evidently had no idea. They suggested that “water could act as a ‘template’ for the [anti-IgE] molecule” – but this made no sense. For one thing, they evidently meant it the other way round: the antibody was acting as a template to imprint some kind of molecular structure on water, which could then act as a surrogate when the antibody was diluted away. But why should a negative imprint of the molecule act like the molecule itself? In any case, the properties of antibodies don’t just depend on their shape, but on the positions of particular chemical groups within the folded-up protein chain. And most of all, water is a liquid: its H2O molecules are constantly on the move in a molecular dance, sticking to one another by weak chemical bonds for typically just a trillionth of a second before separating to form new configurations. Any imprint would be washed away in an instant. If Benveniste and colleagues were right, shouldn’t water show the same behaviour as everything it has ever dissolved, making it sweet, salty, biologically active, toxic?
But data are data. Or are they? That’s what Maddox had begun to wonder. To get to the bottom of the affair, he launched an unprecedented investigation into INSERM Unit 200. Maddox travelled to Clamart to watch Benveniste’s team repeat their measurements before his eyes, accompanied by American biologist Walter Stewart, a ‘fraud-buster’ at the National Institutes of Health who had previously investigated allegations of misconduct in the laboratory of Nobel laureate David Baltimore, and stage magician James Randi, a debunker of pseudoscientific claims like those of the ‘psychic’ Uri Geller. “So now at last confirmation of what I have always suspected”, one correspondent wrote to Nature. “Papers for publication in Nature are refereed by the Editor, a magician and his rabbit.”
The Nature team insisted that the researchers carry out a suite of double-blind experiments designed to rule out self-deception or trickery. Their conclusions were damning: “The anti-IgE at conventional dilutions caused degranulation, but at ‘high dilution’ there was no effect”, the investigators wrote [J. Maddox et al., Nature 334, 287 (1988)]. Some runs did seem to show high-dilution activity, but it was neither repeatable nor periodic as dilution increased.
Attempts by other labs to reproduce the results also failed to support Benveniste’s claims. Although occasionally they did see strange high-dilution effects, it is not at all uncommon to find anomalous results in experiments on biological systems, which are notoriously messy and sensitive to impurities or small changes in conditions. The ‘high-dilution’ claims meet all the criteria for what the American chemist Irving Langmuir called ‘pathological science’ in 1953. For Langmuir, this was the science of “things that aren’t so”: phenomena that are illusory. Langmuir adduced several distinguishing features: the effects always operate at the margins of detectability, for example, and their supporters generally meet criticisms with ad hoc excuses dreamed up on the spur of the moment. His criteria apply equally to some other modern scientific controversies, notably the claim by Russian scientists in the late 1960s to have discovered a new, waxy form of water called polywater, and the claims of ‘cold nuclear fusion’ achieved using benchtop chemistry by Martin Fleischmann and Stanley Pons in Utah in 1989 [coming up next!].
Disappearing act
After Maddox’s investigation, most scientists dismissed the memory of water as a chimera. But Benveniste never recanted. He was sacked from INSERM after ignoring instructions not to pursue the high-dilution work, but he continued it with private funds, having attracted something of a cult following. These studies led him to conclude that water acts as a “vehicle for [biological] information”, carrying the signal that somehow encodes the biomolecule’s activity. Benveniste eventually decided that water can be “programmed” to behave like any biological agent – proteins, bacteria, viruses – by electromagnetic signals that can be recorded and sent down telephone wires. In 1997 he set up a private company, DigiBio, to promote this field of “digital biology”, and it is rumoured that the US Department of Defense funded research on this putative ‘remote transmission’ process.
Such studies continue after his death, and have recently acquired a high-profile supporter: the immunologist Luc Montagnier, who was awarded the 2008 Nobel prize for the co-discovery of the AIDS virus HIV. Montagnier believes that the DNA molecule itself can act as both a transmitter and a receiver of ultralow-frequency electromagnetic signals that can broadcast biological effects. He believes that the signals emitted by pathogen DNA could be used to detect infection. He maintains that these emissions do not depend on the amount of DNA in suspensions of pathogens, and are sometimes detectable at very high dilution. They might originate, he says, from quantum effects in the water surrounding the DNA and other biological structures, according to a controversial theory that has also been invoked to explain Benveniste’s experiments [E. Del Giudice et al. Phys. Rev. Lett. 61, 1085 (1988)].
“Benveniste was rejected by everybody, because he was too far ahead”, Montagnier has said [Science 330, 1732 (2010)]. “I think he was mostly right but the problem was that his results weren't 100% reproducible.” In 2010 Montagnier began research on high-dilution DNA at a new research institute at Jiaotong University in Shanghai. “It's not pseudoscience, it's not quackery”, he insists. “These are real phenomena which deserve further study.” He is currently the head of the World Foundation for AIDS Research and Prevention in Paris, but his unorthodox views on water’s ‘memory’ have prompted some leading researchers to question his suitability to head AIDS projects.
Meanwhile, the idea that the undoubtedly unusual molecular structure of water – a source of continued controversy in its own right [see e.g. here and here] – might contrive to produce high-dilution effects still finds a few supporters among physical chemists. Homeopaths have never relinquished the hope that the idea might grant them the scientific vindication they crave: a special issue of the journal Homeopathy in 2007 was devoted to scientific papers purporting to explore water’s ‘memory’, although none provided either clear evidence for its existence or a plausible explanation for its mechanism [see here].
Such efforts remain firmly at the fringes of science. But what must we make of Benveniste’s claims? While inevitably the suspicion of fraud clouds such events, my own view – I joined Nature just after the ‘memory of water’ paper was published, and spoke to Benveniste shortly before his death – is that he fully believed what he said. A charming and charismatic man, he was convinced that he had been condemned by the ‘scientific priesthood’ for heresy. The irony is that he never recognized how his nemesis Maddox shared his maverick inclinations.
The “Galileo” rhetoric that Benveniste deployed is common from those who feel they have been ‘outlawed’ for their controversial scientific claims. But Benveniste never seemed to know how to make his results convincing, other than to pile up more of them. Faced with a puzzling phenomenon, the scientist’s instinct should be to break it down, to seek it in simpler systems that are more easily understood and controlled, and to pinpoint where the anomalies arise. In contrast, Benveniste studied ever more complicated biological systems – bacteria, plants, guinea pigs – until neither he nor anyone else could really tell what was going on. The last talk I saw his team deliver, in 2004, was a riot of graphs and numbers presented in rapid succession, as though any wild idea could be kept in the air so long as no one could pause to examine it.
This, perhaps, is the lesson of the memory of water: when you have a truly weird and remarkable result in science, your first duty is to try to show not why it must be true, but why it cannot be.
The antimony wars
The August issue of La Recherche has the theme of ‘controversies in science’. I wrote several pieces for it – this is the first, on the battle between the Galenists and Paracelsians in the French court in the early 17th century.
_____________________________________________
“I am different”, the sixteenth-century Swiss alchemist and physician Paracelsus once wrote, adding “let this not upset you”. But he upset almost everyone who came into contact with him and his ideas, and his vision of science and medicine continued to spark dispute for at least a hundred years after his death in 1541. For Paracelsus wanted to pull up by its roots the entire system of medicine and natural philosophy that originated with the ancient Greeks – particularly Aristotle – and replace it with a system that seemed to many to have more in common with the practices of mountebanks and peasant healers.
Paracelsus – whose splendid full name was Philip Theophrastus Aureolus Bombastus von Hohenheim – had a haphazard career as a doctor, mostly in the German territories but also in Italy, France and, if his own accounts can be believed, as far afield as Sweden, Russia and Egypt. Born in the Swiss village of Einsiedeln, near Zurich, into a noble Swabian family fallen on hard times, he trained in medicine in the German universities and Ferrara in Italy before wandering throughout Europe offering his services. He attended kings and treated peasants, sometimes with a well-filled purse but more often penniless. Time and again his argumentative nature ruined his chances of a stable position: at one time town physician of Basle, he made himself so unpopular with the university faculty and the authorities that he had to flee under cover of darkness to avoid imprisonment.
Paracelsus could be said to have conceived of a Theory of Everything: a system that explained medicine and the human body, alchemy, astrology, religion and the fundamental structure of the cosmos. He provided one of the first versions of what science historians now call the ‘chemical philosophy’: a theory that makes chemical transformation the analogy for all processes. For Paracelsus, every natural phenomenon was essentially an alchemical process. The rising of moisture from the earth and its falling back as rain was the equivalent of distillation and condensation in the alchemist’s flask. Growth of plants and animals from seeds was a kind of alchemy too, and in fact even the Biblical creation of the world was basically an alchemical process: a separation of earth from water. This philosophy seems highly fanciful now, but it was nonetheless rational and mechanistic: it could ascribe natural and comprehensible causes to events.
Although Paracelsus was one of the most influential advocates of these ideas in the early Renaissance, they weren’t entirely his invention (although he characteristically exaggerated his originality). The chemical philosophy was rooted in the tradition known as Neoplatonism, derived from the teachings of Plato but shaped into a kind of mystical philosophy by the third-century Greek philosopher Plotinus. One of the central ideas of Neoplatonism is the correspondence between the macrocosm and the microcosm, so that events that occurred in the heavens and in the natural world have direct analogies within the human body – or with the processes conducted in an alchemist’s flasks and retorts. This correspondence provided the theoretical basis for a belief in astrology, although Paracelsus denied that our destiny is absolutely fixed by our horoscope. He proposed that the macro-micro correspondence led to ‘signatures’ in nature which revealed, for example, the medical uses of plants: those shaped like a kidney could treat renal complaints. These signatures were signs left by God to guide the physician towards the proper use of herbal medicines. They exemplify the symbolic character of the chemical philosophy, which was based on such analogies of form and appearance.
What the chemical philosophy implied for medicine conflicted with the tradition taught to physicians at the universities, which drew on ideas from antiquity, particularly those attributed to the Greek philosopher Hippocrates and the Roman doctor Galen. This classical tradition asserted that our health is governed by four bodily fluids called humours: blood, phlegm, and black and yellow bile. Illness results from an imbalance of the humours, and the doctor’s task was to restore this balance – by drugs, diet or, commonly, by blood-letting.
Academic doctors in the Middle Ages adopted the humoral system as the theoretical basis of their work, but its connection to their working practices was generally rather tenuous. Often they prescribed drugs, made from herbs or minerals and sold by medieval pharmacists called apothecaries. Doctors charged high fees for their services, which only merchants and nobles could afford. They were eminent in society, and often dressed lavishly.
Paracelsus despised all of this. He did not share the doctors’ disdain of manual work, and he hated how they paraded their wealth. Worse still, he considered that the whole foundation of classical medicine, with its doctrine of humours, was mistaken. When he discovered at university that becoming a doctor of medicine was a matter of simply learning and memorizing the books of Galen and Avicenna, he was outraged. He insisted that it was only through experience, not through book-learning, that one could become a true healer.
By bringing an alchemical perspective to the study of life and medicine, Paracelsus helped to unify the sciences. Previously, alchemy had been about the transmutation of metals. But for Paracelsus, its principal purpose was to make medicines. Just as alchemists could mimic the natural transmutation of metals, so could they use alchemical medicines to bring about the natural process of healing. This was possible, in fact, because human biology was itself a kind of alchemy. In one of his most fertile ideas, Paracelsus asserted that there is an alchemist inside each one of us, a kind of principle that he called the archeus, which separates the good from the bad in the food and drink that we ingest. The archeus uses the good matter to make flesh and blood, and the bad is expelled as waste. Paracelsus devised a kind of bio-alchemy, the precursor to modern biochemistry, which indeed now regards nature as a superb chemist that takes molecules apart and puts them back together as the constituents of our cells.
Most of all, Paracelsus argued that medicine should involve the use of specific chemical drugs to treat specific ailments: it was a system of chemotherapy, which had little space for the general-purpose blood-letting treatments prescribed by the humoral theory. This Paracelsian, chemical approach to healing became known in the late sixteenth century as ‘iatrochemistry’, meaning the chemistry of medicine.
Paracelsus was able to publish relatively little of his writings while he was alive, but from around 1560 several publishers scoured Europe for his manuscripts and published compendia of Paracelsian medicine. Once in print, his ideas attracted adherents, and by the last decades of the century Paracelsian medicine was exciting furious debate between traditionalists and progressives. Iatrochemistry found a fairly receptive audience in England, but the disputes it provoked in France were bitter, especially among the conservative medical faculty of the University of Paris.
That differing reception was partly motivated by religion. Paracelsus belonged to no creed, but he was widely identified with the Reformation – he even compared himself to Martin Luther – and so his views found more sympathy from Protestants than Catholics. The religious tensions were especially acute in France when the Huguenot prince of Navarre was crowned Henri IV in 1589. Fears that Henri would create a Huguenot court seemed confirmed when the new king appointed the Swiss doctor Jean Ribit as his premier médecin, and summoned also two other Huguenot doctors with Paracelsian ideas, the Gascon Joseph Duchesne and another Genevan, Theodore Turquet de Mayerne.
In 1603 Jean Riolan, the head of the Paris medical faculty, published an attack on Mayerne and Duchesne, asserting the supremacy of the medicine of Hippocrates and Galen. Although these two Paracelsians sought to defend themselves, they only secured a retraction of this damning charge by agreeing to practice medicine according to the rules of the classical authorities.
But the Paracelsians struck back. Around 1604, Ribit and Mayerne helped a fellow Huguenot and iatrochemist named Jean Béguin set up a pharmaceutical laboratory in Paris to promote chemical medicine. In 1610 Béguin published a textbook laying out the principles of iatrochemistry in a clear, straightforward manner free from the convoluted style and fanciful jargon used by Paracelsus. When this Latin text was translated into French five years later as Les elemens de chymie, it served much the same propagandizing role as Antoine Lavoisier’s Traité élémentaire de chimie did for his own system of chemistry at the end of the eighteenth century.
But the war between the Galenists and the Paracelsians raged well into the seventeenth century. Things looked bad for the radicals when Henri IV, who had been prevented in 1609 from making Mayerne his new premier médecin, was assassinated the following year. Lacking royal protection, Mayerne took up an earlier offer from James I of England and fled there, where he flourished.
Yet when Riolan’s equally conservative son (also Jean) drew up plans for a royal herb garden in 1618, he did not anticipate that this institution would finally be established 20 years later as the Jardin du Roi by the iatrochemist Gui de la Brosse. In 1647 the Jardin appointed the first French professor of chemistry, a Scotsman named William Davidson, who was an ardent Paracelsian.
Most offensive of all to the Paris medical faculty was Davidson’s support for the medical use of antimony. Ever since the start of the century, Paracelsians and Galenists had been split over whether antimony was a cure or poison (it is in fact quite toxic). Davidson’s claim that “there is no more lofty medicine under heaven” so enraged the faculty that they hounded him from his post in 1651, when the younger Riolan republished his father’s condemnation of Duchesne and Mayerne.
Yet it was all too late for the Galenists, for the Jardin du Roi, which became one of the most influential institutions in French chemistry and medicine, continued to support iatrochemistry. The professors there produced a string of successful chemical textbooks, most famously that of Nicolas Lemery, called Cours de chimie, in 1675. These men were sober, practical individuals who helped to strip iatrochemistry of its Paracelsian fantasies and outlandish jargon. They placed chemical medicine, and chemistry itself, on a sound footing, paving the way to Lavoisier’s triumphs.
What was this long and bitter dispute really about? Partly, of course, it was a power struggle: over who had the king’s ear, but also who should dictate the practice (and thus reap the financial rewards) of medicine. But it would be too easy to cast Riolan and his colleagues as outdated reactionaries. After all, they were right about antimony (if for the wrong reasons) – and they were right too to criticize some of the wild excesses of Paracelsus’s ideas. Their opposition forced the iatrochemists to prune those ideas, sorting the good from the bad. Besides, since no kind of medicine was terribly effective in those days, there wasn’t much empirical justification for throwing out the old ways. The dispute is a reminder that introducing new scientific ideas may depend as much on the power of good rhetoric as on the evidence itself. And it shows that in the end a good argument can leave science healthier.
_____________________________________________
“I am different”, the sixteenth-century Swiss alchemist and physician Paracelsus once wrote, adding “let this not upset you”. But he upset almost everyone who came into contact with him and his ideas, and his vision of science and medicine continued to spark dispute for at least a hundred years after his death in 1541. For Paracelsus wanted to pull up by its roots the entire system of medicine and natural philosophy that originated with the ancient Greeks – particularly Aristotle – and replace it with a system that seemed to many to have more in common with the practices of mountebanks and peasant healers.
Paracelsus – whose splendid full name was Philip Theophrastus Aureolus Bombastus von Hohenheim – had a haphazard career as a doctor, mostly in the German territories but also in Italy, France and, if his own accounts can be believed, as far afield as Sweden, Russia and Egypt. Born in the Swiss village of Einsiedeln, near Zurich, into a noble Swabian family fallen on hard times, he trained in medicine in the German universities and Ferrara in Italy before wandering throughout Europe offering his services. He attended kings and treated peasants, sometimes with a well-filled purse but more often penniless. Time and again his argumentative nature ruined his chances of a stable position: at one time town physician of Basle, he made himself so unpopular with the university faculty and the authorities that he had to flee under cover of darkness to avoid imprisonment.
Paracelsus could be said to have conceived of a Theory of Everything: a system that explained medicine and the human body, alchemy, astrology, religion and the fundamental structure of the cosmos. He provided one of the first versions of what science historians now call the ‘chemical philosophy’: a theory that makes chemical transformation the analogy for all processes. For Paracelsus, every natural phenomenon was essentially an alchemical process. The rising of moisture from the earth and its falling back as rain was the equivalent of distillation and condensation in the alchemist’s flask. Growth of plants and animals from seeds was a kind of alchemy too, and in fact even the Biblical creation of the world was basically an alchemical process: a separation of earth from water. This philosophy seems highly fanciful now, but it was nonetheless rational and mechanistic: it could ascribe natural and comprehensible causes to events.
Although Paracelsus was one of the most influential advocates of these ideas in the early Renaissance, they weren’t entirely his invention (although he characteristically exaggerated his originality). The chemical philosophy was rooted in the tradition known as Neoplatonism, derived from the teachings of Plato but shaped into a kind of mystical philosophy by the third-century Greek philosopher Plotinus. One of the central ideas of Neoplatonism is the correspondence between the macrocosm and the microcosm, so that events that occurred in the heavens and in the natural world have direct analogies within the human body – or with the processes conducted in an alchemist’s flasks and retorts. This correspondence provided the theoretical basis for a belief in astrology, although Paracelsus denied that our destiny is absolutely fixed by our horoscope. He proposed that the macro-micro correspondence led to ‘signatures’ in nature which revealed, for example, the medical uses of plants: those shaped like a kidney could treat renal complaints. These signatures were signs left by God to guide the physician towards the proper use of herbal medicines. They exemplify the symbolic character of the chemical philosophy, which was based on such analogies of form and appearance.
What the chemical philosophy implied for medicine conflicted with the tradition taught to physicians at the universities, which drew on ideas from antiquity, particularly those attributed to the Greek philosopher Hippocrates and the Roman doctor Galen. This classical tradition asserted that our health is governed by four bodily fluids called humours: blood, phlegm, and black and yellow bile. Illness results from an imbalance of the humours, and the doctor’s task was to restore this balance – by drugs, diet or, commonly, by blood-letting.
Academic doctors in the Middle Ages adopted the humoral system as the theoretical basis of their work, but its connection to their working practices was generally rather tenuous. Often they prescribed drugs, made from herbs or minerals and sold by medieval pharmacists called apothecaries. Doctors charged high fees for their services, which only merchants and nobles could afford. They were eminent in society, and often dressed lavishly.
Paracelsus despised all of this. He did not share the doctors’ disdain of manual work, and he hated how they paraded their wealth. Worse still, he considered that the whole foundation of classical medicine, with its doctrine of humours, was mistaken. When he discovered at university that becoming a doctor of medicine was a matter of simply learning and memorizing the books of Galen and Avicenna, he was outraged. He insisted that it was only through experience, not through book-learning, that one could become a true healer.
By bringing an alchemical perspective to the study of life and medicine, Paracelsus helped to unify the sciences. Previously, alchemy had been about the transmutation of metals. But for Paracelsus, its principal purpose was to make medicines. Just as alchemists could mimic the natural transmutation of metals, so could they use alchemical medicines to bring about the natural process of healing. This was possible, in fact, because human biology was itself a kind of alchemy. In one of his most fertile ideas, Paracelsus asserted that there is an alchemist inside each one of us, a kind of principle that he called the archeus, which separates the good from the bad in the food and drink that we ingest. The archeus uses the good matter to make flesh and blood, and the bad is expelled as waste. Paracelsus devised a kind of bio-alchemy, the precursor to modern biochemistry, which indeed now regards nature as a superb chemist that takes molecules apart and puts them back together as the constituents of our cells.
Most of all, Paracelsus argued that medicine should involve the use of specific chemical drugs to treat specific ailments: it was a system of chemotherapy, which had little space for the general-purpose blood-letting treatments prescribed by the humoral theory. This Paracelsian, chemical approach to healing became known in the late sixteenth century as ‘iatrochemistry’, meaning the chemistry of medicine.
Paracelsus was able to publish relatively little of his writings while he was alive, but from around 1560 several publishers scoured Europe for his manuscripts and published compendia of Paracelsian medicine. Once in print, his ideas attracted adherents, and by the last decades of the century Paracelsian medicine was exciting furious debate between traditionalists and progressives. Iatrochemistry found a fairly receptive audience in England, but the disputes it provoked in France were bitter, especially among the conservative medical faculty of the University of Paris.
That differing reception was partly motivated by religion. Paracelsus belonged to no creed, but he was widely identified with the Reformation – he even compared himself to Martin Luther – and so his views found more sympathy from Protestants than Catholics. The religious tensions were especially acute in France when the Huguenot Henri of Navarre became King Henri IV in 1589. Fears that Henri would create a Huguenot court seemed confirmed when the new king appointed the Swiss doctor Jean Ribit as his premier médecin, and also summoned two other Huguenot doctors with Paracelsian ideas, the Gascon Joseph Duchesne and another Genevan, Theodore Turquet de Mayerne.
In 1603 Jean Riolan, the head of the Paris medical faculty, published an attack on Mayerne and Duchesne, asserting the supremacy of the medicine of Hippocrates and Galen. Although the two Paracelsians sought to defend themselves, they secured a retraction of this damning charge only by agreeing to practice medicine according to the rules of the classical authorities.
But the Paracelsians struck back. Around 1604, Ribit and Mayerne helped a fellow Huguenot and iatrochemist named Jean Béguin set up a pharmaceutical laboratory in Paris to promote chemical medicine. In 1610 Béguin published a textbook laying out the principles of iatrochemistry in a clear, straightforward manner, free from the convoluted style and fanciful jargon used by Paracelsus. When this Latin text was translated into French five years later as Les elemens de chymie, it served much the same propagandizing role as Antoine Lavoisier’s Traité élémentaire de chimie did for his own system of chemistry at the end of the eighteenth century.
But the war between the Galenists and the Paracelsians raged well into the seventeenth century. Things looked bad for the radicals when Henri IV, who had been prevented in 1609 from making Mayerne his new premier médecin, was assassinated the following year. Lacking royal protection, Mayerne took up an earlier offer from James I of England and fled there, where he flourished.
Yet when Riolan’s equally conservative son (also Jean) drew up plans for a royal herb garden in 1618, he did not anticipate that this institution would finally be established 20 years later as the Jardin du Roi by the iatrochemist Gui de la Brosse. In 1647 the Jardin appointed the first French professor of chemistry, a Scotsman named William Davidson, who was an ardent Paracelsian.
Most offensive of all to the Paris medical faculty was Davidson’s support for the medical use of antimony. Ever since the start of the century, Paracelsians and Galenists had been split over whether antimony was a cure or a poison (it is in fact quite toxic). Davidson’s claim that “there is no more lofty medicine under heaven” so enraged the faculty that they hounded him from his post in 1651, when the younger Riolan republished his father’s condemnation of Duchesne and Mayerne.
Yet it was all too late for the Galenists, for the Jardin du Roi, which became one of the most influential institutions in French chemistry and medicine, continued to support iatrochemistry. The professors there produced a string of successful chemical textbooks, most famously Nicolas Lemery’s Cours de chimie of 1675. These men were sober, practical individuals who helped to strip iatrochemistry of its Paracelsian fantasies and outlandish jargon. They placed chemical medicine, and chemistry itself, on a sound footing, paving the way to Lavoisier’s triumphs.
What was this long and bitter dispute really about? Partly, of course, it was a power struggle: over who had the king’s ear, but also who should dictate the practice (and thus reap the financial rewards) of medicine. But it would be too easy to cast Riolan and his colleagues as outdated reactionaries. After all, they were right about antimony (if for the wrong reasons) – and they were right too to criticize some of the wild excesses of Paracelsus’s ideas. Their opposition forced the iatrochemists to prune those ideas, sorting the good from the bad. Besides, since no kind of medicine was terribly effective in those days, there wasn’t much empirical justification for throwing out the old ways. The dispute is a reminder that introducing new scientific ideas may depend as much on the power of good rhetoric as on the evidence itself. And it shows that in the end a good argument can leave science healthier.