Here’s my latest piece for BBC Future.
___________________________________________________________
How cold is it in space? That question is sure to prompt the geeks among us to pipe up with “2.7 kelvin”, which is the temperature of the uniform background radiation or ‘afterglow’ from the Big Bang. (Kelvins (K) here are degrees above absolute zero, with a degree on the kelvin scale being almost the same size as one on the centigrade scale.)
But hang on. Evidently you don’t hit 2.7 K the moment you step outside the Earth’s atmosphere. Heat is streaming from the Sun to warm the Earth, and it will also warm other objects exposed to its rays. Take the Moon, which has virtually no atmosphere to complicate things. On the sunlit side the Moon is hotter than the Sahara – it can top 120 °C. But on the dark side it can drop to around minus 170 °C.
So just how cold can it go in our own cosmic neighbourhood? This isn’t an idle question if you’re thinking of sending spacecraft up there (let alone people). It’s particularly pertinent if you’re doing that precisely because space is cold, in order to do experiments in low-temperature physics.
There’s no need for that just to keep the apparatus cold – you only need liquid-helium coolant to get below 4 K in the lab, and some experiments have come to within just a few billionths of a kelvin of absolute zero. But some low-temperature experiments are being planned that also demand zero gravity. You can get that on Earth for a short time in freefall air flights, but for longer than a few seconds you need to go into space.
One such experiment, called MAQRO, hopes to test fundamental features of quantum theory and perhaps to search for subtle effects in a quantum picture of gravity – something that physicists can so far see only in the haziest terms. The scientists behind MAQRO have now worked out whether it will in fact be possible to get cold enough, on a spacecraft carrying the equipment, for the tests to work.
MAQRO was proposed last year by Rainer Kaltenbaek and Markus Aspelmeyer of the University of Vienna and their collaborators [R. Kaltenbaek et al., Experimental Astronomy 34, 123 (2012)]. The experiment would study one of the most profound puzzles in quantum theory: how or why do the rules of quantum physics, which govern fundamental particles like electrons and atoms, give way to the ‘classical’ physics of the everyday world? Why do quantum particles sometimes behave like waves whereas footballs don’t?
No one fully understands this so-called quantum-to-classical transition. But one of the favourite explanations invokes an idea called decoherence, which means that in effect the quantum behaviour of a system gets jumbled and ultimately erased because of the disruptive effects of the environment. These effects become stronger the more particles the system contains, because then there are more options for the environment to interfere. For objects large enough to see, containing countless trillions of atoms, decoherence happens in an instant, washing out quantum effects in favour of classical behaviour.
In this picture, it should be possible to preserve ‘quantum-ness’ in any system, no matter how big, if you could isolate it perfectly from its environment. In principle, even footballs would then show wave-particle duality and could exist in two states, or two places, at once. But some theories, as yet still speculative and untested, insist that something else will prevent this weird behaviour in large, massive objects, perhaps because of effects that would disclose something about a still elusive quantum theory of gravity.
So the stakes for MAQRO could be big. The experimental apparatus itself wouldn’t be too exotic. Kaltenbaek and colleagues propose to use laser beams to place a ‘big’ particle (about a tenth of a micrometre across) in two quantum states at once, called a superposition, and then to probe with the lasers how decoherence destroys this superposition (or not). The apparatus would have to be very cold because, as with most quantum effects, heat would disrupt a delicate superposition. And performing the experiment in zero gravity on a spacecraft could show whether gravity does indeed play a role in the quantum-to-classical transition. Putting it all on a spacecraft would be about as close to perfect isolation from the environment as one can imagine.
But now Kaltenbaek and colleagues, in collaboration with researchers at the leading European space-technology company Astrium Satellites in Friedrichshafen, Germany, have worked out just how cold the apparatus could really get. They imagine sticking a ‘bench’ with all the experimental components on the back of a disk-shaped spacecraft, with the disk, and several further layers of thermal insulation, shielding it from the Sun. So while the main body of the spacecraft would be kept at about 300 K (27 °C), which its operating equipment would require, the bench could be much colder.
But how much? The researchers calculate that, with three concentric thermal shields between the main disk of the spacecraft and the bench, black on their front surface to optimize radiation of heat and gold-plated on the reverse to minimize heating from the shield below, it should be possible to get the temperature of the bench itself down to 27 K. Much of the warming would come through the struts holding the bench and shields to the main disk.
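To get a feel for why passive shielding bottoms out at a few tens of kelvin, here is a rough back-of-the-envelope sketch of the radiative balance involved. It is purely illustrative – the bench area, emissivity and heat-leak figures below are my own guesses, not numbers from the Astrium study – but it shows how weakly the equilibrium temperature responds to plugging the leaks.

```python
# Back-of-envelope sketch (not from the MAQRO paper): the equilibrium temperature of
# a shielded bench that radiates to deep space while receiving a residual heat leak
# through its support struts and from the warm spacecraft below.
# All numbers are illustrative assumptions, chosen only to show the scaling.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temperature(heat_leak_w, area_m2, emissivity):
    """Temperature at which radiated power (eps * sigma * A * T^4) balances the leak."""
    return (heat_leak_w / (emissivity * SIGMA * area_m2)) ** 0.25

# Illustrative values: a 0.5 m^2 bench with emissivity 0.8.
for q in (10.0, 1.0, 0.1, 0.01):  # watts leaking past the shields
    t = equilibrium_temperature(q, area_m2=0.5, emissivity=0.8)
    print(f"heat leak {q:6.2f} W  ->  equilibrium temperature ~{t:5.1f} K")

# Cutting the leak by a factor of ten lowers the temperature only by 10^(1/4) ~ 1.8,
# which is why purely passive shielding struggles to get much below a few tens of kelvin.
```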
That’s not really cold enough for the MAQRO experiment to work well. But the test particle itself would be held in free space above the bench, and this would be colder. On its own it could reach 8 K, but with all the other experimental components around it, all radiating heat, it reaches 16 K. This, they calculate, would be enough to test the decoherence rates predicted for all the major theories which currently propose that intrinsic mass (perhaps via gravity) will enforce decoherence in a large object. In other words, MAQRO should be cold enough to spot if these models are wrong.
Could it discriminate between any theories that aren’t ruled out? That’s another matter, which remains to be seen. But simply knowing that size matters in quantum mechanics would be a major finding. The bigger question, of course, is whether anyone will consider MAQRO – a cheap experiment as space science goes – worth a shot.
Reference: G. Hechenblaikner et al., preprint at http://www.arxiv.org/abs/1309.3234
Friday, September 27, 2013
Thursday, September 19, 2013
Fearful symmetry
So the plan is that I’ll be writing a regular (ideally weekly) blog piece for Prospect from now on. Here is the current one, stemming from a gig last night that was a lot of fun.
_________________________________________________________
Roger Penrose makes his own rules. He is one of the most distinguished mathematical physicists in the world, but also (this doesn’t necessarily follow) one of the most inventive thinkers. It was his work on the theory of general relativity in the 1960s, especially on how the gravity of collapsing stars can produce black-hole ‘singularities’ in spacetime, that set Stephen Hawking on a course to rewrite black-hole physics. That research made Penrose’s name in science, but his mind ranges much further. In The Emperor’s New Mind (1989) he proposed that the human mind can handle problems that are formally ‘non-computable’, meaning that any computer trying to solve them by executing a set of logical rules (as all computers do) would chunter away forever without coming to a conclusion. This property of the mind, Penrose said, might stem from the brain’s use of some sort of quantum-mechanical principle, perhaps involving quantum gravity. In collaboration with anaesthetist Stuart Hameroff, he suggested in Shadows of the Mind (1994) what that principle might be, involving quantum behaviour in protein filaments called microtubules in neurons. Neuroscientists scoffed, glazed over, or muttered “Oh, physicists…”
So when I introduced a talk by Penrose this week at the Royal Institution, I commented that he is known for ideas that most others wouldn’t even imagine, let alone dare voice. I didn’t, however, expect to encounter some new ones that evening.
Penrose was speaking about the discovery for which he is perhaps best known among the public: the so-called Penrose tiling, a pair of rhombus-shaped tiles that can be used to tile a flat surface forever without the pattern ever repeating. It turns out that this pattern is peppered with objects that have five- or tenfold symmetry: like a pentagon, they can be superimposed on themselves when rotated a fifth of a full turn. That is very strange, because fivefold symmetry is known to be rigorously forbidden for any regularly repeating (periodic) two-dimensional tiling. (Try it with ordinary pentagons and you quickly find that you get lots of gaps.) The Penrose tiling doesn’t have this ‘forbidden symmetry’ in a perfect form, but it almost does.
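For the numerically inclined, the non-repetition is tied to the golden ratio. Under the standard ‘inflation’ rule for the two rhombi – each fat tile begets two fat and one thin, each thin begets one fat and one thin – the proportion of fat to thin tiles tends to the golden ratio, an irrational number, which is one way of seeing why the pattern can never settle into a periodic arrangement. A toy calculation (mine, not anything Penrose showed on the night) makes the point:

```python
# Count the two Penrose rhombi under repeated inflation, starting from one fat tile.
# Inflation rule (standard for the rhombus tiling): fat -> 2 fat + 1 thin, thin -> 1 fat + 1 thin.
fat, thin = 1, 0
for step in range(1, 13):
    fat, thin = 2 * fat + thin, fat + thin
    print(f"step {step:2d}: fat={fat:7d}  thin={thin:7d}  ratio={fat / thin:.6f}")

print("golden ratio:", (1 + 5 ** 0.5) / 2)  # ~1.618034 - irrational, so no periodic repeat
```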
These tilings – there are other shapes that have an equivalent result – are strikingly beautiful, with a mixture of regularity and disorder that is somehow pleasing. This is doubtless why, as Penrose explained, many architects worldwide have made use of them. But they also have a deeper significance. After Penrose described the tiling in the 1970s, the crystallographer Alan Mackay – one of the unsung polymathic savants of British science – showed in 1981 that if you imagine putting atoms at the corners of the tiles and bouncing X-rays off them (the standard technique of X-ray crystallography for deducing the atomic structures of crystals) you can get a pattern of reflections that looks for all the world like that of a perfect crystal with the forbidden five- and tenfold symmetries. Four years later, such a material (a metal alloy) was found in the real world by the Israeli materials scientist Daniel Shechtman and his coworkers. This was dubbed a quasicrystal, and the discovery won Shechtman the Nobel prize in Chemistry in 2011. Penrose tilings can explain how quasicrystals attain their ‘impossible’ structure.
In his talk Penrose explained the richness of these tilings, manipulating transparencies (remember them?) like a prestidigitator in ways that elicited several gasps of delight as new patterns suddenly came into view. But it was in the Q&A session that we got a glimpse of Penrose’s wildly lateral thinking. Assembling a tiling (and thus a quasicrystal) is a very delicate business, because if you add a tile in the wrong place or orientation, somewhere further down the line the pattern fouls up. But how could atoms in a quasicrystal know that they have to come together in a certain way here to avoid a problem right over there? Maybe, Penrose said, they make use of the bizarre quantum-mechanical property called entanglement, which foxed Einstein, in which two particles can affect one another instantaneously over any distance. Crikey.
In Penrose’s mind it all links up: quasicrystals, non-computable problems, the universe… You can use these tiles, he said, to represent the rules of how things interact in a hypothetical universe in which everything is then non-computable: the rules are well defined, but you can never use them to predict what is going to happen until it actually happens.
But my favourite anecdote had Penrose inspecting a new Penrose tiling being laid out on the concourse of some university. Looking it over, he felt uneasy. Eventually he saw why: the builders, seeing an empty space at the edge of the tiling, had stuck another tile there that didn’t respect the proper rules for their assembly. No one else would have noticed, but Penrose saw that what it meant was that “the tiling would go wrong somewhere in the middle of the lawn”. Not that it was ever going to reach that far – but it was a flaw in that hypothetical continuation, that imaginary universe, and for a mathematician that wouldn’t do. The tile had to go.
Tuesday, September 17, 2013
Quantum theory reloaded
I have finally published a long-gestated piece in Nature (501, p154; 12 September) on quantum reconstructions. It has been one of the most interesting features I can remember working on, but was necessarily reduced drastically from the unwieldy first draft. Here (long post alert) is an intermediate version that contains a fair bit more than the final article could accommodate.
__________________________________________________________
Quantum theory works. It allows us to calculate the shapes of molecules, the behaviour of semiconductor devices, the trajectories of light, with stunning accuracy. But nagging inconsistencies, paradoxes and counter-intuitive effects play around the margins: entanglement, collapse of the wave function, the effect of the observer. Can Schrödinger’s cat really be alive and dead at once? Does reality correspond to a superposition of all possible quantum states, as the “many worlds” interpretation insists?
Most users don’t worry too much about these nagging puzzles. In the words of the physicist David Mermin of Cornell University, they “shut up and calculate”. That is, after all, one way of interpreting the famous Copenhagen interpretation of quantum theory developed in the 1920s by Niels Bohr, Werner Heisenberg and their collaborators, which states that the theory tells us all we can meaningfully know about the world and that the apparent weirdness, such as wave-particle duality, is just how things are.
But there have always been some researchers who aren’t content with this. They want to know what quantum theory means – what it really tells us about the world it describes with such precision. Ever since Bohr argued with Einstein, who could not accept his “get over it” attitude to quantum theory’s seeming refusal to assign objective properties, there has been continual and sometimes furious debate over the interpretations or “foundations” of quantum theory. The basic question, says physicist Maximilian Schlosshauer of the University of Portland in Oregon, is this: “What is it about this world that forces us to navigate it with the help of such an abstract entity as quantum theory?”
A small community of physicists and philosophers has now come to suspect that these arguments are doomed to remain unresolved so long as we cling to quantum theory as it currently stands, with its exotic paraphernalia of wavefunctions, superpositions, entangled states and the uncertainty principle. They suspect that we’re stuck with seemingly irreconcilable disputes about interpretation because we don’t really have the right form of the theory in the first place. We’re looking at it from the wrong angle, making its shadow odd, spiky, hard to decode. If we could only find the right perspective, all would be clear.
But to find it, they say, we will have to rebuild quantum theory from scratch: to tear up the work of Bohr, Heisenberg and Schrödinger and start again. This is the project known as quantum reconstruction. “The program of reconstructions starts with some fundamental physical principles – hopefully only a small number of them, and with principles that are physically meaningful and reasonable and that we all can agree on – and then shows the structure of quantum theory emerges as a consequence of these principles”, says Schlosshauer. He adds that this approach, which began in earnest over a decade ago, “has gained a lot of momentum in the past years and has already helped us understand why we have a theory as strange as quantum theory to begin with.”
One hundred years ago the Bohr atom placed the quantum hypothesis advanced by Max Planck and Einstein at the heart of the structure of the physical universe. Attempts to derive the structure of the quantum atom from first principles produced Erwin Schrödinger’s quantum mechanics and the Copenhagen interpretation. Now the time seems ripe for asking if all this was just an ad hoc heuristic tool that is due for replacement with something better. Quantum reconstructionists are a diverse bunch, each with a different view of what the project should entail. But one thing they share is that, in seeking to resolve the outstanding foundational ‘problems’ of quantum theory, they respond much as the proverbial Irishman when asked for directions to Dublin: “I wouldn’t start from here.”
That’s at the core of the discontent evinced by one of the key reconstructionists, Christopher Fuchs of the Perimeter Institute for Theoretical Physics in Waterloo, Canada [now moved to Raytheon], at most physicists’ efforts to grapple with quantum foundations. He points out that the fundamental axioms of special relativity can be expressed in a form anyone can understand: in any moving frame, the speed of light stays constant and the laws of physics stay the same. In contrast, efforts to write down the axioms of quantum theory rapidly degenerate into a welter of arcane symbols. Fuchs suspects that, if we find the right axioms, they will be as transparent as those of relativity [1].
“The very best quantum-foundational effort”, he says, “will be the one that can write a story – literally a story, all in plain words – so compelling and so masterful in its imagery that the mathematics of quantum mechanics in all its exact technical detail will fall out as a matter of course.” Fuchs takes inspiration from quantum pioneer John Wheeler, who once claimed that if we really understood the central point of quantum theory, we ought to be able to state it in one simple sentence.
“Despite all the posturing and grimacing over the paradoxes and mysteries, none of them ask in any serious way, ‘Why do we have this theory in the first place?’” says Fuchs. “They see the task as one of patching a leaking boat, not one of seeking the principle that has kept the boat floating this long. My guess is that if we can understand what has kept the theory afloat, we’ll understand that it was never leaky to begin with.”
We can rebuild it
One of the earliest attempts at reconstruction came in 2001, when Lucien Hardy, then at Oxford University, proposed that quantum theory might be derived from a small set of “very reasonable” axioms [2]. These axioms set out how states are specified by measured probabilities, and how such states may be combined and interconverted. Hardy assumes that any state may be specified by the number K of probabilities needed to describe it uniquely, and that there are N ‘pure’ states that can be reliably distinguished in a single measurement. For example, for either a coin toss or a quantum bit (qubit), N = 2. A key (if seemingly innocuous) axiom is that for a composite system we get K and N by multiplying those parameters for each of the components: K_ab = K_a × K_b, say. It follows from this that K and N must be related according to K = N^r, where r = 1, 2, 3… For a classical system each state has a single probability (50 percent for heads, say), so that K = N. But that possibility is ruled out by a so-called ‘continuity axiom’, which describes how states are transformed one into another. For a classical system this happens discontinuously – a head is flipped to a tail – whereas for quantum systems the transformation can be continuous: the two pure states of a qubit can be mixed together in any degree. (That is not, Hardy stresses, the same as assuming a quantum superposition – so ‘quantumness’ isn’t being inserted by fiat.) The simplest relationship consistent with the continuity axiom is therefore K = N^2, which corresponds to the quantum picture.
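A quick way to see the counting, with a toy tally of my own (not code from Hardy’s paper): for an N-level quantum system an unnormalized state is an N × N Hermitian array of amplitudes, which takes N^2 real numbers to pin down, whereas a classical N-outcome system needs only N probabilities. Both choices respect the composite rule K_ab = K_a K_b; it is the continuity axiom that then rules out the classical case.

```python
# Illustrative counting check in the spirit of Hardy's axioms (my sketch, not his code).

def K_classical(N):
    return N          # one probability per distinguishable state

def K_quantum(N):
    return N * N      # real parameters of an N x N Hermitian matrix

for name, K in (("classical", K_classical), ("quantum", K_quantum)):
    Na, Nb = 2, 3                      # e.g. a bit/qubit paired with a trit/qutrit
    composite = K(Na * Nb)             # treat the pair as one system
    product = K(Na) * K(Nb)            # Hardy's composite rule K_ab = K_a * K_b
    print(f"{name}: K(Na*Nb) = {composite}, K(Na)*K(Nb) = {product}")

# Both K = N and K = N^2 satisfy the composite rule; only the continuity axiom
# singles out K = N^2, the quantum case.
```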
But as physicist Rafael Sorkin of Syracuse University in New York had previously pointed out [3], there seems to be no fundamental reason why the higher-order theories (requiring N^3, N^4 measurements and so forth) should not also exist and have real effects. For example, Hardy says, the famous double-slit experiment for quantum particles adds a new behaviour (interference) where classical theory would just predict the outcome to be the sum of two single-slit experiments. But whereas quantum theory predicts nothing new on adding a third slit, a higher-order theory would introduce a new effect in that case – a testable prediction, albeit one that might be very hard to detect experimentally.
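The test has a neat closed form, often called the Sorkin parameter. The calculation below is my own illustration (not code from either paper): assign a complex amplitude to each of three slits, take the detection probability to be the squared magnitude of the summed amplitudes of whichever slits are open, and the particular combination of one-, two- and three-slit probabilities I3 vanishes identically in quantum theory. A measured deviation from zero would be the signature of a higher-order theory.

```python
# Toy check that quantum mechanics has no 'third-order' interference (Sorkin parameter I3 = 0).
import cmath
import random

def prob(amplitudes):
    """Detection probability with the given slits open: |sum of amplitudes|^2."""
    return abs(sum(amplitudes)) ** 2

def sorkin_I3(a, b, c):
    return (prob([a, b, c])
            - prob([a, b]) - prob([a, c]) - prob([b, c])
            + prob([a]) + prob([b]) + prob([c]))

random.seed(0)
for _ in range(3):
    a, b, c = (cmath.rect(random.random(), random.uniform(0, 2 * cmath.pi))
               for _ in range(3))
    print(f"I3 = {sorkin_I3(a, b, c):+.2e}")   # zero, up to floating-point rounding
```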
In this way Hardy claims to have begun to set up quantum theory as a general theory of probability, which he thinks could have been derived in principle by nineteenth-century mathematicians without any knowledge of the empirical motivations that led Planck and Einstein to initiate quantum mechanics at the start of the twentieth century.
Indeed, perhaps the most startling aspect of quantum reconstruction is that what seemed to the pioneers of quantum theory such as Planck, Einstein and Bohr to be revolutionary about it – the quantization rather than continuum of energy – may in fact be something of a sideshow. Quantization is not an axiomatic concept in quantum reconstructions, but emerges from them. “The historical development of quantum mechanics may have led us a little astray in our view of what it is all about”, says Schlosshauer. “The whole talk of waves versus particles, quantization and so on has made many people gravitate toward interpretations where wavefunctions represent some kind of actual physical wave property, creating a lot of confusion. Quantum mechanics is not a descriptive theory of nature, and to read it as such is to misunderstand its role.”
The new QBism
Fuchs says that Hardy’s paper “convinced me to pursue the idea that a quantum state is not just like a set of probability distributions, but very literally is a probability distribution itself – a quantification of partial belief, and nothing more.” He says “it hit me over the head like a hammer and has shaped my thinking ever since” – although he admits that Hardy does not draw the same lesson from the work himself.
Fuchs was particularly troubled by the concept of entanglement. According to Schrödinger, who coined the term in the first place, this “is the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought” [4]. In most common expositions of the theory, entanglement is depicted as seeming to permit the kind of instantaneous ‘action at a distance’ Einstein’s theory of relativity forbade. Entangled particles have interdependent states, such that a measurement on one of them is instantaneously ‘felt’ by the other. For example, two photons can be entangled such that they have opposed orientations of polarization (vertical or horizontal). Before a measurement is made on the photons, their polarization is indeterminate: all we know is that these are correlated. But if we measure one photon, collapsing the probabilities into a well-defined outcome, then we automatically and instantaneously determine the other’s polarization too, no matter how far apart the two photons are. In 1935 Einstein and coworkers presented this as a paradox intended to undermine the probabilistic Copenhagen interpretation; but experiments on photons in the 1980s showed that it really happens [5]. Entanglement, far from being a contrived quirk, is the key to quantum information theory and its associated technologies, such as quantum computers and cryptography.
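For concreteness, here is a minimal simulation of my own of the polarization correlations just described. It reproduces the perfect anticorrelation of the entangled pair – each individual outcome random, the joint outcomes always opposite – though of course it says nothing about how nature manages the trick.

```python
# Minimal sketch of an entangled photon pair in the state (|HV> - |VH>)/sqrt(2),
# measured by both parties in the horizontal/vertical basis.
import numpy as np

H, V = np.array([1.0, 0.0]), np.array([0.0, 1.0])
state = (np.kron(H, V) - np.kron(V, H)) / np.sqrt(2)   # four joint amplitudes: HH, HV, VH, VV

rng = np.random.default_rng(1)
outcomes = []
for _ in range(10):
    probs = np.abs(state) ** 2            # Born rule: probabilities of the joint outcomes
    joint = rng.choice(4, p=probs)        # sample one joint measurement result
    a, b = divmod(joint, 2)               # photon 1 and photon 2 (0 = H, 1 = V)
    outcomes.append(("HV"[a], "HV"[b]))

print(outcomes)   # every pair is ('H','V') or ('V','H') - opposite, never matching
```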
But although quantum theory can predict the outcomes of entanglement experiments perfectly adequately, it still seems an odd way for the world to behave. We can write down the equations, but we can’t feel the physics behind them. That’s what prompted Fuchs to call for a fresh approach to quantum foundations [1]. His approach [6, 7] argues that quantum states themselves – the entangled state of two photons, say, or even just the spin state of a single photon – don’t exist as objective realities. Rather, “quantum states represent observers’ personal information, expectations and degrees of belief”, he says.
Fuchs calls this approach quantum Bayesianism or QBism (pronounced “cubism”), because he believes that, as standard Bayesian probability theory assumes, probabilities – including quantum probabilities – “are not real things out in the world; their only existence is in quantifying personal degrees of belief of what might happen.” This view, he says, “allows one to see all quantum measurement events as little ‘moments of creation’, rather than as revealing anything pre-existent.”
This idea that quantum theory is really about what we can and do know has always been somewhat in the picture. Schrödinger’s wavefunctions encode a probability distribution of measurement outcomes: what these measurements on a quantum system might be. In the Copenhagen view, it is meaningless to talk about what we actually will measure until we do it. Likewise, Heisenberg’s uncertainty principle insists that we can’t know everything about every observable property with arbitrarily exact accuracy. In other words, quantum theory seemed to impose limits on our precise knowledge of the state of the world – or perhaps better put, to expose a fundamental indeterminacy in our expectations of what measurement will show us. But Fuchs wants us to accept that this isn’t a question of generalized imprecision of knowledge, but a statement about what a specific individual can see and measure. We’re not just part of the painting: in a sense we are partially responsible for painting it.
Information is the key
The rise of quantum information theory over the past few decades has put a new spin on this consideration. One might say that it has replaced an impression of analog fuzziness (“I can’t see this clearly”) with digital error (“the answer might be this or that, but there’s such-and-such a chance of your prediction being wrong”). It is this focus on information – or rather, knowledge – that characterizes several of the current attempts to rebuild quantum theory from scratch. As physicists Caslav Brukner and Anton Zeilinger of the University of Vienna put it, “quantum physics is an elementary theory of information” [8].
Jeffrey Bub of the University of Maryland agrees: quantum mechanics, he says, is “fundamentally a theory about the representation and manipulation of information, not a theory about the mechanics of nonclassical waves or particles” – as clear a statement as you could wish for of why early quantum theory got distracted by the wrong things. His approach to reconstruction builds on the formal properties of how different sorts of information can be ordered and permuted, which lie at the heart of the uncertainty principle [9].
In the quantum picture, certain pairs of quantities do not commute, which means that it matters in which order they are considered: momentum times position is not the same as position times momentum, rather as kneading and baking dough do not commute when making bread. Bub believes that noncommutativity is what distinguishes quantum from classical mechanics, and that entanglement is one of the consequences. This property, he says, is a feature of the way information is fundamentally structured, and it might emerge from a principle called ‘information causality’ [10], introduced by Marcin Pawlowski of the University of Gdansk and colleagues. This postulate describes how much information one observer (call him Bob) can gain about a data set held by another (Alice). Classically the amount is limited by what Alice communicates to Bob. Quantum correlations such as entanglement can increase this limit, but only within bounds set by the information causality postulate. Pawlowski and colleagues suspect that this postulate might single out precisely what quantum correlations permit about information transfer. If so, they argue, “information causality might be one of the foundational properties of nature” – in other words, an axiom of quantum theory.
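The noncommutativity itself is easy to exhibit. The sketch below is my own illustration, using spin measurements (the Pauli matrices) rather than position and momentum, which would need infinite-dimensional matrices; the point is simply that the order of operations matters.

```python
# Demonstration that quantum observables need not commute, using the Pauli matrices.
import numpy as np

X = np.array([[0, 1], [1, 0]])    # spin measured along x
Z = np.array([[1, 0], [0, -1]])   # spin measured along z

print(X @ Z)           # applying Z then X ...
print(Z @ X)           # ... differs from applying X then Z
print(X @ Z - Z @ X)   # the nonzero commutator marks the departure from classical physics
```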
Ontic or epistemic?
At the root of the matter is the issue of whether quantum theory pronounces on the nature of reality (a so-called ontic theory) or merely on our allowed knowledge of it (an epistemic theory). Ontic theories, such as the Many Worlds interpretation, take the view that wavefunctions are real entities. The Copenhagen interpretation, on the other hand, is epistemic, insisting that it’s not physically meaningful to look for any layer of reality beneath what we can measure. In this view, says Fuchs, God plays dice and so “the future is not completely determined by the past.” QBism takes this even further: what we see depends on what we look for. “In both Copenhagen and QBism, the wave function is not something ‘out there’”, says Fuchs. “QBism should be seen as a modern variant and refinement of Copenhagen.”
His faith in epistemic approaches to reconstruction is boosted by the work of Robert Spekkens, his colleague at the Perimeter Institute. Spekkens has devised a ‘toy theory’ that restricts the amount of information an observer can have about discrete ontic states of the system: specifically, one’s knowledge about these states can never exceed the amount of knowledge one lacks about them. Spekkens calls this the ‘knowledge balance principle’. It might seem an arbitrary imposition, but he finds that it alone is sufficient to reproduce many (but not all) of the characteristics of quantum theory, such as superposition, entanglement and teleportation [11]. Related ideas involving other kinds of restriction on what can be known about a suite of states also find quantum-like behaviours emerging [12,13].
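The simplest version of the toy theory is easy to write down. In the sketch below – my own illustration of Spekkens’ ‘toy bit’, not his code – a single system has four underlying states, but the knowledge balance principle permits an observer to know only which pair of them it occupies. Counting those states of maximal-but-incomplete knowledge gives six, mirroring the six pure states of a qubit along the x, y and z axes.

```python
# Spekkens-style toy bit: four ontic states, knowledge limited to 'which pair'.
from itertools import combinations

ontic_states = {1, 2, 3, 4}
pure_epistemic_states = list(combinations(sorted(ontic_states), 2))

print(len(pure_epistemic_states), pure_epistemic_states)
# -> 6 states of maximal (but incomplete) knowledge; disjoint pairs behave like
#    orthogonal qubit states, overlapping pairs like non-orthogonal ones.
```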
Fuchs sees these insights as a necessary corrective to the way quantum information theory has tended to propagate the notion that information is something objective and real – which is to say, ontic. “It is amazing how many people talk about information as if it is simply some new kind of objective quantity in physics, like energy, but measured in bits instead of ergs”, he says. “You’ll often hear information spoken of as if it’s a new fluid that physics has only recently taken note of.” In contrast, he argues, what else can information possibly be except an expression of what we think we know?
“What quantum information gave us was a vast range of phenomena that nominally looked quite novel when they were first found”, Fuchs explains. For example, it seemed that quantum states, unlike classical states, can’t be ‘cloned’ to make identical copies. “But what Rob’s toy model showed was that so much of this vast range wasn’t really novel at all, so long as one understood these to be phenomena of epistemic states, not ontic ones”. Classical epistemic states can’t be cloned any more than quantum states can be, for much the same reason as you can’t be me.
What’s the use?
What’s striking about several of these attempts at quantum reconstruction is that they suggest that our universe is just one of many mathematical possibilities. “It turns out that many principles lead to a whole class of probabilistic theories, and not specifically quantum theory”, says Schlosshauer. “The problem has been to find principles that actually single out quantum theory”. But this is in itself a valuable insight: “a lot of the features we think of as uniquely quantum, like superpositions, interference and entanglement, are actually generic to many probabilistic theories. This allows us to focus on the question of what makes quantum theory unique.”
Hardy says that, after a hiatus following Fuchs’ call to arms and his own five-axiom proposal in the early 2000s, progress in reconstructions really began in 2009. “We’re now poised for some really significant breakthroughs, in a way that we weren’t ten years ago”, he says. While there’s still no consensus on what the basic axioms should look like, he is confident that “we’ll know them when we see them.” He suspects that ultimately the right description will prove to be ontic rather than epistemic: it will remove the human observer from the scene once more and return us to an objective view of reality. But he acknowledges that some, like Fuchs, disagree profoundly.
For Fuchs, the aim of reconstruction is not to rebuild the existing formalism of quantum theory from scratch, but to rewrite it totally. He says that approaches such as QBism are already motivating new experimental proposals, which might for example reveal a new, deep symmetry within quantum mechanics [14]. The existence of this symmetry, Fuchs says, would allow the quantum probability law to be re-expressed as a minor variation of the standard ‘law of total probability’ in probability theory, which relates the probability of an event to the conditional probabilities of all the ways it might come about. “That new view, if it proves valid, could change our understanding of how to build quantum computers and other quantum information kits,” he says.
Quantum reconstruction is gaining support. A recent poll of attitudes among quantum theorists showed that 60% think reconstructions give useful insights, and more than a quarter think they will lead to a new theory deeper than quantum mechanics [15]. That is a rare degree of consensus for matters connected to quantum foundations.
But how can we judge the success of these efforts? “Since the object is simply to reconstruct quantum theory as it stands, we could not prove that a particular reconstruction was correct since the experimental results are the same regardless”, Hardy admits. “However, we could attempt to do experiments that test that the given axioms are true.” For example, one might seek the ‘higher-order’ interference that his approach predicts.
“However, I would say that the real criteria for success are more theoretical”, he adds. “Do we have a better understanding of quantum theory, and do the axioms give us new ideas as to how to go beyond current-day physics?” He is hopeful that some of these principles might assist the development of a theory of quantum gravity – but says that in this regard it’s too early to say whether the approach has been successful.
Fuchs agrees that “the question is not one of testing the reconstructions in any kind of experimental way, but rather through any insight the different variations might give for furthering physical theory along. A good reconstruction is one that has some ‘leading power’ for the way a theorist might think.”
Some remain skeptical. “Reconstructing quantum theory from a set of basic principles seems like an idea with the odds greatly against it”, admits Daniel Greenberger of the City College of New York. “But it’s a worthy enterprise” [16]. Yet Schlosshauer argues that “even if no single reconstruction program can actually find a universally accepted set of principles that works, it’s not a wasted effort, because we will have learned so much along the way.”
He is cautiously optimistic. “I believe that once we have a set of simple and physically intuitive principles, and a convincing story to go with them, quantum mechanics will look a lot less mysterious”, he says. “And I think a lot of the outstanding questions will then go away. I’m probably not the only one who would love to be around to witness the discovery of these principles.” Fuchs feels that could be revolutionary. “My guess is, when the answer is in hand, physics will be ready to explore worlds the faulty preconception of quantum states couldn’t dream of.”
References
1. Fuchs, C., http://arxiv.org/abs/quant-ph/0106166 (2001).
2. Hardy, L. E. http://arxiv.org/abs/quant-ph/0101012 (2003).
3. Sorkin, R., http://arxiv.org/pdf/gr-qc/9401003 (1994).
4. Schrödinger, E. Proc. Cambridge Phil. Soc. 31, 555–563 (1935).
5. Aspect, A. et al. Phys. Rev. Lett. 49, 91 (1982).
6. Fuchs, C. http://arxiv.org/pdf/1003.5209 (2010).
7. Fuchs, C. http://arxiv.org/abs/1207.2141 (2012).
8. Brukner, C. & Zeilinger, A. http://arxiv.org/pdf/quant-ph/0212084 (2008).
9. Bub, J. http://arxiv.org/pdf/quant-ph/0408020 (2008).
10. Pawlowski, M. et al., Nature 461, 1101-1104 (2009).
11. Spekkens, R. W. http://arxiv.org/abs/quant-ph/0401052 (2004).
12. Kirkpatrick, K. A. Found. Phys. Lett. 16, 199 (2003).
13. Smolin, J. A. Quantum Inf. Comput. 5, 161 (2005).
14. Renes, J. M., Blume-Kohout, R., Scott, A. J. & Caves, C. M. J. Math. Phys. 45, 2717 (2004).
15. Schlosshauer, M., Kofler, J. & Zeilinger, A. Stud. Hist. Phil. Mod. Phys. 44, 222–230 (2013).
16. In Schlosshauer, M. (ed.), Elegance and Enigma: The Quantum Interviews (Springer, 2011).
__________________________________________________________
Quantum theory works. It allows us to calculate the shapes of molecules, the behaviour of semiconductor devices, the trajectories of light, with stunning accuracy. But nagging inconsistencies, paradoxes and counter-intuitive effects play around the margins: entanglement, collapse of the wave function, the effect of the observer. Can Schrödinger’s cat really be alive and dead at once? Does reality correspond to a superposition of all possible quantum states, as the “many worlds” interpretation insists?
Most users don’t worry too much about these nagging puzzles. In the words of the physicist David Mermin of Cornell University, they “shut up and calculate”. That is, after all, one way of interpreting the famous Copenhagen interpretation of quantum theory developed in the 1920s by Niels Bohr, Werner Heisenberg and their collaborators, which states that the theory tells us all we can meaningfully know about the world and that the apparent weirdness, such as wave-particle duality, is just how things are.
But there have always been some researchers who aren’t content with this. They want to know what quantum theory means – what it really tells us about the world it describes with such precision. Ever since Bohr argued with Einstein, who could not accept his “get over it” attitude to quantum theory’s seeming refusal to assign objective properties, there has been continual and sometimes furious debate over the interpretations or “foundations” of quantum theory. The basic question, says physicist Maximilian Schlosshauer of the University of Portland in Oregon, is this: “What is it about this world that forces us to navigate it with the help of such an abstract entity as quantum theory?”
A small community of physicists and philosophers has now come to suspect that these arguments are doomed to remain unresolved so long as we cling to quantum theory as it currently stands, with its exotic paraphernalia of wavefunctions, superpositions, entangled states and the uncertainty principle. They suspect that we’re stuck with seemingly irreconcilable disputes about interpretation because we don’t really have the right form of the theory in the first place. We’re looking at it from the wrong angle, making its shadow odd, spiky, hard to decode. If we could only find the right perspective, all would be clear.
But to find it, they say, we will have to rebuild quantum theory from scratch: to tear up the work of Bohr, Heisenberg and Schrödinger and start again. This is the project known as quantum reconstruction. “The program of reconstructions starts with some fundamental physical principles – hopefully only a small number of them, and with principles that are physically meaningful and reasonable and that we all can agree on – and then shows the structure of quantum theory emerges as a consequence of these principles”, says Schlosshauer. He adds that this approach, which began in earnest over a decade ago, “has gained a lot of momentum in the past years and has already helped us understand why we have a theory as strange as quantum theory to begin with.”
One hundred years ago the Bohr atom placed the quantum hypothesis advanced by Max Planck and Einstein at the heart of the structure of the physical universe. Attempts to derive the structure of the quantum atom from first principles produced Erwin Schrödinger’s quantum mechanics and the Copenhagen interpretation. Now the time seems ripe for asking if all this was just an ad hoc heuristic tool that is due for replacement with something better. Quantum reconstructionists are a diverse bunch, each with a different view of what the project should entail. But one thing they share in common is that, in seeking to resolve the outstanding foundational ‘problems’ of quantum theory, they respond much as the proverbial Irishman when asked for directions to Dublin: “I wouldn’t start from here.”
That’s at the core of the discontent evinced by one of the key reconstructionists, Christopher Fuchs of the Perimeter Institute for Theoretical Physics in Waterloo, Canada [now moved to Raytheon], at most physicists’ efforts to grapple with quantum foundations. He points out that the fundamental axioms of special relativity can be expressed in a form anyone can understand: in any moving frame, the speed of light stays constant and the laws of physics stay the same. In contrast, efforts to write down the axioms of quantum theory rapidly degenerate into a welter of arcane symbols. Fuchs suspects that, if we find the right axioms, they will be a transparent as those of relativity [1].
“The very best quantum-foundational effort”, he says, “will be the one that can write a story – literally a story, all in plain words – so compelling and so masterful in its imagery that the mathematics of quantum mechanics in all its exact technical detail will fall out as a matter of course.” Fuchs takes inspiration from quantum pioneer John Wheeler, who once claimed that if we really understood the central point of quantum theory, we ought to be able to state it in one simple sentence.
“Despite all the posturing and grimacing over the paradoxes and mysteries, none of them ask in any serious way, ‘Why do we have this theory in the first place?’” says Fuchs. “They see the task as one of patching a leaking boat, not one of seeking the principle that has kept the boat floating this long. My guess is that if we can understand what has kept the theory afloat, we’ll understand that it was never leaky to begin with.”
We can rebuild it
One of the earliest attempts at reconstruction came in 2001, when Lucien Hardy, then at Oxford University, proposed that quantum theory might be derived from a small set of “very reasonable” axioms [2]. These axioms describe how states are described by variables or probability measurements, and how these states may be combined and interconverted. Hardy assumes that any state may be specified by the number K of probabilities needed to describe it uniquely, and that there are N ‘pure’ states that can be reliably distinguished in a single measurement. For example, for either a coin toss or a quantum bit (qubit), N=2. A key (if seemingly innocuous) axiom is that for a composite system we get K and N by multiplying those parameters for each of the components: Kab = KaKb, say. It follows from this that K and N must be related according to K=N**r, where r = 1,2,3… For a classical system each state has a single probability (50 percent for heads, say), so that K=N. But that possibility is ruled out by a so-called ‘continuity axiom’, which describes how states are transformed one to another. For a classical system this happens discontinuously – a head is flipped to a tail – whereas for quantum systems the transformation can be continuous: the two pure states of a qubit can be mixed together in any degree. (That is not, Hardy stresses, the same as assuming a quantum superposition – so ‘quantumness’ isn’t being inserted by fiat.) The simplest relationship consistent with the continuity axiom is therefore K=N**2, which corresponds to a quantum picture.
But as physicist Rafael Sorkin of Syracuse University in New York had previously pointed out [3], there seems to be no fundamental reason why the higher-order theories (requiring N**3, N**4 measurements and so forth) should not also exist and have real effects. For example, Hardy says, the famous double-slit experiment for quantum particles adds a new behaviour (interference) where classical theory would just predict the outcome to be the sum of two single-slit experiments. But whereas quantum theory predicts nothing new on adding a third slit, a higher-order theory would introduce a new effect in that case – an experimental prediction, albeit one that might be very hard to detect experimentally.
In this way Hardy claims to have begun to set up quantum theory as a general theory of probability, which he thinks could have been derived in principle by nineteenth-century mathematicians without any knowledge of the empirical motivations that led Planck and Einstein to initiate quantum mechanics at the start of the twentieth century.
Indeed, perhaps the most startling aspect of quantum reconstruction is that what seemed to the pioneers of quantum theory such as Planck, Einstein and Bohr to be revolutionary about it – the quantization rather than continuum of energy – may in fact be something of a sideshow. Quantization is not an axiomatic concept in quantum reconstructions, but emerges from them. “The historical development of quantum mechanics may have led us a little astray in our view of what it is all about”, says Schlosshauer. “The whole talk of waves versus particles, quantization and so on has made many people gravitate toward interpretations where wavefunctions represent some kind of actual physical wave property, creating a lot of confusion. Quantum mechanics is not a descriptive theory of nature, and that to read it as such is to misunderstand its role.”
The new QBism
Fuchs says that Hardy’s paper “convinced me to pursue the idea that a quantum state is not just like a set of probability distributions, but very literally is a probability distribution itself – a quantification of partial belief, and nothing more.” He says “it hit me over the head like a hammer and has shaped my thinking ever since” – although he admits that Hardy does not draw the same lesson from the work himself.
Fuchs was particularly troubled by the concept of entanglement. According to Schrödinger, who coined the term in the first place, this “is the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought” [4]. In most common expositions of the theory, entanglement is depicted as seeming to permit the kind of instantaneous ‘action at a distance’ Einstein’s theory of relativity forbade. Entangled particles have interdependent states, such that a measurement on one of them is instantaneously ‘felt’ by the other. For example, two photons can be entangled such that they have opposed orientations of polarization (vertical or horizontal). Before a measurement is made on the photons, their polarization is indeterminate: all we know is that these are correlated. But if we measure one photon, collapsing the probabilities into a well-defined outcome, then we automatically and instantaneously determine the other’s polarization too, no matter how far apart the two photons are. In 1935 Einstein and coworkers presented this as a paradox intended to undermine the probabilistic Copenhagen interpretation; but experiments on photons in the 1980s showed that it really happens [5]. Entanglement, far from being a contrived quirk, is the key to quantum information theory and its associated technologies, such as quantum computers and cryptography.
But although quantum theory can predict the outcomes of entanglement experiments perfectly adequately, it still seems an odd way for the world to behave. We can write down the equations, but we can’t feel the physics behind them. That’s what prompted Fuchs to call for a fresh approach to quantum foundations [1]. His approach [6, 7] argues that quantum states themselves – the entangled state of two photons, say, or even just the spin state of a single photon – don’t exist as objective realities. Rather, “quantum states represent observers’ personal information, expectations and degrees of belief”, he says.
Fuchs calls this approach quantum Bayesianism or QBism (pronounced “cubism”), because he believes that, as standard Bayesian probability theory assumes, probabilities – including quantum probabilities – “are not real things out in the world; their only existence is in quantifying personal degrees of belief of what might happen.” This view, he says, “allows one to see all quantum measurement events as little ‘moments of creation’, rather than as revealing anything pre-existent.”
This idea that quantum theory is really about what we can and do know has always been somewhat in the picture. Schrödinger’s wavefunctions encode a probability distribution of measurement outcomes: what these measurements on a quantum system might be. In the Copenhagen view, it is meaningless to talk about what we actually will measure until we do it. Likewise, Heisenberg’s uncertainty principle insists that we can’t know everything about every observable property with arbitrarily exact accuracy. In other words, quantum theory seemed to impose limits on our precise knowledge of the state of the world – or perhaps better put, to expose a fundamental indeterminacy in our expectations of what measurement will show us. But Fuchs wants us to accept that this isn’t a question of generalized imprecision of knowledge, but a statement about what a specific individual can see and measure. We’re not just part of the painting: in a sense we are partially responsible for painting it.
Information is the key
The rise of quantum information theory over the past few decades has put a new spin on this consideration. One might say that it has replaced an impression of analog fuzziness (“I can’t see this clearly”) with digital error (“the answer might be this or that, but there’s such-and-such a chance of your prediction being wrong”). It is this focus on information – rather, knowledge – that characterizes several of the current attempts to rebuild quantum theory from scratch. As physicists Caslav Brukner and Anton Zeilinger of the University of Vienna put it, “quantum physics is an elementary theory of information” [8].
Jeffrey Bub of the University of Maryland agrees: quantum mechanics, he says, is “fundamentally a theory about the representation and manipulation of information, not a theory about the mechanics of nonclassical waves or particles” – as clear a statement as you could wish for of why early quantum theory got distracted by the wrong things. His approach to reconstruction builds on the formal properties of how different sorts of information can be ordered and permuted, which lie at the heart of the uncertainty principle [9].
In the quantum picture, certain pairs of quantities do not commute, which means that it matters in which order they are considered: momentum times position is not the same as position times momentum, rather as kneading and baking dough do not commute when making bread. Bub believes that noncommutativity is what distinguishes quantum from classical mechanics, and that entanglement is one of the consequences. This property, he says, is a feature of the way information is fundamentally structured, and it might emerge from a principle called ‘information causality’ [10], introduced by Marcin Pawlowski of the University of Gdansk and colleagues. This postulate describes how much information one observer (call him Bob) can gain about a data set held by another (Alice). Classically the amount is limited by what Alice communicates to Bob. Quantum correlations such as entanglement can increase this limit, but only within bounds set by the information causality postulate. Pawlowski and colleagues suspect that this postulate might single out precisely what quantum correlations permit about information transfer. If so, they argue, “information causality might be one of the foundational properties of nature” – in other words, an axiom of quantum theory.
Ontic or epistemic?
At the root of the matter is the issue of whether quantum theory pronounces on the nature of reality (a so-called ontic theory) or merely on our allowed knowledge of it (an epistemic theory). Ontic theories, such as the Many Worlds interpretation, take the view that wavefunctions are real entities. The Copenhagen interpretation, on the other hand, is epistemic, insisting that it’s not physically meaningful to look for any layer of reality beneath what we can measure. In this view, says Fuchs, God plays dice and so “the future is not completely determined by the past.” QBism takes this even further: what we see depends on what we look for. “In both Copenhagen and QBism, the wave function is not something “out there’”, says Fuchs. “QBism should be seen as a modern variant and refinement of Copenhagen.”
His faith in epistemic approaches to reconstruction is boosted by the work of Robert Spekkens, his colleague at the Perimeter Institute. Spekkens has devised a ‘toy theory’ that restricts the amount of information an observer can have about discrete ontic states of the system: specifically, one’s knowledge about these states can never exceed the amount of knowledge one lacks about them. Spekkens calls this the ‘knowledge balance principle’. It might seem an arbitrary imposition, but he finds that it alone is sufficient to reproduce many (but not all) of the characteristics of quantum theory, such as superposition, entanglement and teleportation [11]. Related ideas involving other kinds of restriction on what can be known about a suite of states also find quantum-like behaviours emerging [12,13].
Fuchs sees these insights as a necessary corrective to the way quantum information theory has tended to propagate the notion that information is something objective and real – which is to say, ontic. “It is amazing how many people talk about information as if it is simply some new kind of objective quantity in physics, like energy, but measured in bits instead of ergs”, he says. “You’ll often hear information spoken of as if it’s a new fluid that physics has only recently taken note of.” In contrast, he argues, what else can information possibly be except an expression of what we think we know?
“What quantum information gave us was a vast range of phenomena that nominally looked quite novel when they were first found”, Fuchs explains. For example, it seemed that quantum states, unlike classical states, can’t be ‘cloned’ to make identical copes. “But what Rob’s toy model showed was that so much of this vast range wasn’t really novel at all, so long as one understood these to be phenomena of epistemic states, not ontic ones”. Classical epistemic states can’t be cloned any more than quantum states can be, for much the same reason as you can’t be me.
What’s the use?
What’s striking about several of these attempts at quantum reconstruction is that they suggest that our universe is just one of many mathematical possibilities. “It turns out that many principles lead to a whole class of probabilistic theories, and not specifically quantum theory”, says Schlosshauer. “The problem has been to find principles that actually single out quantum theory”. But this is in itself a valuable insight: “a lot of the features we think of as uniquely quantum, like superpositions, interference and entanglement, are actually generic to many probabilistic theories. This allows us to focus on the question of what makes quantum theory unique.”
Hardy says that, after a hiatus following Fuchs’ call to arms and his own five-axiom proposal in the early 2000s, progress in reconstructions really began in 2009. “We’re now poised for some really significant breakthroughs, in a way that we weren’t ten years ago”, he says. While there’s still no consensus on what the basic axioms should look like, he is confident that “we’ll know them when we see them.” He suspects that ultimately the right description will prove to be ontic rather than epistemic: it will remove the human observer from the scene once more and return us to an objective view of reality. But he acknowledges that some, like Fuchs, disagree profoundly.
For Fuchs, the aim of reconstruction is not to rebuild the existing formalism of quantum theory from scratch, but to rewrite it totally. He says that approaches such as QBism are already motivating new experimental proposals, which might for example reveal a new, deep symmetry within quantum mechanics [14]. The existence of this symmetry, Fuchs says, would allow the quantum probability law to be re-expressed as a minor variation of the standard ‘law of total probability’ in probability theory, which relates the probability of an event to the conditional probabilities of all the ways it might come about. “That new view, if it proves valid, could change our understanding of how to build quantum computers and other quantum information kits,” he says.
Quantum reconstruction is gaining support. A recent poll of attitudes among quantum theorists showed that 60% think reconstructions give useful insights, and more than a quarter think they will lead to a new theory deeper than quantum mechanics [15]. That is a rare degree of consensus for matters connected to quantum foundations.
But how can we judge the success of these efforts? “Since the object is simply to reconstruct quantum theory as it stands, we could not prove that a particular reconstruction was correct since the experimental results are the same regardless”, Hardy admits. “However, we could attempt to do experiments that test that the given axioms are true.” For example, one might seek the ‘higher-order’ interference that his approach predicts.
“However, I would say that the real criterion for success are more theoretical”, he adds. “Do we have a better understanding of quantum theory, and do the axioms give us new ideas as to how to go beyond current day physics?” He is hopeful that some of these principles might assist the development of a theory of quantum gravity – but says that in this regard it’s too early to say whether the approach has been successful.
Fuchs agrees that “the question is not one of testing the reconstructions in any kind of experimental way, but rather through any insight the different variations might give for furthering physical theory along. A good reconstruction is one that has some ‘leading power’ for the way a theorist might think.”
Some remain skeptical. “Reconstructing quantum theory from a set of basic principles seems like an idea with the odds greatly against it”, admits Daniel Greenberger of the City College of New York. “But it’s a worthy enterprise” [16]. Yet Schlosshauer argues that “even if no single reconstruction program can actually find a universally accepted set of principles that works, it’s not a wasted effort, because we will have learned so much along the way.”
He is cautiously optimistic. “I believe that once we have a set of simple and physically intuitive principles, and a convincing story to go with them, quantum mechanics will look a lot less mysterious”, he says. “And I think a lot of the outstanding questions will then go away. I’m probably not the only one who would love to be around to witness the discovery of these principles.” Fuchs feels that could be revolutionary. “My guess is, when the answer is in hand, physics will be ready to explore worlds the faulty preconception of quantum states couldn’t dream of.”
References
1. Fuchs, C. http://arxiv.org/abs/quant-ph/0106166 (2001).
2. Hardy, L. E. http://arxiv.org/abs/quant-ph/0101012 (2003).
3. Sorkin, R. http://arxiv.org/pdf/gr-qc/9401003 (1994).
4. Schrödinger, E. Proc. Cambridge Phil. Soc. 31, 555–563 (1935).
5. Aspect, A. et al. Phys. Rev. Lett. 49, 91 (1982).
6. Fuchs, C. http://arxiv.org/pdf/1003.5209
7. Fuchs, C. http://arxiv.org/abs/1207.2141 (2012).
8. Brukner, C. & Zeilinger, A. http://arxiv.org/pdf/quant-ph/0212084 (2008).
9. Bub, J. http://arxiv.org/pdf/quant-ph/0408020 (2008).
10. Pawlowski, M. et al. Nature 461, 1101–1104 (2009).
11. Spekkens, R. W. http://arxiv.org/abs/quant-ph/0401052 (2004).
12. Kirkpatrick, K. A. Found. Phys. Lett. 16, 199 (2003).
13. Smolin, J. A. Quantum Inf. Comput. 5, 161 (2005).
14. Renes, J. M., Blume-Kohout, R., Scott, A. J. & Caves, C. M. J. Math. Phys. 45, 2717 (2004).
15. Schlosshauer, M., Kofler, J. & Zeilinger, A. Stud. Hist. Phil. Mod. Phys. 44, 222–230 (2013).
16. In Schlosshauer, M. (ed.), Elegance and Enigma: The Quantum Interviews (Springer, 2011).
Sunday, September 15, 2013
Insects with cogs
Here’s the initial version of my latest news story for Nature.
___________________________________________________
Toothed gears allow young jumping planthoppers to synchronize their legs.
If you’re a young planthopper, leaping a metre in a single bound, you need to push off with both hindlegs perfectly in time or you’ll end up spinning crazily. Researchers in England have discovered that this synchrony is made possible by toothed gears connecting the two legs.
Zoologists Malcolm Burrows and Gregory Sutton of Cambridge University say that this seems to be the first example of rotary motion in nature coupled by toothed gears. They describe their results in Science [1].
Their microscopic images of the hindleg mechanism of the planthopper Issus coleoptratus show that the topmost leg segments, ending in partly circular structures, are connected by a series of tiny intermeshing teeth about 20 micrometres (thousandths of a millimetre) long.
When the insects jump, the two legs rotate together, the cog teeth ensuring that they thrust at exactly the same time. “The gears add an extra level of synchronisation beyond that which can be achieved by the nervous system”, says Burrows.
Planthopper nymphs can take off in just 2 milliseconds, reaching take-off speeds of almost 4 metres per second. For motions this rapid, some mechanical device is needed to keep the legs synchronized and avoid lopsided jumps that lead to spinning along the body axis. The problem doesn’t arise for grasshoppers and fleas: they have legs at the side of the body that push in separate planes rather than counter-rotating in a single plane, and so they can jump one-legged.
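As a rough sanity check on those figures (my own back-of-the-envelope sketch, not a calculation from the paper), reaching about 4 metres per second in about 2 milliseconds implies an average acceleration of roughly 2,000 metres per second squared, or around 200 times gravity – in line with the “200 or 500 g” that Steven Vogel mentions in his comments below.

# Back-of-the-envelope check of the take-off figures quoted above (my own
# illustration, not from the Science paper).
v = 4.0       # take-off speed in metres per second (approximate)
t = 0.002     # time to reach take-off speed, in seconds (about 2 ms)
g = 9.81      # standard gravity, m/s^2

a = v / t     # average acceleration, assuming roughly constant thrust
print(f"average acceleration ~ {a:.0f} m/s^2 ({a/g:.0f} g)")
# prints: average acceleration ~ 2000 m/s^2 (204 g)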
Toothed gears have been used in rotating machinery for millennia: Aristotle and Archimedes described them, and they seem to have been used in ancient China much earlier. But like the wheel, this human invention previously seemed to have little value in the natural world.
Now, however, the gear joins the screw-and-nut as a mechanism whose complex shape has been mastered by evolution. In 2011 Alexander Riedel of the State Museum of Natural History in Karlsruhe, Germany and his colleagues reported a screw-and-nut system in the leg joints of a weevil [2].
Riedel considers this new work “significant and exciting”. It adds to the view that “most of the basic components of engineering have been developed in the natural world”, he says. Perhaps gears are not more common, he adds, because there are different ways to achieve the same goal. Honeybees, for example, “couple the movement of both wings to stabilize their flight by using pegs, not as moving gears but more like a Velcro fastener.”
Curiously, the gears are only found in the young nymph insects. When they undergo their final moult, sloughing off their exoskeleton for the last time to reach full adulthood, the gears disappear and instead the legs are synchronized by simpler frictional contact.
Burrows and Sutton aren’t yet sure why this is so, but it might be because of ease of repair. “If a gear breaks it can’t be replaced in adults”, says Burrows. “But in nymphs a repair can be made at the next of several moults.” He also explains that the larger and more rigid adult bodies might make the frictional method work better.
References
1. Burrows, M. & Sutton, G. Science 341, 1254-1256 (2013).
2. van de Kamp, T., Vagovic, P., Baumbach, T. & Riedel, A. Science 333, 52 (2011).
Some additional comments from biomimetics expert Steven Vogel of Duke University:
Interesting business. I can't think of another case of rotary gears at the moment. The closest thing that has yet come to mind is the zipper-like closure once described in (if I recall right) ctenophore mouths.
So many creatures jump without such an obvious mechanical coupling between paired legs that it can't be too difficult to keep from going awry. In any case, some compensation would often be necessary for irregularity in stiffness and level of substratum, etc. One does wonder about whether proprioceptive feedback can work at the short times that would necessarily be involved.
M. Scherge and S.N. Gorb, in their 2000 book, Biological Micro- and Nanotribology, do quite a thorough job. The upshot seems to be that what Burrows describes may be functionally novel, but from a structural point of view it represents (as is so typical of evolutionary innovations) no spectacular discontinuity. They talk about coxal rather than trochanteral segments, one unit more proximal, of course, for whatever that matters.
Synchronizing legs may be no absolute requirement. After all, surfaces are irregular in level, resilience, and so forth, and the legs never push directly at the center of gravity of the insect. So perfect synchrony won't necessarily give a straight trajectory anyway. And some post-launch adjustment may be possible, either inertially or aerodynamically. (Zero-angular-velocity turns, as righting cats, or tail-swinging, as perhaps in jumping rodents that have convergently evolved long tails with hair tufts at their ends.)
Maybe gears such as these come with an odd disability--they really louse things up if they mesh out of register. Or maybe they're tricky to molt and get back into register.
Filleting the gear bottoms is an interesting fillip. For us that's a relatively recent development, I gather. We've made gears for a long time--the antikythera mechanism (100 bce) is a bunch of 'em. Ones that take reasonable torque might be more recent, but are still old - I found some in Agostino Ramelli (1588), unfilleted. And the gears salvaged from a horse ferry (1830) scuttled on Lake Champlain were unfilleted. Odd that no one seemed to have noticed that filleted gears are much, much less prone to getting knocked off, particularly with brittle cast iron.
I take mild issue with Burrows's use of ‘force’ for acceleration. It's not only incorrect, but it tends to perpetuate the myth that insects are superstrong, instead of recognizing artifacts of scaling. I wrote an essay about the matter a few years ago; it became chapter 2 in "Glimpses of Creatures in Their Physical Worlds" (2009). The upshot is that we expect, and find, that acceleration scales (or bumps into a limit line) inversely with length - from jumping cougars down to shooting spores, five orders of magnitude. That keeps the stress on launch materials roughly constant, assuming roughly constant investment in the relevant guns. 200 or 500 g isn't out of line for their size. Good, but not earthshaking.
I'm amused to learn of yet another case of something I once commented on (in "Cats' Paws and Catapults") when trying to inject a note of reality into the hype and hope of biomimetics: "The biomechanic usually recognizes nature's use of some neat device only when the engineer has already provided us with a model. Put another way, biomechanics still studies how, where, and why nature does what engineers do."
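As a footnote to Vogel’s point about acceleration and scaling (my own paraphrase of the standard argument, with the usual idealizations): if the energy available per unit of launch muscle or spring is roughly constant, take-off speed v is roughly independent of body size, while the distance over which the animal accelerates scales with its body length L. Then

a \approx \frac{v^2}{2L} \propto \frac{1}{L},

so smaller jumpers experience proportionally larger accelerations, while the stress on the launch structures,

\sigma \propto \frac{m\,a}{A} \propto \frac{L^3 \cdot L^{-1}}{L^2} = \text{constant},

stays roughly the same across sizes – which is why a couple of hundred g for a planthopper nymph is, as Vogel says, good but not earthshaking.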
Friday, September 13, 2013
Storm in a test tube
Here’s the last of the La Recherche pieces on 'controversies': a short profile of Martin Fleischmann and cold fusion.
_____________________________________________________________
It would be unfair to Martin Fleischmann, who died last year aged 85, if he were remembered solely for his work on ‘cold fusion’ – the alleged discovery with his coworker Stanley Pons in 1989 that nuclear fusion of heavy hydrogen (deuterium), and the consequent release of energy, could be achieved with benchtop chemistry. Before making that controversial claim, Fleischmann enjoyed international repute for his work in electrochemistry. But to many scientists, cold fusion – now rejected by all but a handful of ‘true believers’ – cast doubt on his judgement and even his integrity.
Fleischmann was born in 1927 to a family with Jewish heritage in Czechoslovakia, and came to England as a young boy to escape the Nazis. He conducted his most celebrated work at the University of Southampton, where in 1974 he discovered a technique for monitoring chemical processes at surfaces. This and his later work on ultra-small electrodes made him a respected figure in electrochemistry.
After officially retiring, he conducted his work with Pons at the University of Utah in the late 1980s. They claimed that the electrolysis of lithium deuteroxide using palladium electrodes generated more energy than it consumed, presumably because of fusion of deuterium atoms packed densely into the ‘hydrogen sponge’ of the palladium metal. Their announcement of the results in a press conference – before publication of a paper, accompanied by very scanty evidence, and scooping a similar claim by a team at the nearby Brigham Young University – ensured that cold fusion was controversial from the outset. At the April 1989 meeting of the American Chemical Society, Fleischmann and Pons were welcomed like rock stars for apparently having achieved what physicists had been trying to do for decades: to liberate energy by nuclear fusion.
Things quickly fell apart. Genuine fusion should be accompanied by other telltale signatures, such as the formation of helium and the emission of neutrons with a particular energy. The claim also needed to be checked against control experiments using ordinary hydrogen in place of deuterium. Pons and Fleischmann were evasive when asked whether they had done these checks, or what the results were, and the only paper they published on the subject offered no clarification. Several other groups soon reported ‘excess heat’ and other putative fusion signatures, but the claims were never repeatable, and several exhaustive studies failed to find convincing evidence for fusion. The affair ended badly, amidst lawsuits, recriminations and accusations of fraud.
Fleischmann always maintained that cold fusion was real, albeit perhaps not quite the phenomenon he’d originally thought. The pattern of marginal and irreproducible effects and ad hoc, shifting explanations fits Irving Langmuir’s template of “pathological science”. But even now, some cling to the alluring dream that cold fusion could be an energy source.
Thursday, September 12, 2013
Remembering the memory
Here’s my second piece for La Recherche’s special issue in August on scientific controversies – this one on the ‘memory of water’.
_____________________________________________________________
So far, “The Memory of Water” has been used as the title of a play, two movies, a collection of poems and a rock song. When the French immunologist Jacques Benveniste proposed in 1988 that water has a memory, he gave birth to a catchphrase with considerable cultural currency.
But Benveniste, who died in 2004, also ignited a scientific controversy that is still simmering a quarter of a century later. While most physicists and chemists consider Benveniste’s original idea – that water can retain a memory of substances it has dissolved, so that they can display chemical effects even when diluted to vanishing point – to be inconsistent with all we know about the properties of liquid water, Benveniste’s former colleagues and a handful of converts still believe there was something in it.
The claim would be provocative under any circumstances. But the dispute is all the fiercer because Benveniste’s ‘memory of water’ seems to offer an explanation for how homeopathy can work. This ‘alternative’ medical treatment, in which putative remedies are so diluted that no active ingredients remain, has a huge following worldwide, and is particularly popular in France. But most medical practitioners consider it to be sheer superstition sustained by ignorance and the placebo effect.
Yet while there seems no good reason to believe that water has a ‘memory’, no one is quite sure how to account for the peculiar results Benveniste reported in 1988. This episode illustrates how hard it is for science to deal with deeply unorthodox findings, especially when they bear on wider cultural issues. In such cases an objective assessment of the data might not be sufficient, and perhaps not even possible, and the business of doing science is revealed for the human endeavour that it is, with all its ambiguities, flaws and pitfalls.
Rise and fall
Benveniste did not set out to ‘discover’ anything about water. As the head of Unit 200 of the French national medical research organization INSERM in Clamart on the edge of Paris, he was respected for his work on allergic responses. In 1987 he and his team spotted something strange while investigating the response of a type of human white blood cell, called basophils, to antibodies. Basophils patrol the bloodstream for foreign particles, and are triggered into releasing histamine – a response called degranulation – when they encounter allergy-inducing substances called allergens. Degranulation begins when allergens attach to antibodies called immunoglobulin E (IgE) anchored to the basophil surface. Benveniste’s team were using a ‘fake allergen’ to initiate this process: another antibody called anti-IgE, produced in non-human animals.
The researchers sometimes found that degranulation happened even when the concentration of anti-IgE was too low to be expected to have any effect. Benveniste and colleagues diluted a solution of anti-IgE gradually and monitored the amount of basophil degranulation. Basic chemistry suggests that the activity of anti-IgE should fall smoothly to zero as its concentration falls. But instead, the activity seemed to rise and fall almost rhythmically as the solution got more dilute. Even stranger, it went on behaving that way when the solution was so dilute that not a single anti-IgE molecule should remain.
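To see why “not a single anti-IgE molecule should remain” at the highest dilutions, here is a simple illustration of the arithmetic (the starting concentration and sample volume below are round-number assumptions of my own, not figures from Benveniste’s protocol): repeated tenfold dilution drives the expected number of antibody molecules in a sample below one surprisingly quickly.

# Illustrative serial-dilution arithmetic; the starting values are assumptions,
# not Benveniste's actual experimental parameters.
AVOGADRO = 6.022e23   # molecules per mole

conc_molar = 1e-6     # assumed starting concentration of anti-IgE, in mol/L
volume_l = 1e-3       # assumed sample volume, in litres (1 mL)

for step in range(0, 31, 5):
    conc = conc_molar / 10**step               # concentration after 'step' tenfold dilutions
    molecules = conc * volume_l * AVOGADRO     # expected number of molecules in the sample
    print(f"after {step:>2} tenfold dilutions: ~{molecules:.3g} molecules expected")

# With these assumptions, fewer than one molecule is expected in the whole
# sample after about fifteen tenfold dilutions.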
That made no sense. How can molecules have an effect if they’re not there? Benveniste considered this finding striking enough to submit to Nature.
The editor of Nature at that time was John Maddox, who often displayed empathy for outsiders and a healthy scepticism of smug scientific consensus. Rather against the wishes of his staff, he insisted on sending the paper for peer review. The referees were puzzled but could find no obvious flaw in Benveniste’s experiments. After they had been replicated in independent laboratories in Canada, Italy and Israel, there seemed to be no option but to publish Benveniste’s paper, which Nature did in June 1988 [E. Davenas et al., Nature 333, 816 (1988)] – accompanied by an editorial from Maddox admitting that “There is no objective explanation of these observations.”
Hope for homeopathy?
The Nature paper caused pandemonium. It was clear at once that Benveniste’s results seemed to be offering scientific validation of homeopathy, the system of medicine introduced in the early nineteenth century by the German physician Samuel Hahnemann, in which the ‘active’ ingredients, already diluted to extinction, are said to get even more potent as they get more dilute.
Advocates swear that some clinical trials support the efficacy of homeopathy, but most medical experts consider there to be no solid evidence that it is effective beyond what would be expected from placebo effects. Even many homeopaths admit that there is no obvious scientific way to account for the effects they claim.
Not, at least, until the memory of water. “Homeopathy finds scientific support”, proclaimed Newsweek after Benveniste’s paper was published.
But how could water do this? The French team evidently had no idea. They suggested that “water could act as a ‘template’ for the [anti-IgE] molecule” – but this made no sense. For one thing, they evidently meant it the other way round: the antibody was acting as a template to imprint some kind of molecular structure on water, which could then act as a surrogate when the antibody was diluted away. But why should a negative imprint of the molecule act like the molecule itself? In any case, the properties of antibodies don’t just depend on their shape, but on the positions of particular chemical groups within the folded-up protein chain. And most of all, water is a liquid: its H2O molecules are constantly on the move in a molecular dance, sticking to one another by weak chemical bonds for typically just a trillionth of a second before separating to form new configurations. Any imprint would be washed away in an instant. If Benveniste and colleagues were right, shouldn’t water show the same behaviour as everything it has ever dissolved, making it sweet, salty, biologically active, toxic?
But data are data. Or are they? That’s what Maddox had begun to wonder. To get to the bottom of the affair, he launched an unprecedented investigation into INSERM Unit 200. Maddox travelled to Clamart to watch Benveniste’s team repeat their measurements before his eyes, accompanied by American biologist Walter Stewart, a ‘fraud-buster’ at the National Institutes of Health who had previously investigated allegations of misconduct in the laboratory of Nobel laureate David Baltimore, and stage magician James Randi, a debunker of pseudoscientific claims like those of the ‘psychic’ Uri Geller. “So now at last confirmation of what I have always suspected”, one correspondent wrote to Nature. “Papers for publication in Nature are refereed by the Editor, a magician and his rabbit.”
The Nature team insisted that the researchers carry out a suite of double-blind experiments designed to rule out self-deception or trickery. Their conclusions were damning: “The anti-IgE at conventional dilutions caused degranulation, but at ‘high dilution’ there was no effect”, the investigators wrote [J. Maddox et al., Nature 334, 287 (1988)]. Some runs did seem to show high-dilution activity, but it was neither repeatable nor periodic as dilution increased.
Attempts by other labs to reproduce the results also failed to support Benveniste’s claims. Although occasionally they did see strange high-dilution effects, it is not at all uncommon to find anomalous results in experiments on biological systems, which are notoriously messy and sensitive to impurities or small changes in conditions. The ‘high-dilution’ claims meet all the criteria for what the American chemist Irving Langmuir called ‘pathological science’ in 1953. For Langmuir, this was the science of “things that aren’t so”: phenomena that are illusory. Langmuir adduced several distinguishing features: the effects always operate at the margins of detectability, for example, and their supporters generally meet criticisms with ad hoc excuses dreamed up on the spur of the moment. His criteria apply equally to some other modern scientific controversies, notably the claim by Russian scientists in the late 1960s to have discovered a new, waxy form of water called polywater, and the claims of ‘cold nuclear fusion’ achieved using benchtop chemistry by Martin Fleischmann and Stanley Pons in Utah in 1989 [coming up next!].
Disappearing act
After Maddox’s investigation, most scientists dismissed the memory of water as a chimera. But Benveniste never recanted. He was sacked from INSERM after ignoring instructions not to pursue the high-dilution work, but he continued it with private funds, having attracted something of a cult following. These studies led him to conclude that water acts as a “vehicle for [biological] information”, carrying the signal that somehow encodes the biomolecule’s activity. Benveniste eventually decided that water can be “programmed” to behave like any biological agent – proteins, bacteria, viruses – by electromagnetic signals that can be recorded and sent down telephone wires. In 1997 he set up a private company, DigiBio, to promote this field of “digital biology”, and it is rumoured that the US Department of Defense funded research on this putative ‘remote transmission’ process.
Such studies continue after his death, and have recently acquired a high-profile supporter: the immunologist Luc Montagnier, who was awarded the 2008 Nobel prize for the co-discovery of the AIDS virus HIV. Montagnier believes that the DNA molecule itself can act as both a transmitter and a receiver of ultralow-frequency electromagnetic signals that can broadcast biological effects. He believes that the signals emitted by pathogen DNA could be used to detect infection. He maintains that these emissions do not depend on the amount of DNA in suspensions of pathogens, and are sometimes detectable at very high dilution. They might originate, he says, from quantum effects in the water surrounding the DNA and other biological structures, according to a controversial theory that has also been invoked to explain Benveniste’s experiments [E. Del Giudice et al. Phys. Rev. Lett. 61, 1085 (1988)].
“Benveniste was rejected by everybody, because he was too far ahead”, Montagnier has said [Science 330, 1732 (2010)]. “I think he was mostly right but the problem was that his results weren't 100% reproducible.” In 2010 Montagnier began research on high-dilution DNA at a new research institute at Jiaotong University in Shanghai. “It's not pseudoscience, it's not quackery”, he insists. “These are real phenomena which deserve further study.” He is currently the head of the World Foundation for AIDS Research and Prevention in Paris, but his unorthodox views on water’s ‘memory’ have prompted some leading researchers to question his suitability to head AIDS projects.
Meanwhile, the idea that the undoubtedly unusual molecular structure of water – a source of continued controversy in its own right [see e.g. here and here] – might contrive to produce high-dilution effects still finds a few supporters among physical chemists. Homeopaths have never relinquished the hope that the idea might grant them the scientific vindication they crave: a special issue of the journal Homeopathy in 2007 was devoted to scientific papers purporting to explore water’s ‘memory’, although none provided either clear evidence for its existence or a plausible explanation for its mechanism [see here].
Such efforts remain firmly at the fringes of science. But what must we make of Benveniste’s claims? While inevitably the suspicion of fraud clouds such events, my own view – I joined Nature just after the ‘memory of water’ paper was published, and spoke to Benveniste shortly before his death – is that he fully believed what he said. A charming and charismatic man, he was convinced that he had been condemned by the ‘scientific priesthood’ for heresy. The irony is that he never recognized how his nemesis Maddox shared his maverick inclinations.
The “Galileo” rhetoric that Benveniste deployed is common among those who feel they have been ‘outlawed’ for their controversial scientific claims. But Benveniste never seemed to know how to make his results convincing, other than to pile up more of them. Faced with a puzzling phenomenon, the scientist’s instinct should be to break it down, to seek it in simpler systems that are more easily understood and controlled, and to pinpoint where the anomalies arise. In contrast, Benveniste studied ever more complicated biological systems – bacteria, plants, guinea pigs – until neither he nor anyone else could really tell what was going on. The last talk I saw his team deliver, in 2004, was a riot of graphs and numbers presented in rapid succession, as though any wild idea could be kept in the air so long as no one paused to examine it.
This, perhaps, is the lesson of the memory of water: when you have a truly weird and remarkable result in science, your first duty is to try to show not why it must be true, but why it cannot be.
The antimony wars
The August issue of La Recherche has the theme of ‘controversies in science’. I wrote several pieces for it – this is the first, on the battle between the Galenists and Paracelsians in the French court in the early 17th century.
_____________________________________________
“I am different”, the sixteenth-century Swiss alchemist and physician Paracelsus once wrote, adding “let this not upset you”. But he upset almost everyone who came into contact with him and his ideas, and his vision of science and medicine continued to spark dispute for at least a hundred years after his death in 1541. For Paracelsus wanted to pull up by its roots the entire system of medicine and natural philosophy that originated with the ancient Greeks – particularly Aristotle – and replace it with a system that seemed to many to have more in common with the practices of mountebanks and peasant healers.
Paracelsus – whose splendid full name was Philip Theophrastus Aureolus Bombastus von Hohenheim – had a haphazard career as a doctor, mostly in the German territories but also in Italy, France and, if his own accounts can be believed, as far afield as Sweden, Russia and Egypt. Born in the Swiss village of Einsiedeln, near Zurich, into a noble Swabian family fallen on hard times, he trained in medicine in the German universities and Ferrara in Italy before wandering throughout Europe offering his services. He attended kings and treated peasants, sometimes with a well-filled purse but more often penniless. Time and again his argumentative nature ruined his chances of a stable position: at one time town physician of Basle, he made himself so unpopular with the university faculty and the authorities that he had to flee under cover of darkness to avoid imprisonment.
Paracelsus could be said to have conceived of a Theory of Everything: a system that explained medicine and the human body, alchemy, astrology, religion and the fundamental structure of the cosmos. He provided one of the first versions of what science historians now call the ‘chemical philosophy’: a theory that makes chemical transformation the analogy for all processes. For Paracelsus, every natural phenomenon was essentially an alchemical process. The rising of moisture from the earth and its falling back as rain was the equivalent of distillation and condensation in the alchemist’s flask. Growth of plants and animals from seeds was a kind of alchemy too, and in fact even the Biblical creation of the world was basically an alchemical process: a separation of earth from water. This philosophy seems highly fanciful now, but it was nonetheless rational and mechanistic: it could ascribe natural and comprehensible causes to events.
Although Paracelsus was one of the most influential advocates of these ideas in the early Renaissance, they weren’t entirely his invention (although he characteristically exaggerated his originality). The chemical philosophy was rooted in the tradition known as Neoplatonism, derived from the teachings of Plato but shaped into a kind of mystical philosophy by the third-century Greek philosopher Plotinus. One of the central ideas of Neoplatonism is the correspondence between the macrocosm and the microcosm, so that events that occurred in the heavens and in the natural world have direct analogies within the human body – or with the processes conducted in an alchemist’s flasks and retorts. This correspondence provided the theoretical basis for a belief in astrology, although Paracelsus denied that our destiny is absolutely fixed by our horoscope. He proposed that the macro-micro correspondence led to ‘signatures’ in nature which revealed, for example, the medical uses of plants: those shaped like a kidney could treat renal complaints. These signatures were signs left by God to guide the physician towards the proper use of herbal medicines. They exemplify the symbolic character of the chemical philosophy, which was based on such analogies of form and appearance.
What the chemical philosophy implied for medicine conflicted with the tradition taught to physicians at the universities, which drew on ideas from antiquity, particularly those attributed to the Greek philosopher Hippocrates and the Roman doctor Galen. This classical tradition asserted that our health is governed by four bodily fluids called humours: blood, phlegm, and black and yellow bile. Illness results from an imbalance of the humours, and the doctor’s task was to restore this balance – by drugs, diet or, commonly, by blood-letting.
Academic doctors in the Middle Ages adopted the humoral system as the theoretical basis of their work, but its connection to their working practices was generally rather tenuous. Often they prescribed drugs, made from herbs or minerals and sold by medieval pharmacists called apothecaries. Doctors charged high fees for their services, which only merchants and nobles could afford. They were eminent in society, and often dressed lavishly.
Paracelsus despised all of this. He did not share the doctors’ disdain of manual work, and he hated how they paraded their wealth. Worse still, he considered that the whole foundation of classical medicine, with its doctrine of humours, was mistaken. When he discovered at university that becoming a doctor of medicine was a matter of simply learning and memorizing the books of Galen and Avicenna, he was outraged. He insisted that it was only through experience, not through book-learning, that one could become a true healer.
By bringing an alchemical perspective to the study of life and medicine, Paracelsus helped to unify the sciences. Previously, alchemy had been about the transmutation of metals. But for Paracelsus, its principal purpose was to make medicines. Just as alchemists could mimic the natural transmutation of metals, so could they use alchemical medicines to bring about the natural process of healing. This was possible, in fact, because human biology was itself a kind of alchemy. In one of his most fertile ideas, Paracelsus asserted that there is an alchemist inside each one of us, a kind of principle that he called the archeus, which separates the good from the bad in the food and drink that we ingest. The archeus uses the good matter to make flesh and blood, and the bad is expelled as waste. Paracelsus devised a kind of bio-alchemy, the precursor to modern biochemistry, which indeed now regards nature as a superb chemist that takes molecules apart and puts them back together as the constituents of our cells.
Most of all, Paracelsus argued that medicine should involve the use of specific chemical drugs to treat specific ailments: it was a system of chemotherapy, which had little space for the general-purpose blood-letting treatments prescribed by the humoral theory. This Paracelsian, chemical approach to healing became known in the late sixteenth century as ‘iatrochemistry’, meaning the chemistry of medicine.
Paracelsus was able to publish relatively little of his writings while he was alive, but from around 1560 several publishers scoured Europe for his manuscripts and published compendia of Paracelsian medicine. Once in print, his ideas attracted adherents, and by the last decades of the century Paracelsian medicine was exciting furious debate between traditionalists and progressives. Iatrochemistry found a fairly receptive audience in England, but the disputes it provoked in France were bitter, especially among the conservative medical faculty of the University of Paris.
That differing reception was partly motivated by religion. Paracelsus belonged to no creed, but he was widely identified with the Reformation – he even compared himself to Martin Luther – and so his views found more sympathy from Protestants than Catholics. The religious tensions were especially acute in France when the Huguenot prince of Navarre was crowned Henri IV in 1589. Fears that Henri would create a Huguenot court seemed confirmed when the new king appointed the Swiss doctor Jean Ribit as his premier médecin, and summoned also two other Huguenot doctors with Paracelsian ideas, the Gascon Joseph Duchesne and another Genevan, Theodore Turquet de Mayerne.
In 1603 Jean Riolan, the head of the Paris medical faculty, published an attack on Mayerne and Duchesne, asserting the supremacy of the medicine of Hippocrates and Galen. Although these two Paracelsians sought to defend themselves, they only secured a retraction of this damning charge by agreeing to practice medicine according to the rules of the classical authorities.
But the Paracelsians struck back. Around 1604, Ribit and Mayerne helped a fellow Huguenot and iatrochemist named Jean Béguin set up a pharmaceutical laboratory in Paris to promote chemical medicine. In 1610 Béguin published a textbook laying out the principles of iatrochemistry in a clear, straightforward manner free from the convoluted style and fanciful jargon used by Paracelsus. When this Latin text was translated into French five years later as Les elemens de chymie, it served much the same propagandizing role as Antoine Lavoisier’s Traité élémentaire de chimie did for Lavoisier’s own system of chemistry at the end of the eighteenth century.
But the war between the Galenists and the Paracelsians raged well into the seventeenth century. Things looked bad for the radicals when Henri IV, who had been prevented in 1609 from making Mayerne his new premier médecin, was assassinated the following year. Lacking royal protection, Mayerne took up an earlier offer from James I of England and fled there, where he flourished.
Yet when Riolan’s equally conservative son (also Jean) drew up plans for a royal herb garden in 1618, he did not anticipate that this institution would finally be established 20 years later as the Jardin du Roi by the iatrochemist Gui de la Brosse. In 1647 the Jardin appointed the first French professor of chemistry, a Scotsman named William Davidson, who was an ardent Paracelsian.
Most offensive of all to the Paris medical faculty was Davidson’s support for the medical use of antimony. Ever since the start of the century, Paracelsians and Galenists had been split over whether antimony was a cure or poison (it is in fact quite toxic). Davidson’s claim that “there is no more lofty medicine under heaven” so enraged the faculty that they hounded him from his post in 1651, when the younger Riolan republished his father’s condemnation of Duchesne and Mayerne.
Yet it was all too late for the Galenists, for the Jardin du Roi, which became one of the most influential institutions in French chemistry and medicine, continued to support iatrochemistry. The professors there produced a string of successful chemical textbooks, most famously that of Nicolas Lemery, called Cours de chimie, in 1675. These men were sober, practical individuals who helped to strip iatrochemistry of its Paracelsian fantasies and outlandish jargon. They placed chemical medicine, and chemistry itself, on a sound footing, paving the way to Lavoisier’s triumphs.
What was this long and bitter dispute really about? Partly, of course, it was a power struggle: over who had the king’s ear, but also who should dictate the practice (and thus reap the financial rewards) of medicine. But it would be too easy to cast Riolan and his colleagues as outdated reactionaries. After all, they were right about antimony (if for the wrong reasons) – and they were right too to criticize some of the wild excesses of Paracelsus’s ideas. Their opposition forced the iatrochemists to prune those ideas, sorting the good from the bad. Besides, since no kind of medicine was terribly effective in those days, there wasn’t much empirical justification for throwing out the old ways. The dispute is a reminder that introducing new scientific ideas may depend as much on the power of good rhetoric as on the evidence itself. And it shows that in the end a good argument can leave science healthier.
_____________________________________________
“I am different”, the sixteenth-century Swiss alchemist and physician Paracelsus once wrote, adding “let this not upset you”. But he upset almost everyone who came into contact with him and his ideas, and his vision of science and medicine continued to spark dispute for at least a hundred years after his death in 1541. For Paracelsus wanted to pull up by its roots the entire system of medicine and natural philosophy that originated with the ancient Greeks – particularly Aristotle – and replace it with a system that seemed to many to have more in common with the practices of mountebanks and peasant healers.
Paracelsus – whose splendid full name was Philip Theophrastus Aureolus Bombastus von Hohenheim – had a haphazard career as a doctor, mostly in the German territories but also in Italy, France and, if his own accounts can be believed, as far afield as Sweden, Russia and Egypt. Born in the Swiss village of Einsiedeln, near Zurich, into a noble Swabian family fallen on hard times, he trained in medicine in the German universities and Ferrara in Italy before wandering throughout Europe offering his services. He attended kings and treated peasants, sometimes with a well-filled purse but more often penniless. Time and again his argumentative nature ruined his chances of a stable position: at one time town physician of Basle, he made himself so unpopular with the university faculty and the authorities that he had to flee under cover of darkness to avoid imprisonment.
Paracelsus could be said to have conceived of a Theory of Everything: a system that explained medicine and the human body, alchemy, astrology, religion and the fundamental structure of the cosmos. He provided one of the first versions of what science historians now call the ‘chemical philosophy’: a theory that makes chemical transformation the analogy for all processes. For Paracelsus, every natural phenomenon was essentially an alchemical process. The rising of moisture from the earth and its falling back as rain was the equivalent of distillation and condensation in the alchemist’s flask. Growth of plants and animals from seeds was a kind of alchemy too, and in fact even the Biblical creation of the world was basically an alchemical process: a separation of earth from water. This philosophy seems highly fanciful now, but it was nonetheless rational and mechanistic: it could ascribe natural and comprehensible causes to events.
Although Paracelsus was one of the most influential advocates of these ideas in the early Renaissance, they weren’t entirely his invention (although he characteristically exaggerated his originality). The chemical philosophy was rooted in the tradition known as Neoplatonism, derived from the teachings of Plato but shaped into a kind of mystical philosophy by the third-century Greek philosopher Plotinus. One of the central ideas of Neoplatonism is the correspondence between the macrocosm and the microcosm, so that events that occurred in the heavens and in the natural world have direct analogies within the human body – or with the processes conducted in an alchemist’s flasks and retorts. This correspondence provided the theoretical basis for a belief in astrology, although Paracelsus denied that our destiny is absolutely fixed by our horoscope. He proposed that the macro-micro correspondence led to ‘signatures’ in nature which revealed, for example, the medical uses of plants: those shaped like a kidney could treat renal complaints. These signatures were signs left by God to guide the physician towards the proper use of herbal medicines. They exemplify the symbolic character of the chemical philosophy, which was based on such analogies of form and appearance.
What the chemical philosophy implied for medicine conflicted with the tradition taught to physicians at the universities, which drew on ideas from antiquity, particularly those attributed to the Greek physician Hippocrates and the Roman doctor Galen. This classical tradition asserted that our health is governed by four bodily fluids called humours: blood, phlegm, and black and yellow bile. Illness results from an imbalance of the humours, and the doctor’s task is to restore that balance – by drugs, diet or, commonly, by blood-letting.
Academic doctors in the Middle Ages adopted the humoral system as the theoretical basis of their work, but its connection to their working practices was generally rather tenuous. Often they prescribed drugs, made from herbs or minerals and sold by medieval pharmacists called apothecaries. Doctors charged high fees for their services, which only merchants and nobles could afford. They were eminent in society, and often dressed lavishly.
Paracelsus despised all of this. He did not share the doctors’ disdain of manual work, and he hated how they paraded their wealth. Worse still, he considered that the whole foundation of classical medicine, with its doctrine of humours, was mistaken. When he discovered at university that becoming a doctor of medicine was a matter of simply learning and memorizing the books of Galen and Avicenna, he was outraged. He insisted that it was only through experience, not through book-learning, that one could become a true healer.
By bringing an alchemical perspective to the study of life and medicine, Paracelsus helped to unify the sciences. Previously, alchemy had been about the transmutation of metals. But for Paracelsus, its principal purpose was to make medicines. Just as alchemists could mimic the natural transmutation of metals, so could they use alchemical medicines to bring about the natural process of healing. This was possible, in fact, because human biology was itself a kind of alchemy. In one of his most fertile ideas, Paracelsus asserted that there is an alchemist inside each one of us, a kind of principle that he called the archeus, which separates the good from the bad in the food and drink that we ingest. The archeus uses the good matter to make flesh and blood, and the bad is expelled as waste. Paracelsus devised a kind of bio-alchemy, the precursor to modern biochemistry, which indeed now regards nature as a superb chemist that takes molecules apart and puts them back together as the constituents of our cells.
Most of all, Paracelsus argued that medicine should involve the use of specific chemical drugs to treat specific ailments: it was a system of chemotherapy, which had little space for the general-purpose blood-letting treatments prescribed by the humoral theory. This Paracelsian, chemical approach to healing became known in the late sixteenth century as ‘iatrochemistry’, meaning the chemistry of medicine.
Paracelsus was able to publish relatively little of his writing during his lifetime, but from around 1560 several publishers scoured Europe for his manuscripts and issued compendia of Paracelsian medicine. Once in print, his ideas attracted adherents, and by the last decades of the century Paracelsian medicine was exciting furious debate between traditionalists and progressives. Iatrochemistry found a fairly receptive audience in England, but the disputes it provoked in France were bitter, especially among the conservative medical faculty of the University of Paris.
That difference in reception was partly a matter of religion. Paracelsus belonged to no creed, but he was widely identified with the Reformation – he even compared himself to Martin Luther – and so his views found more sympathy among Protestants than Catholics. The religious tensions were especially acute in France when the Huguenot prince of Navarre was crowned Henri IV in 1589. Fears that Henri would create a Huguenot court seemed confirmed when the new king appointed the Swiss doctor Jean Ribit as his premier médecin, and also summoned two other Huguenot doctors with Paracelsian ideas, the Gascon Joseph Duchesne and another Genevan, Theodore Turquet de Mayerne.
In 1603 Jean Riolan, the head of the Paris medical faculty, published an attack on Mayerne and Duchesne, asserting the supremacy of the medicine of Hippocrates and Galen. Although the two Paracelsians sought to defend themselves, they secured a retraction of these damning charges only by agreeing to practise medicine according to the rules of the classical authorities.
But the Paracelsians struck back. Around 1604, Ribit and Mayerne helped a fellow Huguenot and iatrochemist named Jean Béguin set up a pharmaceutical laboratory in Paris to promote chemical medicine. In 1610 Béguin published a textbook laying out the principles of iatrochemistry in a clear, straightforward manner, free from the convoluted style and fanciful jargon used by Paracelsus. When this Latin text was translated into French five years later as Les elemens de chymie, it served much the same propagandizing role as Antoine Lavoisier’s Traité élémentaire de chimie did for his own system of chemistry at the end of the eighteenth century.
But the war between the Galenists and the Paracelsians raged well into the seventeenth century. Things looked bad for the radicals when Henri IV, who had been prevented in 1609 from making Mayerne his new premier médecin, was assassinated the following year. Lacking royal protection, Mayerne took up an earlier offer from James I of England and fled there, where he flourished.
Yet when Riolan’s equally conservative son (also Jean) drew up plans for a royal herb garden in 1618, he did not anticipate that this institution would finally be established 20 years later as the Jardin du Roi by the iatrochemist Gui de la Brosse. In 1647 the Jardin appointed the first French professor of chemistry, a Scotsman named William Davidson, who was an ardent Paracelsian.
Most offensive of all to the Paris medical faculty was Davidson’s support for the medical use of antimony. Ever since the start of the century, Paracelsians and Galenists had been split over whether antimony was a cure or a poison (it is in fact quite toxic). Davidson’s claim that “there is no more lofty medicine under heaven” so enraged the faculty that they hounded him from his post in 1651, when the younger Riolan republished his father’s condemnation of Duchesne and Mayerne.
Yet it was all too late for the Galenists, for the Jardin du Roi, which became one of the most influential institutions in French chemistry and medicine, continued to support iatrochemistry. The professors there produced a string of successful chemical textbooks, most famously Nicolas Lemery’s Cours de chymie of 1675. These men were sober, practical individuals who helped to strip iatrochemistry of its Paracelsian fantasies and outlandish jargon. They placed chemical medicine, and chemistry itself, on a sound footing, paving the way to Lavoisier’s triumphs.
What was this long and bitter dispute really about? Partly, of course, it was a power struggle: over who had the king’s ear, but also who should dictate the practice (and thus reap the financial rewards) of medicine. But it would be too easy to cast Riolan and his colleagues as outdated reactionaries. After all, they were right about antimony (if for the wrong reasons) – and they were right too to criticize some of the wild excesses of Paracelsus’s ideas. Their opposition forced the iatrochemists to prune those ideas, sorting the good from the bad. Besides, since no kind of medicine was terribly effective in those days, there wasn’t much empirical justification for throwing out the old ways. The dispute is a reminder that introducing new scientific ideas may depend as much on the power of good rhetoric as on the evidence itself. And it shows that in the end a good argument can leave science healthier.
Tuesday, September 10, 2013
Before it gets too stale, here is an earlier piece for BBC Future.
_________________________________________________________
It’s time for one of those imagined futures which always miss the mark by a mile – you know, “Imagine setting off for work with your jet-pack…” But here we go anyway: imagine that photographs, newspapers and books speak, that you can play music out of your curtains, that food wrapping calls out “I’m nearly past my sell-by date!” OK, so perhaps it’s all a bit nightmarish rather than utopian, but the point is that some weird and wonderful things would be possible if a loudspeaker could be made as thin, light and flexible as a sheet of paper.
That’s what is envisaged in a study by Andrew Barnard and colleagues at the Pennsylvania State University. They have revisited an idea nearly a hundred years old, and sounding decidedly steampunk: the thermophone, or thermoacoustic loudspeaker, in which sound is generated by a material rapidly oscillating between hot and cold. In 1917 Harold Arnold and I. B. Crandall of the American Telephone and Telegraph Company and Western Electric Company showed that they could create sound by simultaneously passing alternating and direct currents through a very thin platinum foil. This heats up the foil, and the heat is conducted into the air surrounding it, in pulses paced by the frequency of the alternating current.
A sound wave in air corresponds to an oscillation of the air pressure. An ordinary loudspeaker generates those pressure waves via a mechanical vibration of a membrane. But air pressure is also altered when the air gets hotter or cooler. So the thermal oscillations of Arnold and Crandall’s platinum film also generated a sound wave – without any of the cumbersome, heavy electromagnets used to excite vibrations in conventional speakers, or indeed without moving parts at all.
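As an aside on why both kinds of current are needed: Joule heating goes as the square of the total current, so a purely alternating current would heat the foil (and so make sound) at twice the drive frequency, whereas a d.c. bias puts the dominant heating component at the drive frequency itself. The numerical sketch below is just my own illustration of that bookkeeping, with an arbitrary drive frequency and arbitrary amplitudes; it is not taken from the Barnard paper.

```python
# Minimal sketch (not from the Barnard paper): why Arnold and Crandall passed
# a.c. and d.c. together. Joule heating scales with the square of the total
# current, so the d.c. bias creates a strong heating term at the drive frequency.
import numpy as np

f_drive = 1000.0                                    # drive frequency in Hz (illustrative)
t = np.linspace(0.0, 0.05, 50000, endpoint=False)   # 50 ms sampled at 1 MHz
i_dc, i_ac = 1.0, 0.5                               # bias and signal amplitudes (arbitrary units)

current = i_dc + i_ac * np.sin(2 * np.pi * f_drive * t)
heating = current**2                                # instantaneous Joule heating, up to a resistance factor

spectrum = np.abs(np.fft.rfft(heating))
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])

for f in (f_drive, 2 * f_drive):
    k = np.argmin(np.abs(freqs - f))
    print(f"heating component at {f:.0f} Hz: {spectrum[k]:.1f}")

# With the d.c. bias, the component at f_drive dominates the one at 2*f_drive;
# set i_dc = 0 and only the frequency-doubled term survives.
```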
The problem was that the sound wasn’t very loud, and the frequency response wasn’t up to reproducing speech. So the idea was shelved for almost a century.
It was revitalized in 2008, when a team in China found that they could extract thermoacoustic sound from a new material: a thin, transparent film made from microscopic tubes called carbon nanotubes (CNTs), aligned parallel to the plane of the film. These tiny tubes, whose walls are one atom thick and made from pure carbon, are highly robust, need very little heat input to warm them up, and are extremely good heat conductors – just what is needed, in other words, to finally put the idea of Arnold and Crandall into practice and create gossamer-thin loudspeakers.
The Chinese team, led by Lin Xiao at Tsinghua University, showed that they could get their CNT films to emit sound. But that’s not the same as making a loudspeaker that will produce good-quality sound over the whole frequency range of human hearing, from a few tens of hertz (oscillations per second) up to 20,000 or so. So while the CNT speakers might have valuable applications such as sonar – they work perfectly well underwater – it isn’t yet clear whether they can produce hi-fi-quality sound in your living room.
That’s what Barnard and colleagues have sought to assess. One of the factors determining the loudness of the devices is how efficiently heat can be transferred into the surrounding gas to induce pressure waves. This depends on how much the gas heats up for a given input of heat energy: a property called the heat capacity. In a gas with a low heat capacity, even a small energy input creates a big change in temperature, and thus in pressure. So the sound output can be improved by surrounding the CNT film with a gas that has a lower heat capacity than air, such as the inert gases helium, argon or xenon. Xiao’s team has already demonstrated this effect, and Barnard and colleagues now show that it offers perhaps the best avenue for improving the performance of these devices. To transmit the acoustic vibrations of the inert gas to the air beyond, so that we can hear the results, one would separate the gas and air with a flexible membrane.
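To put rough numbers on that (my own back-of-envelope sketch, not a calculation from the Barnard paper): for monatomic gases such as helium, argon and xenon the molar heat capacity at constant pressure is about 5/2 of the gas constant, against roughly 7/2 for the diatomic nitrogen and oxygen that make up most of air, so the same pulse of heat warms a given number of inert-gas molecules about 40 per cent more.

```python
# Rough back-of-envelope sketch (not the Barnard model): temperature swing per
# joule of heat for one mole of gas, comparing air with the monatomic inert gases.
R = 8.314            # gas constant, J/(mol K)

cp = {
    "air (mostly diatomic N2/O2)": 3.5 * R,   # ~29 J/(mol K), diatomic ideal gas
    "helium / argon / xenon":      2.5 * R,   # ~21 J/(mol K), monatomic ideal gas
}

heat_pulse = 1.0     # joules delivered to one mole of gas (illustrative)

for gas, c in cp.items():
    dT = heat_pulse / c              # temperature rise, ignoring losses
    print(f"{gas:30s}  Cp = {c:5.1f} J/(mol K)   dT per joule = {dT*1000:.0f} mK")

# The monatomic gases warm (and so raise the local pressure) about 40% more
# for the same heat input, which is one reason they boost the sound output.
```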
Another way to improve the sound output is to make the surface area of the film bigger. That can be done without ending up with a carpet-sized device by stacking several sheets in layers. The Pennsylvania group has shown that this works: a four-layer speaker, for example, is significantly louder for the same power input.
All things considered, Barnard and colleagues conclude that “a high power CNT loudspeaker appears to be feasible.” But it won’t be simple: the CNT films will probably need to be enclosed and immersed in xenon, for example, which would pose serious challenges for making robust ‘wearable’ speakers.
And there is already competition. For example, a small start-up British company called Novalia has created an interactive, touch-sensitive printed poster that can generate drum-kit sounds through vibrations of the paper itself. Curiously, that technology uses electrically conducting inks made from a pure-carbon material called graphene, which is basically the same stuff as the walls of carbon nanotubes but flattened into sheets. So one way or another, these forms of ‘nanocarbon’ look destined to make our isles full of noises.
Reference: A. R. Barnard et al., Journal of the Acoustical Society of America 134, EL280 (2013).
Friday, September 06, 2013
Seven Ages of Science
I hope people have been listening to Lisa Jardine’s Seven Ages of Science on BBC Radio 4. It is very nice – a refreshingly personal and idiosyncratic take on the history of science, rather than the usual plod through the usual suspects. I made a few modest contributions to some episodes, plucked from some long but fun conversations.
Why you should appear in your papers
Here’s my latest Crucible column for Chemistry World.
________________________________________________________
The strange thing about Einstein’s classic 1905 papers on relativity, quantum theory and Brownian motion is that he is largely absent from them. That’s to say, he hardly ever uses the first person singular to put himself in the reference frame. “We have now derived…”, “We now imagine space to be…” – we and Einstein do it all together. He pops up a little thrillingly at the start of the extraordinarily brief “E=mc²” paper, but quickly vanishes beneath the passive voice and the impersonal “one concludes”.
It wasn’t his intention, but this all makes Einstein sound magisterial. Lavoisier was already vacillating 130 years earlier: he is sometimes “I” and sometimes “we” – calculatedly so, for he’s very much present in person when distinguishing his own discoveries from those of Priestley and Scheele, but tells us bossily that “we shall presently see what we ought to think” when it comes to choosing amongst them.
I’m left thinking about these questions of voice after reading a paper by ‘science studies’ researchers Daniele Fanelli of the University of Edinburgh and Wolfgang Glänzel of the Catholic University of Leuven (PLOS ONE 8, e66938; 2013). They report that a bibliometric analysis of around 29,000 papers, ranging across all the sciences from maths and physics to the social sciences, as well as some in the humanities, shows significant differences in style and content which point to a genuine hierarchy of sciences, along the lines first postulated by the French philosopher Auguste Comte in the 1830s. As we would put it today, physics and maths are the ‘hardest’ sciences, and they become progressively ‘softer’ as we move through chemistry, the life sciences, and the social sciences. The key criterion the authors use for this classification is the degree of consensus in the field, as revealed for example by the number, age and overlap of references.
There’s a lot to discuss in these interesting findings; but one aspect that caught my attention was the authors’ comparison of whether or not papers use personal pronouns. “Scientists aim at making universal claims, and their style of writing tends to be as impersonal as possible”, say Fanelli and Glänzel. “In the humanities, on the other hand, the emphasis tends to be on originality, individuality and argumentation, which makes the use of first person more common.” They found that indeed the ‘harder’ sciences tend to use personal pronouns less often.
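Counting pronouns is, at root, a simple text-mining exercise. The sketch below is merely my own toy illustration of that kind of measure; it is not Fanelli and Glänzel’s actual method, and the example sentences are invented.

```python
# Toy illustration (not Fanelli and Glanzel's actual pipeline): count how often
# first-person pronouns appear in a piece of scientific prose.
import re

FIRST_PERSON = {"i", "we", "my", "our", "us"}

def first_person_rate(text):
    """Fraction of words that are first-person pronouns."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in FIRST_PERSON)
    return hits / len(words)

passive = "It is shown that the reaction proceeds via a radical intermediate."
active  = "We show that the reaction proceeds via a radical intermediate."

print(first_person_rate(passive))   # 0.0
print(first_person_rate(active))    # 0.1 (1 word in 10)
```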
The assumption here is that an impersonal, passive voice suggests a universal truth. It really does suggest that – and that’s the whole point. Fanelli and Glänzel’s implication that the passive voice reflects science’s ability to deliver absolute knowledge is a case of science falling for its own tricks. Scientists actively cultivated the impersonal tone as a rhetorical device to persuade and convey authority. This process began with the institutionalization of science in the seventeenth century, and it was a feature of what historian Steven Shapin has called the “literary technology” of that age: a style of writing calculated to sound convincing.
There were good reasons for this, to be sure. Experimental scientists like Robert Boyle wanted to free themselves from the claims of the Renaissance magi to have received deep insights through personal revelation; on the contrary, they’d found stuff out using procedures that anyone (with sufficient care and education) could conduct. So it didn’t matter any more who you were, an attitude encapsulated in Claude Bernard’s remark in 1865 that “Art is I; Science is We.” Or better still, science is “It is shown that…”
Yet the pendulum is swinging. Many books advising how to write scientific papers tend now to recommend the active voice. For example, in Successful Scientific Writing (Cambridge University Press, 1996), Janice Matthews and Robert Matthews say “Many scientists overuse the passive voice. They seem to feel that every sentence must be written in passive terms, and they undergo elaborate contortions to do so.” But the passive voice, the authors say, “often obscures your true meaning and compounds your chances of producing pompous prose.” The American Institute of Physics, American Chemical Society and American Medical Association all recommend the active voice and use of pronouns, although they accept the passive voice for methods sections.
I would go further. If scientists care about precise reporting, they should insist on planting themselves in their papers. Their fallibility, preconceptions and opinions are a part of the picture, and it’s misleading to imply otherwise. For many of the scientists who, during my years as an editor at Nature, balked at writing “I” rather than “We” in their single-author papers, the worry was not that they’d seem less authoritative but rather, too arrogant. But I suspect “I” also seemed disturbingly exposing. Either way, if you did the work, you’ve got to admit to it.
Thursday, September 05, 2013
How plastics got under control
Several things to catch up with after the holidays, and here’s the spoddiest first: a leader for Nature Materials celebrating the 50th anniversary of the chemistry Nobel for Ziegler and Natta.
__________________________________________________________________
One could tell the history of the twentieth century through the medium of polymers. In a weird and ramshackle way that is almost what American author Thomas Pynchon attempted in his novel Gravity’s Rainbow, which shows the German cartel IG Farben clandestinely orchestrating the Second World War and making rockets with unnerving, sensory polymer skins. But the truth is scarcely less strange and no less dominated by the agencies of conflict, commerce and politics.
Karl Ziegler, who 50 years ago won the Nobel Prize in Chemistry alongside Italian chemist Giulio Natta for their work on the stereoselective catalysis of alkene polymerization, began his work on polymerization during the Second World War to make synthetic rubber for the German war effort as supplies from the Asian rubber plantations were cut off. When the war ended Ziegler was in Halle, soon to become Russian-occupied territory, and the American authorities encouraged him to take a post at Mülheim to preserve his expertise for the West. It was there in 1953 that he discovered the organometallic compounds, such as triethylaluminium, that would not only catalyse ethylene polymerization at lower temperatures and pressures than the standard industrial process then prevailing, but also produce orderly straight-chain molecules without random branching, creating a high-density product with new potential uses.
Natta, working in Milan, was also drawn into synthetic-rubber work during the war, and once he heard about Ziegler’s discovery he realised that it could be used to make ordered polymers from other alkenes. He and his coworkers quickly discovered that ethylaluminium chloride and vanadium tetrachloride would catalyse the formation of polypropylene with a stereoregular isotactic chain structure: all the methyl side-chains on the ‘same’ side, enabling orderly crystalline packing into a solid, high-density form. The Italian chemicals company Montecatini, which funded Natta’s research, immediately developed this process on an industrial scale, and were marketing isotactic polypropylene at Ferrara by 1957 as a bulk plastic, a fibre and a packing film. Natta went on to conduct pioneering work on the synthesis of rubbers by controlled polymerization of butadiene.
Yet the stereoselective polymerization of propylene into a high-density plastic was in fact discovered independently before Ziegler and Natta, by American chemists J. Paul Hogan and Robert Banks working at the Phillips Petroleum Company in Oklahoma. They too were stimulated by the war – but in this case by its termination, which reduced the demand for oil and prompted Phillips to diversify its products. Hogan and Banks began in the early 1950s to look for ways to convert the small alkenes from oil refining into petrol. When they used a catalyst of nickel oxide and chromium oxide to process propylene, they found a solid white crystalline product.
This new, stiff plastic, marketed by Phillips from 1954 as Marlex, owed its commercial success to a craze that swept the United States in the late 1950s: the hula hoop. Demand for this toy consumed the Phillips plant’s entire output, and boosted production to a level that paved the way for more practical uses: industrial tubing, baby bottles and other household products. But the patent application filed by Hogan and Banks was contested by Ziegler’s rival claim, leading to a court battle that lasted three decades. Because of this, and since the American chemists were slow to publish, their discovery was eclipsed by the Ziegler-Natta Nobel – even though chromium catalysts are still widely used.
Even this is not the full extent of the priority dispute, for Alexander Zletz and Ron Carmody of Standard Oil of Indiana also made a partially crystalline isotactic form of polypropylene in 1950 using a molybdenum catalyst. But there’s more to a discovery than being first: it’s not clear that they knew quite what they had made, and in any case there were complex questions to be addressed about the degree of stereoselectivity created by the different catalysts.
Basic science is here more the beneficiary than the begetter, for the work of Ziegler and Natta pointed the way to approaches to stereoselective formation of carbon-carbon bonds that remain a rich field of science today. Its value has occasionally surfaced in unexpected ways – it was an inadvertent excess of Ziegler-Natta catalyst in Tokyo in 1967, for example, that set Hideki Shirakawa on the road to the first electrically conducting polymer, a form of polyacetylene. The scale of the polyolefin industry, meanwhile, scarcely needs emphasizing: close to 50 million tons of polypropylene alone is produced each year.
One moral of these stories is that true discovery requires that you know what you’ve done, and show it. But they also reveal how the conventional narrative of technological advance, whereby ‘pure’ fundamental science leads to applications, is seldom of much relevance in fields such as materials chemistry. Social and cultural drivers often determine what gets explored – if not necessarily what comes out. And success may be determined by the fickle whims of the market rather than the merit of the product. One might add the lesson that, if you want recognition, publish quickly and get a good lawyer – not perhaps the most edifying moral, but that’s the way of the world.