Friday, September 21, 2012

How to copy faster

Yes, there’s more tonight. It's Friday. Here’s my latest news story for Nature. This one was tough, but hopefully worth it. Possibly I ended up with a better explanation on the Nature site than here of why reversibility is linked to the minimum heat output. But it’s a tricky matter, so no harm in having two bites of the cherry.
____________________________________________
Bacteria replicate close to the physical limit of efficiency, says a new study – but might we make them better still?

Bacteria such as E. coli typically take about 20 minutes to replicate. Can they do it any faster? A little, but not much, says biological physicist Jeremy England of the Massachusetts Institute of Technology. In a preprint [1], he estimates that bacteria are impressively close to – within a factor of 2-3 of – the limiting efficiency of replication set by the laws of physics.

“It is heartening to learn this”, says Gerald Joyce, a chemist at the Scripps Research Institute in La Jolla, California, whose work includes the development of synthetic replicating molecules based on RNA. “I suppose I should take some comfort that our primitive RNA-based self-replicator apparently operates even closer to the thermodynamic lower bound”, he adds.

At the root of England’s work is a question that has puzzled many scientists: how do living systems seem to defy the Second Law of Thermodynamics by sustaining order instead of falling apart into entropic chaos? In his 1944 book What is Life?, physicist Erwin Schrödinger asserted that life feeds on ‘negative entropy’ – which was really not much more than restating the problem.

Life doesn’t really defy the Second Law because it produces entropy to compensate for its own orderliness – that is why we are warmer than our usual surroundings. England set out to make this picture rigorous by estimating the amount of heat that must unavoidably be produced when a living organism replicates – one of the key defining characteristics of life. In other words, how efficient can replication be while still respecting the Second Law?

To attack this problem, England uses the concepts of statistical mechanics, the microscopic basis of classical thermodynamics. Statistical mechanics relates different arrangements of a set of basic constituents, such as atoms or molecules, to the probabilities of their occurring. The Second Law – the inexorable increase of entropy or, loosely speaking, disorder – is generally considered to follow from the fact that there are many more disorderly arrangements of such constituents than orderly ones, so that these are far more likely to be the outcome of the particles’ movements and interactions.

The question is: what is the cheapest way, in terms of the energy involved (technically the free energy, which takes into account both the energy needed to make and break chemical bonds and the associated entropy changes), of going from one bacterium to two? That turns out to be a matter of how easily one can reverse the process.

For the analogous question of the minimal cost of doing a computation – combining two bits of information in a logic operation – the answer depends on how much energy it costs to reset a bit and ‘undo’ the computation. This quantity places a fundamental limit on how low the power consumption of a computer can be.
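
For concreteness: the textbook version of that limit is Landauer’s bound, which says that erasing one bit of information must release at least kT·ln2 of heat, where k is Boltzmann’s constant and T the temperature. The story alludes to this but doesn’t spell out the numbers, so the little Python sketch below is purely my own back-of-the-envelope illustration of the standard result, not anything taken from England’s paper.

import math

k_B = 1.380649e-23   # Boltzmann constant, joules per kelvin

def landauer_limit(T):
    """Minimum heat (joules) released when one bit is erased at temperature T (kelvin)."""
    return k_B * T * math.log(2)

for T in (300.0, 310.0):   # roughly room temperature and body temperature
    print("T = %.0f K: %.2e J per bit erased" % (T, landauer_limit(T)))
# At 300 K this comes to about 2.9e-21 J, vastly less than what real chips dissipate per operation.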

“The probability that the reverse transition from two cells to one could happen is the quantity that tells us how irreversible the replication process is”, says England. “Whatever this quantity is, it need not be dominated by the trajectories that would just look like the movie playing backwards: there are many ways of starting with two cells and ending up with one. I’m asking what class of paths should dominate that process.”

The problem is precisely those “many ways”. “You can drive yourself nuts trying to think of everything”, England says. But he considered the most general reversal route: that, by chance, the atoms in the replicated bacterium happen to move in such a way that all its molecules disintegrate. That is, of course, immensely unlikely. But by figuring out exactly how unlikely, England can place a rough limit on how reversible replication is, and thus on its minimum energy cost.

By plugging some numbers into the equations describing the likelihood of a replication being reversed – how long on average the chemical bonds holding proteins together will last, say, and how many such bonds there are in a bacterium – England estimates that the minimal amount of heat a bacterium must generate to replicate is a little more than a third of the amount a real E. coli cell generates. That’s impressive: if the cells were only twice as efficient, they’d be approaching the maximum efficiency physically possible.

“The weakest point in my argument is the assumption that we know what the ‘most likely very unlikely path’ for spontaneous disintegration of a bacterium is”, England admits. “We’re talking about things that simply never happen, so we can’t have much intuition about them.” As a result, he says that his treatment “certainly shouldn't be thought of as a proof as much as a plausibility argument.”

It’s precisely this that troubles Joyce, who compares the calculation with the joke about a physicist trying to solve a problem in dairy farming. “As an experimentalist, it is hard for me to relate to this ‘spherical cow’ treatment of a self-replicating system”, Joyce says. “Here E. coli seems to be nothing more than the equivalent of its dry weight in proteins.”

England says that we can hardly expect bacteria to do much better than they do given that they have to cope with many different environments and so can’t be optimized for any particular one. But if we want to engineer a bacterium for a highly specialized task using synthetic biology, he says, then there is room for improvement: such a modified E. coli could be at least twice as efficient at replicating, which means that a colony could grow twice as fast. That could be useful in biotechnology. “We may be able to build self-replicators that grow much more rapidly than the ones we're currently aware of,” he says.

He also concludes that there’s a trade-off between speed of replication and robustness: a replicator that is prone to falling apart produces less heat, and so can replicate faster, than one that is more robust. The findings might therefore have implications for understanding the origin of life. Many researchers, including Joyce, suspect that DNA-based replicators were preceded on the early Earth by those based on RNA, which both encoded genetic information and acted as an enzyme-like catalyst for proto-biological reactions. This fits with England’s hypothesis, because RNA is less chemically stable than DNA, and so would be more fleet and nimble in getting replication started. “Something other than RNA might work even better on a shorter timescale at an earlier stage,” England adds.

References
1. England, J. L. Preprint at http://www.arxiv.org/abs/1209.1179 (2012).

What your snoring says about you

OK, BBC Future again. Who said Zzzzz? [Incidentally, who decreed in kids’ books that sleeping would be denoted by “Zzzzz”? It sounds nothing like sleeping, and drives me crazy. Even Julia Donaldson does it. Listen, this is a big deal with a 3- and a 7-year-old.]
______________________________________________________
Snoring is no joke for partners, but it’s not much fun for the snorer either. Severe snoring is the sound of a sleeper fighting for breath, as relaxed muscles in the pharynx (the top of the throat) allow the airway to become blocked. Lots of people snore, but the loud and irregular snoring caused by a condition known as obstructive sleep apnea (OSA) can leave a sufferer tired and fuddled during the day, even though he or she is rarely fully awoken by the night-time disruption. It’s difficult to treat – there are currently no effective drugs, and the usual intervention involves a machine that inflates the airway, or in extreme cases surgery. But the first step is to distinguish genuine OSA, which afflicts between 4 and 10 percent of the population, from ordinary snoring.

That kind of diagnosis is costly and laborious too. Often a snorer will need to sleep under observation in a laboratory. But some researchers believe that there is a signature of OSA encoded in the sounds of the snores themselves – which can be easily recorded at home for later analysis. A team in Brazil that brings together medics and physicists has now found a way of analysing snore recordings that can not only spot OSA but also distinguish between mild and severe cases.

Diagnosing OSA from snore sounds is not a new idea. The question is how, if at all, the clinical condition is revealed by the noises. Does OSA affect the total number of snores, or their loudness, or their acoustic quality, or their regularity – or several or all of these things? In 2008 a team in Turkey showed that the statistical regularity of snores has the potential to discriminate ordinary sleepers from OSA sufferers. And last year a group in Australia found that a rather complex analysis of the sound characteristics, such as the pitch, of snores might be capable of providing such a diagnosis, at least in cases where the sound is recorded under controlled and otherwise quiet conditions.

Physicist Adriano Alencar of the University of São Paulo and his colleagues have now added to this battery of acoustic methods for identifying OSA. They recorded the snoring of patients referred to the university’s Sleep Laboratory because of suspected OSA, and studied the measurements for a fingerprint of OSA in the regularity of snores.

A person who snores but does not suffer from OSA typically does so in synchrony with breathing, with successive snores less than about ten seconds apart. In these cases the obstruction of the airway that triggers snoring comes and goes, so that snoring might stop for perhaps a couple of minutes or more before resuming. So for ‘healthy’ snoring, the spacing between snores tends to be either less than ten seconds or, from time to time, more than about 100 seconds.

OSA patients, meanwhile, have snore intervals that fall within this intermediate window of 10 to 100 seconds. The snores follow one another in a train, but with a spacing dictated by the more serious restriction of airflow rather than the steady in-and-out of breathing. The researchers measured what they call a snore time interval index, which is a measure of how often the time between snores falls between 10 and 100 seconds. They compared this with a standard clinical measure of OSA severity called the apnea-hypopnea index (AHI), which is obtained from complicated monitoring of a sleeping patient’s airflow in a laboratory. (Hypopnea is the milder form of OSA in which the airway becomes only partially blocked.)
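
In code, such an index might look something like the rough Python sketch below. The function name and the per-hour normalisation are my own guesses for the sake of illustration – the paper’s exact definition may differ.

import numpy as np

def snore_interval_index(snore_times, lo=10.0, hi=100.0):
    """Count, per hour of recording, the gaps between successive snores that
    fall in the 10-100 second window associated here with OSA.
    (Illustrative only; the paper's own definition and normalisation may differ.)"""
    t = np.asarray(snore_times, dtype=float)   # snore onset times, in seconds
    gaps = np.diff(t)
    in_window = int(np.sum((gaps >= lo) & (gaps <= hi)))
    hours = (t[-1] - t[0]) / 3600.0
    return in_window / hours if hours > 0 else 0.0

# 'Healthy' snoring: snores a few seconds apart, with the odd long silence
healthy = np.concatenate([np.arange(0, 300, 5), np.arange(500, 800, 5)])
# OSA-like snoring: gap after gap falling between 10 and 100 seconds
osa_like = np.cumsum(np.full(60, 40.0))
print(snore_interval_index(healthy))    # roughly 0 per hour
print(snore_interval_index(osa_like))   # roughly 90 per hour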

Alencar and colleagues find that the higher the value of their snore interval index, the higher the patient’s corresponding AHI is. In other words, the snore index can be used as a pretty reliable proxy for the AHI: you can just record the snores rather than going through the rigmarole of the whole lab procedure.

That’s not all. The researchers could also use a snore recording to figure out how snores are related to each other – whether there is a kind of ‘snore memory’, so that, say, a particular snore is linked to a recent burst of snoring. This memory is measured by a so-called Hurst exponent, which reveals hidden patterns in a series of events that, at first glance, look random and disconnected. An automated computer analysis of the snore series could ‘learn’, based on training with known test cases, to use the Hurst exponent to distinguish moderate from severe cases of OSA, making the correct diagnosis for 16 of 17 patients.
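
The preprint will give the authors’ own recipe; a common, generic way to estimate a Hurst exponent is rescaled-range (R/S) analysis, and the Python sketch below is offered only as an illustration of that idea, not as a reconstruction of their method.

import numpy as np

def hurst_exponent(series, min_window=8):
    """Estimate the Hurst exponent by rescaled-range (R/S) analysis: the slope
    of log(R/S) against log(window size). H near 0.5 suggests no memory;
    H above 0.5 suggests persistent, correlated behaviour."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    window_sizes, rs_values = [], []
    size = min_window
    while size <= n // 2:
        rs = []
        for start in range(0, n - size + 1, size):
            chunk = x[start:start + size]
            dev = np.cumsum(chunk - chunk.mean())   # cumulative deviations from the mean
            s = chunk.std()
            if s > 0:
                rs.append((dev.max() - dev.min()) / s)
        if rs:
            window_sizes.append(size)
            rs_values.append(np.mean(rs))
        size *= 2
    slope, _ = np.polyfit(np.log(window_sizes), np.log(rs_values), 1)
    return slope

rng = np.random.default_rng(0)
print(hurst_exponent(rng.standard_normal(4096)))   # white noise: close to 0.5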

The work of Alencar and colleagues hasn’t yet been peer-reviewed. But in the light of the earlier studies of OSA signatures in snore sounds, it adds to the promise of an easy and cheap way of spotting snorers who have a clinical condition that needs treatment. What’s more, it supports a growing belief that the human body generates several subtle but readily measured indicators of health and disease, revealed by statistical regularities in apparently random signals. For example, sensitive measurement of the crackling sounds generated when our airways open as we breathe in can tell us about the condition of our lungs, perhaps revealing respiratory problems, while ‘buried’ statistical regularities in heartbeat intervals or muscle movements encode information about cardiac health or sleep states. Our bodies tell us a lot about ourselves, if only we know how to listen.

Reference: Alencar, A. M. et al. Preprint at http://www.arxiv.org/abs/1208.2242 (2012).

The silk laser

Here is my latest piece for BBC Future (which you can’t see in the UK). I have at least one other piece from this column yet to put up here – that will follow.
______________________________________________________________
Electronic waste from obsolete phones, cameras, computers and other mobile devices is one of the scourges of the age of information. The circuitry and packaging is not only non-biodegradable but is laced with toxic substances such as heavy metals. Imagine, then, a computer that can be disposed of by simply letting soil bacteria eat it – or even, should the fancy take you, by eating it yourself. Biodegradable information technology is now closer to appearing on the menu following the announcement by Fiorenzo Omenetto of Tufts University in Medford, Massachusetts, and coworkers of a laser made from silk.

In collaboration with David Kaplan, a specialist in the biochemistry of silk at Tufts, Omenetto has been exploring the uses of silk for several years. He is convinced that it can offer us much more than glamorous clothing. It is immensely strong – more so than steel – and can be used to make tough fibres and ropes. In the Far East silk was once used to pad armour, and in pre-revolutionary Russia a form of primitive bullet-proof clothing was made from it. It can be moulded like plastic, yet is biodegradable: silk cups can be thrown away to quickly break down in the environment. It is also biocompatible, and so could be used to make medical implants such as screws to hold together mending bones, or artificial blood vessels. You can even eat it safely, although it doesn’t taste good.

What’s more, all of this comes from sustainable and environmentally friendly processing. Spiders and silkworms make silk in water at ordinary body temperature, spinning the threads from a solution of the silk protein. Harvesting natural silk is one option, but the genes that encode the protein can be transferred to other species, so that it can be produced by bacteria in fermentation vats, or even expressed in the milk of transgenic goats. Turning this raw silk protein into strong fibres is not easy – it’s hard to reproduce the delicate thread-spinning apparatus of spiders – but if you just want to cast films of silk as if it were a plastic then this isn’t an issue.

Perhaps some of the most remarkable potential uses for this ancient material are in high-tech optical technology, like that which forms the basis of optical storage and telecommunications. Using moulds patterned on the microscopic scale, silk can be shaped into structures that reflect and diffract light, like those on DVDs – it will support holograms, for instance. Its transparency commends it for optical fibres, and Omenetto and colleagues have previously shaped silk films into so-called waveguides, rather like very thin optical fibres laid down directly on a solid surface such as a silicon chip. But rather than just using silk to passively guide and direct light, they wanted to generate light from it too. This is what the silk laser enables.

In a laser, a light-emitting substance – the lasing medium – is sandwiched between mirrors which allow the light to bounce back and forth. The medium is placed in a light-emitting state by pumping in energy, typically using either another light source or an electrical current. When light is emitted, its trapping by the mirrors means that it triggers still more emission as it bounces to and fro, so that all the light is released in an avalanche. This puts all the light waves in step with one another, which is what gives laser light its intensity and narrowly focused beam. The beam eventually escapes through one of the mirrors, which is designed to be only partially reflective.

Because of their brightness, focus and rapid on-off switching, lasers are used in telecommunications to transmit information as a stream of light pulses that encode the binary digital information of computers and microprocessors: a pulse corresponds to “1”, say, and a gap in the pulse stream to a “0”. In this way, information can be fed over long distances down optical fibres. Increasingly, computer and electrical engineers are now aiming to move and process information directly on microchips in the form of light. Then there’s no cumbersome light-to-electrical conversion of data at each end of the transmission, and light-based information processing could potentially be faster and carry more signal, since different data streams can be conveyed simultaneously in light of different colours. These so-called photonic chips could transform information technology, and Omenetto believes that with silk it should be possible to create ‘biophotonic’ circuits. That demands not just channelling light but generating it – in a laser.

Silk doesn’t absorb or emit light at the visible and infrared frequencies used in conventional telecommunications and optical information technology. So to make it into a laser medium, one needs to add substances that do. Organic dyes (carbon-based molecules) are already widely used, dispersed in a liquid solvent or in some solid matrix, to make dye lasers. The researchers figured they could mix such a dye into silk. They used one called stilbene, which is water-soluble and closely related to chemical compounds found in plants and used as textile brighteners.

Working with Stefano Toffanin and colleagues at the Institute for the Study of Nanostructured Materials in Bologna, Italy, Omenetto and his coworkers patterned a thin layer of silica (silicon dioxide) on the surface of a slice of silicon into a series of grooves about a quarter of a micrometre wide, which act as a mirror for the bluish-purple light that stilbene emits. They then covered this with a layer of silk spiced with the dye, and found that when they pumped this structure with ultraviolet light, it emitted light with the characteristic signature of laser light: an intense beam with a very narrow spread in frequency.

Making the device on silicon means that it could potentially be controlled electronically and merged with conventional chip circuitry. But that’s not essential – silk-based light circuits and devices could be laid down on other materials, perhaps on cheap, flexible and degradable plastics.

This isn’t the first time that biological materials have been used to make lasers. For example, in 2002 a team in Japan made one using films of DNA infused with organic dyes. And last year, two researchers in the US and Korea made lasers from living cells that were engineered to produce a fluorescent protein found naturally in a species of jellyfish. But the attraction of silk is that it is cheap, easy to process, biodegradable and already used to make a range of other light-based devices.

There might be even more dramatic possibilities. Recently, Omenetto and colleagues showed that silk is a good support for growing neurons, the cells that communicate nerve signals in the nervous system and the brain. This leads them to speculate that silk might mediate between optical and electronic technology and our nervous system, for example by bringing light sources intimately close to nerve cells for imaging them, or perhaps even developing circuitry that can transmit signals across damaged nerves.

Reference: Toffanin, S. et al. Applied Physics Letters 101, 091110 (2012).

Tuesday, September 11, 2012

As easy as ABC?

Here’s my latest news story for Nature. There are a lot of superscripts in here, for which I’ll use the x**n notation. Every time I encounter mathematicians, I’m reminded what a very different world they live in.
_____________________________________________________________________________

If it’s true, a Japanese mathematician’s solution to a conjecture about whole numbers would be an ‘astounding achievement’

The recondite world of mathematics is abuzz with a claim that one of the most important problems in number theory has been solved.

Japanese mathematician Shinichi Mochizuki of Kyoto University has released a 500-page proof of the ABC conjecture, which describes a purported relationship between whole numbers – a so-called Diophantine problem.

The ABC conjecture might not be as familiar to the wider world as Fermat’s Last Theorem, but in some ways it is more significant. “The ABC conjecture, if proved true, at one stroke solves many famous Diophantine problems, including Fermat's Last Theorem”, says Dorian Goldfeld, a mathematician at Columbia University in New York.

“If Mochizuki’s proof is correct, it will be one of the most astounding achievements of mathematics of the 21st century”, he adds.

Like Fermat’s theorem, the ABC conjecture is a postulate about equations of the deceptively simple form A+B=C that relate three whole numbers A, B and C. It involves the concept of a square-free number: one that can’t be divided by the square of any whole number greater than 1. 15 and 17 are square-free numbers, but 16 and 18 – divisible by 4**2 and 3**2 respectively – are not.

The “square-free” part of a number n, denoted sqp(n), is the largest square-free number that can be formed by multiplying prime factors of n. For instance, sqp(18) = 2×3 = 6.
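
In the x**n notation mentioned above (which Python happens to share), sqp(n) is easy to compute by multiplying together the distinct prime factors of n. The little helper below is just my own illustration, reused in the next sketch.

def sqp(n):
    """Square-free part of n: the product of its distinct prime factors."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            result *= p
            while n % p == 0:   # strip out repeated factors of p
                n //= p
        p += 1
    if n > 1:                   # whatever remains is itself a prime factor
        result *= n
    return result

print(sqp(18))   # 2 * 3 = 6
print(sqp(16))   # 2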

If you’ve got that, you should get the ABC conjecture. Proposed independently by David Masser and Joseph Oesterle in 1985, it concerns a property of the product of the three integers A×B×C, or ABC – or more specifically, of the square-free part of this product, which involves their distinct prime factors.

The conjecture states that, for any value of r greater than 1, the ratio sqp(ABC)**r/C always has some minimum value greater than zero. For example, if A=3 and B=125, so that C=128, then sqp(ABC)=30 and sqp(ABC)**2/C = 900/128. With r=2, it turns out that sqp(ABC)**r/C is nearly always greater than 1, and the conjecture says that it can never get arbitrarily close to zero.
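
Using the sqp helper above, you can check those numbers for yourself; abc_ratio is just my own shorthand for the quantity in the conjecture.

def abc_ratio(a, b, r=2):
    """sqp(A*B*C)**r / C for a triple with A + B = C."""
    c = a + b
    return sqp(a * b * c) ** r / c

print(sqp(3 * 125 * 128))       # 30
print(abc_ratio(3, 125, r=2))   # 900/128 = 7.03125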

It turns out that this conjecture encapsulates many other Diophantine problems, including Fermat’s Last Theorem (which states that A**n + B**n = C**n has no solutions in positive whole numbers if n>2). Like many Diophantine problems, it is at root all about the relationships between prime numbers – according to Brian Conrad of Stanford University, “it encodes a deep connection between the prime factors of A, B and A+B”.

“The ABC conjecture is the most important unsolved problem in Diophantine analysis”, says Goldfeld. “To mathematicians it is also a thing of beauty. Seeing so many Diophantine problems unexpectedly encapsulated into a single equation drives home the feeling that all the subdisciplines of mathematics are aspects of a single underlying unity.”

Unsurprisingly, then, many mathematicians have expended a great deal of effort trying to prove the conjecture. In 2007 the French mathematician Lucien Szpiro, whose work in 1978 led to the ABC conjecture in the first place, claimed to have a proof of it, but it was soon found to be flawed.

Like Szpiro, and also like Andrew Wiles who proved Fermat’s Last Theorem in 1994, Mochizuki has attacked the problem using the theory of elliptic curves, which are the smooth curves generated by algebraic relationships of the sort y**2 = x**3 + ax + b.

There, however, the relationship of Mochizuki’s work to previous efforts stops. In the present and earlier papers he has developed entirely new techniques that very few other mathematicians yet fully understand. “His work is extremely novel”, says Conrad. “It uses a huge number of new insights that are going to take a long time to be digested by the community.”

This novelty invokes entirely new mathematical ‘objects’ – abstract entities analogous to more familiar examples such as geometric objects, sets, permutations, topologies and matrices. “At this point he is probably the only one that knows it all”, says Goldfeld.

As a result, Goldfeld says, “if the proof is correct it will take a long time to check all the details.” The proof is spread over four long papers, each of which rests on earlier long papers. “It can require a huge investment of time to understand a long and sophisticated proof, so the willingness by others to do this rests not only on the importance of the announcement but also on the track record of the authors”, Conrad explains.

Mochizuki’s track record certainly makes the effort worthwhile. “He has proved extremely deep theorems in the past, and is very thorough in his writing, so that provides a lot of confidence”, says Conrad. And he adds that the payoff would be more than a matter of simply verifying the claim. “The exciting aspect is not just that the conjecture may have now been solved, but more importantly that the new techniques and insights he must have had to introduce should be very powerful tools for solving future problems in number theory.”

Mochizuki’s papers:
Paper 1
Paper 2
Paper 3
Paper 4

Friday, September 07, 2012

Cold fusion redux

This obituary of Martin Fleischmann appears in a similar form in the latest issue of Nature. No one, of course, wants to seem carping or churlish in an obit, and I hope this achieves some kind of balance. But I have to admit that, looking back now, I couldn’t help but be reminded of how badly Pons and Fleischmann behaved while cold fusion was at its height. In particular, the way they threatened Utah physicist Michael Salamon, who tried to replicate their experiments with their own equipment, was unforgivable. And it was pretty distasteful to see Fleischmann more recently bad-mouthing all his critics, searching for ways to belittle or dismiss Mark Wrighton, Frank Close, Nate Lewis, Gary Taubes and others. As I try to say here, it’s not getting things wrong that should count against you, but how you handle it.

____________________________________________________

OBITUARY
Martin Fleischmann, 1927-2012

Pioneering electrochemist who claimed to have discovered cold fusion

“Whatever one’s opinion about cold fusion, it should not be allowed to dominate our view of a remarkable and outstanding scientist.” This plea appears in the University of Southampton’s obituary of Martin Fleischmann, who carried out much of the work there that made him renowned as an electrochemist. It is not clear that it will be heeded.

Fleischmann died on 3rd August at the age of 85 after illness from Parkinson’s disease, heart disease and diabetes. He made substantial contributions to his discipline, being the first person to observe surface-enhanced Raman scattering (now the basis of a widely used technique) and developing the use of ultramicroelectrodes as sensitive electrochemical probes. But he is best known now for his claim in 1989 to have initiated nuclear fusion on a bench top using only the kind of equipment a school lab might possess.

The ‘cold fusion’ debacle provoked bitter disputes, court cases and controversies that reverberate today. Along with polywater and homeopathy, cold fusion is now regarded as one of the most notorious cases of what chemist Irving Langmuir called ‘pathological science’ – as he put it, “the science of things that aren’t so”.

It would be wrong to draw a veil over cold fusion as an aberration in Fleischmann’s otherwise distinguished career. For it was instead an extreme example of the style that characterized his research: a willingness to suggest bold and provocative ideas, to take risks and to make imaginative leaps that could sometimes yield a rich harvest.

Fleischmann was born in Karlovy Vary (Karlsbad) in Czechoslovakia in 1927. His father was of Jewish heritage and opposed Hitler’s regime; his family fled just before the German invasion to the Netherlands and then England. Fleischmann studied chemistry at Imperial College in London and, after a PhD in electrochemistry, he moved to the University of Newcastle. In 1967 he was appointed to the Faraday Chair of Chemistry at Southampton, where he explored reactions at electrode surfaces.

In 1974, Fleischmann and his coworkers observed unusually intense Raman scattering (light shifted in energy by its interaction with molecular vibrational states) from organic molecules adsorbed on the surface of silver electrodes. They did not immediately recognize that the enhancement was caused by the surface, and indeed the mechanism is still not fully understood – but surface-enhanced Raman spectroscopy (SERS) has become a valuable tool for investigating surface chemistry.

Around 1980 Fleischmann and Mark Wightman independently pioneered the use of ultramicroelectrodes just a few micrometres across to study otherwise-inaccessible electrode processes – for example, at low electrolyte concentrations or with very fast rates of reaction. Such innovations gave Fleischmann international repute. In 1985, two years after his early retirement from Southampton, he was elected a Fellow of the Royal Society.

Fleischmann’s longstanding interest in hydrogen surface chemistry on palladium led to the cold fusion experiments. When hydrogen molecules adsorbed onto palladium dissociate into atoms, these atoms can diffuse into the metal lattice, making the metal a ‘sponge’ able to soak up large amounts of hydrogen. Very high pressures of hydrogen can build up – perhaps, Fleischmann wondered, sufficient to trigger nuclear fusion.

Fleischmann’s retirement in 1983 freed him to conduct self-funded experiments at the University of Utah – a location congenial to Fleischmann, a passionate skier – with his former student Stanley Pons. They electrolysed heavy water containing dissolved lithium deuteroxide, collecting deuterium at the palladium cathode, and claimed to measure more heat output than could be accounted for by the energy fed in – a signature, they said, of deuterium fusion within the electrode. On returning in the morning to one experiment left running overnight, they found that the apparatus had been vaporized and the fume cupboard and part of the floor destroyed. Was this a particularly violent outburst of fusion?

Not until 1989 did Fleischmann, Pons and their student Marvin Hawkins move to publish their data. They discovered they were in competition with a team at Brigham Young University in Utah, led by physicist Steven Jones, which was conducting similar studies. Initially Fleischmann and Pons accused Jones of plagiarizing their ideas, but eventually the groups agreed to coordinate their announcements with a joint submission to Nature on 24th March. Yet Fleischmann and Pons first rushed a (highly uninformative) paper into print with the Journal of Electroanalytical Chemistry, organized a press conference on 23rd March, and faxed their paper to Nature that same day without telling Jones.

The rest, as they say, is history, told for example in Frank Close’s Too Hot To Handle (1991) and Gary Taubes’ Bad Science (1993). The fusion claims shocked the world: physicists had been trying for decades, at great expense but with no success, to harness nuclear fusion for energy generation. Now it appeared that chemists had achieved it at a minuscule fraction of the expense, potentially solving the energy crisis. Jones’ paper was eventually published by Nature [Nature 338, 737; 1989]; that of Pons, Fleischmann and Hawkins was withdrawn when the authors professed to be too busy, in the wake of their astounding announcement, to address the reviewers’ comments. When Pons spoke at the spring meeting of the American Chemical Society on 12th April, the atmosphere was jubilant: it was hailed as a triumph of chemistry over physics. Physicists were more sceptical, and pointed out serious problems with Fleischmann and Pons’ claims to have detected the emission of neutrons diagnostic of deuterium fusion.

Accusations that they had manipulated the neutron data were never substantiated, but what really put paid to cold fusion was the persistent failure of other groups to reliably reproduce the purported excess heat generation and other signatures of potential fusion. More accusations, recriminations and general bad behaviour followed: coercion, intimidation, litigation (Pons’ lawyer threatened Utah physicist Michael Salamon with legal action after he published his negative attempts at replication in Nature), withholding of data (Fleischmann refused outright at one meeting of physicists to discuss crucial control experiments), and suspicions of experimental tampering (were some groups spiking their equipment with tritium, a fingerprint of fusion?). The University of Utah sought aggressively to capitalize, throwing $5m at a ‘National Cold Fusion Institute’ that closed only two years after it opened.

Once cold fusion lost its credibility, Fleischmann and Pons moved to France to continue their work with private funding, but later fell out. Now only a few lone ‘believers’ pursue the work. Fleischmann did not distinguish himself in the aftermath, belittling critical peers in interviews and hinting at paranoid conspiracy theories. But perhaps the biggest casualty of cold fusion was electrochemistry itself, suddenly made to seem a morass of charlatanism and poor technique. That was unfair: some of the most authoritative (negative) attempts to replicate the results were conducted by electrochemists.

Fleischmann’s tragedy was almost Shakespearean, not least because he was himself in many ways a sympathetic character: resourceful, energetic, immensely inventive and remembered warmly by collaborators. As Linus Pauling and Fred Hoyle also exemplified, once you’ve been proved right against the odds, it becomes harder to accept the possibility of error. “Many a time in my life I have been accused of coming up with crazy ideas,” he once said. “Fortunately, I'm able to say that, so far, the critics have had to back off.” But although a final reckoning should not let genuine achievements be overshadowed by errors, the blot that cold fusion left on Fleischmann’s reputation is hard to expunge. To make a mistake or a premature claim, even to fall prey to self-deception, is a risk any scientist runs. The true test is how one deals with it.