Here’s my last story for BBC Future.
_______________________________________________________________
Who will find dark matter first? We’re looking everywhere for this elusive stuff: deep underground, out in space, in the tunnels of particle colliders. After the Higgs boson, this is the next Big Hunt for modern physics, and arguably there’s even more at stake, since we think there is more than five times as much dark matter as all the stuff we can actually see.
And you can join the hunt. It’s probably not worth turning out your cupboards to see if there’s any dark matter lurking at the back, but there is a different way that all comers – at least, those with mathematical skills – can contribute. A team of astronomers has reported that crowdsourcing has improved the computational methods they will use to map out the dark matter dispersed through distant galaxies – which is where it was discovered in the first place.
The hypothesis of dark matter is needed to explain why galaxies hold together. Without its gravitational effects, rotating galaxies would fly apart, something that has been known since the 1930s. Yet although this stuff is inferred from its gravity, there’s nothing visible to astronomers – it doesn’t seem to absorb or emit light of any sort. That seems to make it a kind of matter different from any of the fundamental particles currently known. There are several theories for what dark matter might be, but they all have to start from negative clues: what we don’t know or what it doesn’t do.
The current favourite invokes a new fundamental particle called a WIMP: a weakly interacting massive particle. “Weakly interacting” means that it barely feels ordinary matter at all, but can just pass straight through it. However, the idea is that those feeble interactions are just enough to make a WIMP occasionally collide with a particle of ordinary matter and generate an observable effect in the form of a little burst of light that has no other discernible cause. Such flashes would be a telltale signature of dark matter.
To see them, it’s necessary to mask out all other possible causes – in particular, to exclude collisions involving cosmic rays, which are ordinary particles such as electrons and protons streaming through space after being generated in violent astrophysical processes such as supernovae. Cosmic rays are eventually soaked up by rock as they penetrate the earth, and so several dark-matter detectors are situated far underground, at the bottom of deep mineshafts. They comprise sensitive light detectors that surround a reservoir of fluid and look for inordinately rare dark-matter flashes.
One such experiment, called LUX and located in a mine in South Dakota, has recently reported the results of its first several months of operation. LUX looks for collisions of WIMPs within a tank of liquid xenon. So far, it hasn’t seen any. That wouldn’t be such a big deal were it not for the fact that some earlier experiments have reported a few unexplained events that could possibly have been caused by WIMPs. LUX is one of the most sensitive dark-matter experiments now running, and if those earlier signals were genuinely caused by dark matter, LUX would have been expected to see such things too. So the new results suggest that the earlier, enticing findings were a false alarm.
Another experiment, called the Alpha Magnetic Spectrometer (AMS) and carried on board the International Space Station, looks for signals from the mutual annihilation of colliding WIMPs. And there are hopes that the Large Hadron Collider at CERN in Geneva might, once it resumes operation in 2014, be able to conduct particle smashes at the energies where some theories suggest that WIMPs might actually be produced from scratch, and so put these theories to the test.
In the meantime, the more information we can collect about dark matter in the cosmos, the better placed we are to figure out where and how to look for it. That’s the motivation for making more detailed astronomical observations of galaxies where dark matter is thought to reside. The largest concentrations of the stuff are thought to be in gravitationally attracting groups of galaxies called galaxy clusters, where dark matter can apparently outweigh ordinary matter by up to a hundredfold. By mapping out where the dark matter sits in these clusters relative to their visible matter, it should be possible to deduce some of the basic properties of its mysterious particles, such as whether they are ‘cold’ and easily slowed down by gravity, or ‘hot’ and thus less easily retarded.
One way of doing this mapping is to look for dark matter via its so-called gravitational lensing effect. As Einstein’s theory of general relativity predicted, gravitational fields can bend light. This means that dark matter (and ordinary matter too) can act like a lens: the light coming from distant objects can be distorted when it passes by a dense clump of matter. David Harvey of the University of Edinburgh, Thomas Kitching of University College London, and their coworkers are using this lensing effect to find out how dark matter is distributed in galaxy clusters.
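For a sense of the physics involved, the textbook general-relativistic result – a standard formula, not anything specific to this survey – is that a light ray passing a mass M at a closest approach b is bent through an angle

\alpha = \frac{4GM}{c^{2}b},

twice what Newtonian gravity would predict. It is the subtle distortions that this bending imprints on the shapes of background galaxies that let the intervening mass, dark or otherwise, be reconstructed.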
To do that, they need an efficient computational method that can convert observations of gravitational lensing by a cluster into its inferred dark-matter distribution. Such methods exist, but the researchers suspected they could do better. Or rather, someone else could.
Crowd-sourcing as a way of gathering and analysing large bodies of data is already well established in astronomy, most notably in the Zooniverse scheme, in which participants volunteer their services to classify data into different categories: to sort galaxies or lunar craters into their fundamental shape classes, for example. Humans are still often better at making these judgements than automated methods, and Zooniverse provides a platform for distributing and collating their efforts.
What Harvey and colleagues needed was rather more sophisticated than sorting data into boxes. To create an algorithm for actually analysing such data, you need to have some expertise. So they turned to Kaggle, a web platform that (for a time-based fee) connects people with a large data set to data analysts who might be able to crunch it for them. Last year Kitching and his international collaborators used Kaggle to generate the basic gravitational-lensing data for dark-matter mapping. Now he and his colleagues have shown that even the analysis of the data can be effectively ‘outsourced’ this way.
The researchers presented the challenge in the form of a competition called “Observing Dark Worlds”, in which the authors of the three best algorithms would receive cash prizes totalling $20,000 donated by the financial company Winton Capital Management. They found that the three winning entries could improve significantly on the performance of a standard, public algorithm for this problem, pinpointing the dark matter clumps with an accuracy around 30% better. Winton Capital benefitted too: Kitching says that “they managed to find some new recruits from the winners, at a fraction of the ordinary recruiting costs.”
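To make that comparison concrete, here is a minimal Python sketch of the kind of scoring involved – the halo positions and the scoring rule below are illustrative assumptions, not the competition’s actual metric or data:

```python
import numpy as np

def mean_halo_error(predicted, true):
    """Mean Euclidean distance between predicted and true dark-matter halo positions.

    predicted, true: arrays of shape (n_halos, 2) holding sky coordinates.
    Lower scores mean the halos are pinpointed more accurately.
    """
    predicted = np.asarray(predicted, dtype=float)
    true = np.asarray(true, dtype=float)
    return np.linalg.norm(predicted - true, axis=1).mean()

# Hypothetical numbers: a benchmark algorithm versus a competition entry,
# scored against halo positions known from a simulated sky.
true_halos = np.array([[1200.0, 3400.0], [2500.0, 800.0]])
benchmark  = np.array([[1500.0, 3100.0], [2900.0, 1200.0]])
entry      = np.array([[1410.0, 3190.0], [2780.0, 1080.0]])

improvement = 1 - mean_halo_error(entry, true_halos) / mean_halo_error(benchmark, true_halos)
print(f"Improvement over benchmark: {improvement:.0%}")   # prints 30%
```

The real contest used a more elaborate scoring rule, but the principle is the same: each entry’s predicted halo positions are compared against the true positions in simulated skies where the answer is known.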
It’s not clear that the ordinary citizen can quite compete at this level – the overall winner of Dark Worlds was Tim Salimans, who this year gained a PhD in the analysis of “big data” at Erasmus University Rotterdam. The other two winners were professionals too. But that is part of the point of the exercise: crowd-sourcing is not just about soliciting routine, low-level effort from an untrained army of volunteers, but also about connecting skilled individuals to problems that would benefit from their expertise. And the search for dark matter needs all the help it can get.
Friday, November 29, 2013
Happy birthday MRS
Of all the regular meetings that I used to attend as a Nature editor, the one I enjoyed most was the annual Fall meeting of the US Materials Research Society. Partly because it was in Boston, but also because it was always full of diverse and interesting stuff, as well as being of a just about manageable scale. So I have a fondness for the MRS and was glad to be asked to write a series of portraits of areas in materials science for the MRS Bulletin to mark the society’s 40th anniversary. The result is a piece too long to set down here, but the kind folks at MRS Bulletin seem to have made the article freely available online here.
Tuesday, November 26, 2013
Shape-shifting
Oh, here’s one from BBC Future that I almost missed – the latest in ‘illusion optics’. I have a little video discussion of this too.
__________________________________________________________
In the tradition whereby science mines myth and legend for metaphors to describe its innovations, you might call this shape-shifting. Admittedly, the device reported in the journal Physical Review Letters by researchers in China is not going to equal Actaeon’s transformation into a stag, Metis into a fly, or Proteus into whatever he pleased. But it offers an experimental proof-of-principle that, using ideas and techniques related to invisibility cloaking, one object can be given the appearance of another. Oh, and the device does invisibility too.
This versatility is what marks out the ‘cloak’ made by Tie Jun Cui of the Southeast University in Nanjing, China, and his coworkers at Lanzhou University as distinct from the now considerable body of work on invisibility cloaks and other types of “transformation optics”. Surprisingly, perhaps, this versatility comes from a design that is actually easier to fabricate than many of the ‘invisibility cloaks’ made previously. The catch is that these shape-changes are not something you can actually see, but are apparent only when the transformed object is being detected from the effect it has on the electrical conductivity of the medium in which it is embedded.
The most sophisticated ‘invisibility cloaks’ made so far use structures called metamaterials to bend light around the hidden object, rather like water flowing around an obstacle in a stream. If the light rays from behind the object are brought back together again at the front, then to an observer they seem not to have deviated at all, but simply to have passed through empty space.
Researchers have also shown that, by rerouting light in other ways, a metamaterial cloak can enable so-called ‘illusion optics’ that gives one thing the appearance of another. However, with metamaterials this is a one-shot trick: the cloak would produce the same, single visual illusion regardless of what is hidden within it. What’s more, genuine invisibility and illusion optics are tremendously challenging to achieve with metamaterials, which no one really yet knows how to make in a way that will work with visible light for all the wavelengths we see. So at present, invisibility cloaks have been limited either to microwave frequencies or to simplified, partial cloaks in which an object may be hidden but the cloak itself is visible.
What’s more, each cloak only does one sort of transformation, for which it is designed at the outset. Cui and colleagues say that a multi-purpose shape-shifting cloak could be produced by making the components active rather than passive. That’s to say, rather than redirecting light along specified routes, they might be switchable so that the light can take different paths when the device is configured differently. You might compare it to a fixed rail track (passive), where there’s only one route, and a track with sets of points (active) for rerouting.
Active cloaks have not been much explored so far beyond the theory. Now Cui and his coworkers have made one. It hides or transforms objects that are sensed electrically, in a process that the researchers compare to the medical technology called electrical impedance tomography. Here, electrical currents or voltages measured on the surface of an object or region are used to infer the conductivity within it, and thereby to deduce the hidden structure. A similar technique is used in geophysics to look at buried rock structures using electrodes at the surface or down boreholes, and in industrial processes to look for buried pipes. It’s a little like using radar to reconstruct the shape of an object from the way it reflects and reshapes the echo.
Here, hiding an object would mean constructing a cloak to manipulate the electrical conductivity around it so that it seems as though the object isn’t there. And transforming its appearance involves rejigging the electric field so that the measurements made at a distance would infer an embedded object of a different shape. Cui and colleagues have built a two-dimensional version of such an illusionistic cloak, consisting of a network of resistors joined in a concentric ‘spider’s web’ pattern on an electrically conducting disk, with the cloaked region in a space at their centre.
To detect the object, an electrode at one position on the plate sets up an electric field, and this is measured around the periphery of the plate. Last year Cui and his colleagues made a passive version of an invisibility cloak, in which the resistor network guided electric currents around the central hole so as to give the impression, when the field was measured at the edges of the disk, that the cloak and its core were just part of the uniform background medium. Now they have wired up such a resistor network so that the voltage across each component, and thus the current passing through it, can be altered in a way that changes the apparent shape of the cloaked region, as inferred from measurements made at the disk’s edge.
In this way, the researchers could alter the ‘appearance’ of the central region to look invisible, or like a perfectly conducting material, or like a hole with zero conductivity. And all that’s needed is some nifty soldering to create the network from standard resistors, without any of the complications of metamaterials. That means it should be relatively easy to make the cloaks bigger, or indeed smaller.
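For readers who want a feel for why measurements at the rim encode what lies inside – and hence what the tunable resistors must compensate for – here is a minimal Python sketch of the forward problem for an arbitrary resistor network, solved with Kirchhoff’s laws. The four-node loop is a hypothetical toy, far cruder than the spider’s-web device described above:

```python
import numpy as np

def node_voltages(conductances, inject, ground):
    """Solve a resistor network via Kirchhoff's current law and return all node voltages.

    conductances: dict mapping node pairs (i, j) to conductance in siemens.
    inject: node where a 1 A probe current is injected; ground: node held at 0 V.
    """
    n = 1 + max(max(pair) for pair in conductances)
    L = np.zeros((n, n))                      # network admittance (graph Laplacian) matrix
    for (i, j), g in conductances.items():
        L[i, i] += g; L[j, j] += g
        L[i, j] -= g; L[j, i] -= g
    current = np.zeros(n)
    current[inject] = 1.0                     # probe electrode injects 1 A
    keep = [k for k in range(n) if k != ground]
    v = np.zeros(n)                           # grounded node stays at 0 V
    v[keep] = np.linalg.solve(L[np.ix_(keep, keep)], current[keep])
    return v

# Hypothetical loop of four 1-ohm resistors, probed between nodes 0 and 2
net = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (3, 0): 1.0}
print(node_voltages(net, inject=0, ground=2))

# Alter one branch: the voltages read at the rim shift, betraying the change.
# An active cloak retunes its resistors so that the rim readings stay the same.
net[(1, 2)] = 5.0
print(node_voltages(net, inject=0, ground=2))
```

Electrical impedance tomography works this forward problem in reverse: from a set of rim measurements like these, it infers the pattern of conductances inside.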
In theory this device could sustain the illusion even if the probe signal changes in some way (such as its position), by using a rapid feedback mechanism to recalculate how the voltages across the resistors need to be altered to keep the same appearance. The researchers say that it might even work for oscillating electrical fields, as long as their frequency is not too high – in other words, perhaps to mask or transform objects being sensed by radio waves. Here the resistor network would be constantly tuned to cancel out distortions in the probe signal. And because resistors warm up, the device could also be used to manipulate appearances as sensed by changes in the heat flow through the cloaked region.
Reference: Q. Ma et al., Physical Review Letters 111, 173901 (2013).
Thursday, November 21, 2013
The LHC comes to London
Here’s my latest piece for the Prospect blog. I also have a piece in the latest issue of the magazine on quantum computing, but I’ll post that shortly.
______________________________________________________________________
It may come as a surprise that not all physicists are thrilled by the excitement about the Higgs boson, now boosted further by the award of the physics Nobel prize to Peter Higgs and François Englert, who first postulated its existence. Some of them feel twinges of resentment at the way the European centre for particle physics CERN in Switzerland, where the discovery was made with the Large Hadron Collider (LHC), has managed to engineer public perception to imply that the LHC itself, and particle physics generally, is at the centre of gravity of modern physics. In fact most physicists don’t work on the questions that predominate at CERN, and the key concepts of the discipline are merely exemplified by, and not defined by, those issues.
I have shared some of this frustration at the skewed view that wants to make all physicists into particle-smashers. But after taking a preview tour of the new exhibition Collider just opening at London’s Science Museum, I am persuaded that griping is not the proper response. It is true that CERN has enviable public-relations resources, but the transformation of an important scientific result (the Higgs discovery) into an extraordinary cultural event isn’t a triumph of style over substance. It marks a shift in science communication that other disciplines can usefully learn from. Collider reflects this.
The exhibition has ambitions beyond the usual pedagogical display of facts and figures, evident from the way that the creative team behind it brought in theatrical expertise: video designer Finn Ross, who worked on the stage play of Mark Haddon’s The Curious Incident of the Dog in the Night Time, and playwright Michael Wynne. They have helped to recreate a sense of what it is like to actually work at CERN. The exhibits, many of them lumps of hardware from the LHC, are displayed in a mock-up of the centre’s offices (with somewhat over-generous proportions) and corridors, complete with poster ads for recondite conferences and the “CERN choir”. Faux whiteboards and blackboards – some with explanatory notes, others just covered with decorative maths – abound. Actors in a video presentation aim to convince us of the ordinariness of the men and women who work here, as well as of their passionate engagement and excitement with the questions they are exploring.
The result is that the findings of the LHC’s experiments so far – which are difficult to explain at the best of times, although most interested folks have probably gathered by now that the Higgs boson is a particle responsible for giving some other fundamental particles their mass – are not, as in the traditional science-museum model, spruced up and served up to the public as it were on a plate, in the form of carefully honed metaphors. The makeshift feel of the environment, a work-in-progress with spanners and bits of kit still lying around, is itself an excellent metaphor for the science itself: still under construction, making use of what is to hand, its final shape as yet undetermined. The experience is as much about what it means to do science as it is about what the science tells us.
This is a good thing, and the fact that CERN itself has become a kind of living exhibition – with more than 100,000 visitors a year and an active outreach programme with strong involvement of schools – is worth celebrating. The short presentations at the preview event also made it clear why scientists need help in thinking about public engagement. It has never been a secret that Peter Higgs himself has little interest in the hoopla and celebrity that his Nobel award has sent stratospheric. In a rare appearance here, he admitted to being concerned that all the emphasis on the particle now named after him might eclipse the other exciting questions the LHC will explore. Those are what will take us truly into uncharted territory; the Higgs boson is the last, expected part in the puzzle we have already assembled (the so-called Standard Model), whereas questions about whether all known particles have “supersymmetric” partners, and what dark matter is, demand hitherto untested physics.
Higgs is the classic scientist’s scientist, interested only in the work. When asked how he visualized the Higgs boson himself, he didn’t launch into the stock image of Margaret Thatcher moving through a cocktail party and “accreting mass” in the form of hangers-on, but just said that he didn’t visualize it at all, since he considers it impossible to visualize fundamental particles. He said he had little idea of why what seemed to be a previous lack of public interest in science has now become a hunger for it.
All this is not uncommon in scientists, who are not interested in developing pretty pictures and fancy words to communicate their thoughts. That no doubt helps them get on with the job, but it is why they need leaders such as CERN’s current director general Rolf-Dieter Heuer, who can step back and think about the message and the role in society. Hearteningly, Heuer asserted that “the interest in society was always there – we scientists just made the mistake of not satisfying it.”
As Heuer pointed out, the bigger picture is mind-boggling. “It took us fifty years to complete the Standard Model”, he said. “But ninety-five percent of the universe is still unknown. It’s time to enter the dark universe.”
Wednesday, November 13, 2013
Sceptical chemists
Here’s my latest Crucible column for the November issue of Chemistry World. It’s something that’s always puzzled me. I suppose I could lazily claim that the Comments section below the piece proves my point, but obviously the voices there are self-selecting. (All the same, enlisting Boyle in the cause of climate scepticism is laughable. And Boyle was, among other things, determined to keep politics out of his science.)
_______________________________________________________________________
“While global warming is recognised, I am not sure that all the reasons have been fully explored. Carbon dioxide is a contributor, but what about cyclic changes caused by the Earth’s relationship in distance to the Sun?”
“While climate change is occurring, the drivers of change are less clear.”
It’s those pesky climate sceptics again, right? Well yes – but ones who read Chemistry and Industry, and who are therefore likely to be chemists of some description. When the magazine ran a survey in 2007 on its readers’ attitudes to climate change, it felt compelled to admit that “there are still some readers who remain deeply sceptical of the role of carbon dioxide in global warming, or of the need to take action.”
“Our survey revealed there remain those who question whether the problem exists or if reducing carbon dioxide emissions will have any effect at all,” wrote C&I’s Cath O’Driscoll. The respondents who felt that “the industry should be doing more to help tackle climate change” were in a clear majority of 72% – but that left 28% who didn’t. This is even more than the one in five members of the general population who, as the IPCC releases its 5th Report on Climate Change, now seem to doubt that global warming is real.
This squares with my subjective impression, on seeing the Letters pages of Chemistry World (and its predecessor) over the years, that the proportion of this magazine’s readers who are climate sceptics is rather higher than the 3% of the world’s climate scientists apparently still undecided about the causes (or reality) of global warming. A letter from 2007 complaining about “the enormous resources being put into the campaign to bring down carbon emissions on the debatable belief that atmospheric carbon dioxide level is the main driver of climate change rather than the result of it” seemed fairly representative of this subset.
Could it be that chemists are somehow more prone to climate scepticism than other scientists? I believe there is reason to think so, although I’m of course aware that this means some of you might already be sharpening your quills.
One of the most prominent sceptics has been Jack Barrett, formerly a well-respected chemical spectroscopist at Imperial College whose tutorial texts were published by the RSC. Barrett now runs the campaigning group Barrett Bellamy Climate with another famous sceptic, naturalist David Bellamy. Several other high-profile merchants of doubt, such as Nicholas Drapela (fired by Oregon State University last year) and Andrew Montford, trained as chemists. It’s not clear if there is strong chemical expertise in the Australian climate-sceptic Lavoisier Group, but they choose to identify themselves with Lavoisier’s challenge to the mistaken “orthodoxy” of phlogiston.
If, as I suspect, a chemical training seems to confer no real insulation against the misapprehensions evident in the non-scientific public, why should that be? One possible reason is that anyone who has spent a lifetime in the chemical industry (especially in petrochemicals), assailed by the antipathy of some eco-campaigners to anything that smacks of chemistry, will be likely to develop an instinctive aversion to, and distrust of, scare stories about environmental issues. That would be understandable, even if it were motivated more by heart than mind.
But I wonder if there’s another factor too. (Given that I’ve already dug a hole with some readers, I might as well jump in it.) If I were asked to make gross generalizations about the character of different fields of science, I would suggest that physicists are idealistic, biologists are conservative, and chemists are best described by that useful rustic Americanism, “ornery”. None of these are negative judgements – they all have pros as well as cons. But there does seem to be a contrarian streak that runs through the chemically trained, from William Crookes and Henry Armstrong to James Lovelock, Kary Mullis, Martin Fleischmann and of course the king of them all, Linus Pauling (who I’d have put money on being some kind of climate sceptic). This is part of what makes chemistry fun, but it is not without its complications.
In any event, it could be important for chemists to consider whether (and if so, why) there is an unusually high proportion of climate-change doubters in their ranks. Of course, it’s equally true that chemists have made major contributions to the understanding of climate, beginning with Svante Arrhenius’s intuition of the greenhouse effect in 1896 and continuing through to the work of atmospheric chemists such as Paul Crutzen. Spectroscopists, indeed, have played a vital role in understanding the issues in the planet’s radiative balance, and chemists have been foremost in identifying and tackling other environmental problems such as ozone depletion and acid rain. Chemistry has a huge part to play in finding solutions to the daunting problems that the IPCC report documents. A vocal contingent of contrarians won’t alter that.
Saturday, November 09, 2013
Reviewing the Reich
Time to catch up a little with what has been happening with my new book Serving the Reich. It has had some nice reviews in the Observer, the Guardian, and Nature. I have also talked about the issues on the Nature podcast, of which there is now an extended version. I’ve also discussed it for the Guardian science podcast, although that’s apparently not yet online. It seems I’ll be talking about the book next year at the Brighton Science Festival, the Oxford Literary Festival (probably in tandem with Graham Farmelo, who has written a nicely complementary book on Churchill and the bomb) and the Hay Festival – I hope to have dates and details soon.
Friday, November 01, 2013
WIMPs are the new Higgs
Here’s a blog posting for Prospect. You can see a little video podcast about it too.
________________________________________________________________
So with the Higgs particle sighted and the gongs distributed, physics seems finally ready to move on. Unless the Higgs had remained elusive, or had turned out to have much more mass than theories predicted, it was always going to be the end of a story: the final piece of a puzzle assembled over the past several decades. But now the hope is that the Large Hadron Collider, and several other big machines and experiments worldwide, will be able to open a new book, containing physics that we don’t yet understand at all. And the first chapter seems likely to be all about dark matter.
Depending on how you look at it, this is one of the most exciting or the most frightening problems facing physicists today. We have ‘known’ about dark matter for around 80 years, and yet we still don’t have a clue what it is. And this is a pretty big problem, because there seems to be more than five times as much dark matter as there is ordinary matter in the universe.
It’s necessary to invoke dark matter to explain why rotating galaxies don’t fly apart: there’s not enough visible matter to hold them together by gravity, and so some additional, unseen mass appears to be essential to fulfil that role. But it must be deeply strange stuff – since it apparently doesn’t emit or absorb light or any other electromagnetic radiation (whence ‘dark’), it can’t be composed of any of the fundamental subatomic particles known so far. There are several other astronomical observations that support the existence of dark matter, but so far theories about what it might consist of are pretty much ad hoc guesses.
Take the current favourite: particles called WIMPs, which stands for weakly interacting massive particles. Pull that technical moniker apart and you’re left with little more than a tautology, a bland restatement of the fact that we know dark matter must have mass but barely interacts, if at all, in any other way with light or regular matter.
It’s that “barely” on which hopes are pinned for detecting the stuff. Perhaps, just once in a blue moon, a WIMP careening through space does bump into common-or-garden atoms, and so discloses clues about its identity. The idea here is that, as well as gravity, WIMPs might also respond to another of the four fundamental forces of nature, called the weak nuclear force – the most exotic and hardest to explain of the forces. An atom knocked by a WIMP should emit light, which could be detected by sensitive cameras. To hope to see such a rare event in an experiment on earth, it’s necessary to exclude all other kinds of colliding cosmic particles, such as cosmic rays, which is why detectors hoping to spot a WIMP are typically housed deep underground.
One such, called LUX, sits at the foot of a 1500m mineshaft in the Black Hills of South Dakota, and has just announced the results of its first three months of WIMP-hunting. LUX stands for Large Underground Xenon experiment, because it seeks WIMP collisions within a cylinder filled with liquid xenon, and it is the most sensitive of the dark-matter detectors currently operating.
The result? Nothing. Not a single glimmer of a dark-matter atom-crash. But this tells us something worth knowing, which is that previous claims by other experiments, such as the Cryogenic Dark Matter Search in a Minnesota mine, to have seen potential dark-matter events probably now have to be rejected. What’s more, every time a dark-matter experiment fails to see anything, we discover more about where not to look: the possibilities are narrowed.
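A back-of-envelope estimate shows why ‘nothing’ need not be a shock. Every number in the sketch below is a round, assumed value, and the formula ignores nuclear form factors, the spread of WIMP velocities and detector thresholds – all things the real LUX analysis treats carefully – but it conveys the scale of the problem:

```python
# Illustrative WIMP event-rate estimate; all inputs are assumed round numbers.
AVOGADRO = 6.022e23
SECONDS_PER_YEAR = 3.15e7

rho_chi   = 0.3        # local dark-matter density, GeV per cm^3 (assumed)
m_chi     = 100.0      # WIMP mass, GeV (assumed)
v_mean    = 2.3e7      # typical WIMP speed, cm/s (~230 km/s, assumed)
sigma_xe  = 1.7e-41    # WIMP-xenon scattering cross-section, cm^2 (assumed)
target_kg = 100.0      # fiducial mass of liquid xenon, kg (assumed)

n_chi    = rho_chi / m_chi                          # WIMPs per cm^3
n_nuclei = target_kg * 1000.0 / 131.3 * AVOGADRO    # xenon nuclei in the target
rate     = n_chi * v_mean * sigma_xe * n_nuclei     # collisions per second

print(f"Expected events per year: {rate * SECONDS_PER_YEAR:.3f}")   # ~0.02
```

With assumptions in that ballpark, even a hundred kilograms of xenon might expect well under one WIMP collision a year – hence the long, patient runs and the obsession with excluding every other source of flashes.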
The LUX results are the highest-profile in a flurry of recent reports from dark-matter experiments. An experiment called DAMIC has just described early test runs at the underground SNOLAB laboratory in a mine near Sudbury in Canada, which hosts a variety of detectors for exotic particles, although the full experiment won’t be operating until next year. And a detector called the Alpha Magnetic Spectrometer (AMS) carried on board the International Space Station can spot the antimatter particles called positrons that should be produced when two WIMPs collide and annihilate. In April AMS reported a mysterious signal that might have – possibly, just about – been “consistent” (as they say) with positrons from dark-matter annihilation, but could also have more mundane explanations. LUX now makes the latter interpretation by far the most likely, although an international group of researchers has just clarified the constraints the AMS data place on what dark matter can and can’t be like.
What now? LUX has plenty of searching still to do over the next two years. It’s even possible that dark-matter particles might be produced in the high-energy collisions of the LHC. But it is also possible that we’ve been barking up the wrong tree after all – for example, that what we think is dark matter is in fact a symptom of some other, unguessed physical principle. We’re still literally groping around in the dark.
Uncertainty about uncertainty
Here’s a news story I have written for Physics World. It makes me realize I still don’t understand the uncertainty principle, or at least not in the way I thought I did – so it doesn’t, then, apply to successive measurements on an individual quantum particle?!
But while on the topic of Heisenberg, I discuss my new book Serving the Reich on the latest Nature podcast, following a very nice review in the magazine from Robert Crease. I’m told there will be an extended version of the interview put up on the Nature site soon. I’ve also discussed the book and its context for the Guardian science podcast, which I guess will also appear soon.
____________________________________________________________
How well did Werner Heisenberg understand the uncertainty principle for which he is best known? When he proposed this central notion of quantum theory in 1927 [1], he offered a physical picture to help it make intuitive sense, based on the idea that it’s hard to measure a quantum particle without disturbing it. Over the past ten years an argument has been unfolding about whether Heisenberg’s original analogy was right or wrong. Some researchers have argued that Heisenberg’s ‘thought experiment’ isn’t in fact restricted by the uncertainty relation – and several groups recently claimed to have proved that experimentally.
But now another team of theorists has defended Heisenberg’s original intuition. And the argument shows no sign of abating, with each side sticking to their guns. The discrepancy might boil down to the irresolvable issue of what Heisenberg actually meant.
Heisenberg’s principle states that we can’t measure certain pairs of variables for a quantum object – position and momentum, say – both with arbitrary accuracy. The better we know one, the fuzzier the other becomes. The uncertainty principle says that the product of the uncertainties in position and momentum can be no smaller than a simple fraction of Planck’s constant h.
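In the modern textbook statement, that ‘simple fraction’ is h/4π: writing σx and σp for the spreads in position and momentum,

\sigma_x \, \sigma_p \;\ge\; \frac{\hbar}{2} = \frac{h}{4\pi}.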
Heisenberg explained this by imagining a microscope that tries to image a particle like an electron [1]. If photons bounce off it, we can “see” and locate it, but at the expense of imparting energy and changing its momentum. The more gently it is probed, the less the momentum is perturbed but then the less clearly it can be “seen.” He presented this idea in terms of a tradeoff between the ‘error’ of a position measurement (Δx), owing to instrumental limitations, and the resulting ‘disturbance’ in the momentum (Δp).
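Recast in the error-disturbance language on which the later dispute turns, Heisenberg’s microscope argument is usually read as asserting

\varepsilon(x)\,\eta(p) \;\gtrsim\; \frac{\hbar}{2},

although exactly how Heisenberg would have formalised it is itself part of the argument.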
Subsequent work by others showed that the uncertainty principle does not rely on this disturbance argument – it applies to a whole ensemble of identically prepared particles, even if every particle is measured only once to obtain either its position or its momentum. As a result, Heisenberg abandoned the argument based on his thought experiment. But this didn’t mean it was wrong.
In 1988, however, Masanao Ozawa, now at Nagoya University in Japan, argued that Heisenberg’s original relationship between error and disturbance doesn’t represent a fundamental limit of uncertainty [2]. In 2003 he proposed an alternative relationship in which, although the two quantities remain related, their product can be arbitrarily small [3].
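Ozawa’s 2003 relation keeps the error ε(x) and disturbance η(p) but adds two cross terms involving the quantum state’s intrinsic spreads σ(x) and σ(p):

\varepsilon(x)\,\eta(p) + \varepsilon(x)\,\sigma(p) + \sigma(x)\,\eta(p) \;\ge\; \frac{\hbar}{2}.

Because the extra terms can carry the burden, the error-disturbance product on its own can in principle be squeezed towards zero.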
Last year Ozawa teamed up with Yuji Hasegawa at the Vienna University of Technology and coworkers to see if his revised formulation of the uncertainty principle held up experimentally. Measuring pairs of spin components of polarized neutrons, they found that, as Ozawa predicted, error and disturbance still involve a tradeoff but with a product that can be smaller than Heisenberg’s limit [4].
At much the same time, Aephraim Steinberg and coworkers at the University of Toronto conducted an optical test of Ozawa’s relationship, which also seemed to bear out his prediction [5]. Ozawa has since collaborated with researchers at Tohoku University in another optical study, with the same result [6].
Despite all this, Paul Busch at the University of York in England and coworkers now defend Heisenberg’s position, saying that Ozawa’s argument does not apply to the situation Heisenberg described [7]. “Ozawa's inequality allows arbitrarily small error products for a joint approximate measurement of position and momentum, while ours doesn’t”, says Busch. “Ours says if the error is kept small, the disturbance must be large.”
“The two approaches differ in their definition of Δx and Δp, and there is the freedom to make these different choices”, explains quantum theorist Johannes Kofler of the Max Planck Institute of Quantum Optics in Garching, Germany. “Busch et al. claim to have the proper definition, and they prove that their uncertainty relation always holds, with no chance for experimental violation.”
The disagreement, then, is all about which definition is best. Ozawa’s is based on the variance in two measurements made sequentially on a particular quantum state, whereas that of Busch and colleagues considers the fundamental performance limits of a particular measuring device, and thus is independent of the initial quantum state. “We think that must have been Heisenberg's intention”, says Busch.
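Schematically, then: Ozawa’s ε and η are evaluated for the particular state being measured, so the extra σ terms in his inequality can take up the slack and let the product ε(x)·η(p) shrink below ħ/2. Busch and colleagues instead characterize the measuring device itself, in effect by its worst-case performance over all input states, and with that definition they recover a tradeoff of the original Heisenberg form: if the device’s position error is small, its momentum disturbance must be correspondingly large, with the product bounded below by a constant of order ħ.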
But Ozawa feels Busch and colleagues are focusing on instrumental limitations that have little relevance to the way devices are actually used. “My theory suggest if you use your measuring apparatus as suggested by the maker, you can make better measurement than Heisenberg's relation”, he says. “They now prove that if you use it very badly – if, say, you use a microscope instead of telescope to see the moon – you cannot violate Heisenberg's relation. Thus, their formulation is not interesting.”
Steinberg and colleagues have already responded to Busch et al. in a preprint that tries to clarify the differences between their definition and Ozawa’s [8]. What Busch and colleagues quantify, they say, “is not how much the state that one measures is disturbed, but rather how much ‘disturbing power’ the measuring apparatus has.”
“Heisenberg's original formula holds if you ask about ‘disturbing power’, but the less restrictive inequalities of Ozawa hold if you ask about the disturbance to particular states”, says Steinberg. “I personally think these are two different but both interesting questions.” But he feels Ozawa’s formulation is closer to the spirit of Heisenberg’s.
In any case, all sides agree that the uncertainty principle is not, as some popular accounts imply, about the mechanical effects of measurement – the ‘kick’ to the system. “It is not the mechanical kick but the quantum nature of the interaction and of the measuring probes, such as a photon, that are responsible for the uncontrollable quantum disturbance”, says Busch.
In part the argument comes down to what Heisenberg had in mind. “I cannot exactly say how much Heisenberg understood about the uncertainty principle”, Ozawa says. “But”, he adds, “I can say we know much more than Heisenberg.”
References
1. W. Heisenberg, Z. Phys. 43, 172 (1927).
2. M. Ozawa, Phys. Rev. Lett. 60, 385 (1988).
3. M. Ozawa, Phys. Rev. A 67, 042105 (2003).
4. J. Erhart et al., Nat. Phys. 8, 185 (2012).
5. L. A. Rozema et al., Phys. Rev. Lett. 109, 100404 (2012).
6. S.-Y. Baek, F. Kaneda, M. Ozawa & K. Edamatsu, Sci. Rep. 3, 2221 (2013).
7. P. Busch, P. Lahti & R. F. Werner, Phys. Rev. Lett. 111, 160405 (2013).
8. L. A. Rozema, D. H. Mahler, A. Hayat & A. M. Steinberg, preprint arXiv:1307.3604 (2013).
On the edge
In working on my next book (details soon), I have recently been in touch with a well-known science-fiction author, who very understandably took the precaution of saying that our correspondence was private and not intended for blurb-mining. He said he’d had a bad experience of providing a blurb years back and had vowed to impose a blanket ban on them henceforth.
That’s fair enough, but I’m glad I’m able to remain open to the idea. I often have to decline (not that my opinion is likely to shift many copies), but if I never did it at all then I’d miss out on seeing some interesting material. I certainly had no hesitation in offering quotes for a book just published by OUP, Aid on the Edge of Chaos by Ben Ramalingam. Having seen the rather stunning list of endorsements on Amazon, I’m inclined to say I’m not worthy anyway, but there’s no doubt that Ben’s book deserves it (along with the glowing reader reviews so far). Quite aside from the whole perspective on aid, the book provides one of the best concise summaries I have seen of complexity science and its relation to human affairs generally – it is worth reading for that alone.
The book’s primary thesis is that these ideas should inform a rethinking of the entire basis of international aid. In particular, aid needs to be adaptive, interconnected and bottom-up, rather than being governed by lumbering international bodies with fixed objectives and templates. But Ben presents this idea not as some glib, vague panacea: it is closely tied to the practical realities of the matter. It is a view wholly in accord with the kind of thinking that was (and hopefully still is) being fostered by the FuturICT project, although aid is one social system that I don’t think they had considered in any real detail – I certainly had not.
I very much hope this book gets seen and, more importantly, acted on. There are plans afoot for its ideas to be debated at the Wellcome Trust's centre in London in January, which is sure to be an interesting event.