Yes, it looks different. It sort of just happened. I was updating my links, and then they all vanished and I got this new look, complete with Twitter feed. The web works in mysterious ways.
Anyway, here is a piece that has been published in the February issue of Prospect. There will be more here on the Polymath project some time in the near future.
______________________________________________________________
Our cultural relationship with the world of mathematics is mythologized like no other academic discipline. While the natural sciences are seen to keep some roots planted in the soil of daily life, in inventions and cures and catastrophes, maths seems to float freely in an abstract realm of number, as much an art as a science. More than any white-coated boffin, its proponents are viewed as unworldly, with minds unfathomably different from ours. We revel in stories of lone geniuses who crack the most refractory problems yet reject status, prizes and even academic tenure. Maths is not just a foreign country but an alien planet.
Some of the stereotypes are true. When the wild-haired Russian Grigori Perelman solved the notorious Poincaré conjecture in 2003, he declined first the prestigious Fields Medal and then (more extraordinarily to some) the $1m Millennium Prize officially awarded to him in 2010. The prize was one of seven offered by the non-profit, US-based Clay Mathematics Institute for solutions to seven of the most significant outstanding problems in maths.
Those prizes speak to another facet of the maths myth. It is seen as a range of peaks to be scaled: a collection of ‘unsolved problems’, solutions of which are guaranteed to bring researchers (if they want it) fame, glory and perhaps riches. In this way maths takes on a gladiatorial aspect, encouraging individuals to lock themselves away for years to focus on the one great feat that will make their reputation. Again, this is not all myth; most famously, Andrew Wiles worked in total secrecy in the 1990s to conquer Fermat’s Last Theorem. Even if maths is in practice more comradely than adversarial – people have been known to cease working on a problem, or to avoid it in the first place, because they know someone else is already doing so – nonetheless its practitioners can look like hermits bent on Herculean Labours.
It is almost an essential part of this story that those labours are incomprehensible to outsiders. And that too is often the reality. I have reported for several years now on the Abel prize, widely seen as the ‘maths Nobel’ (not least because it is awarded by the Norwegian Academy of Science and Letters). Invariably, describing what the recipients are being rewarded for becomes an impressionistic exercise, a matter of sketching out a nested Russian doll of recondite concepts in a tone that implies “Don’t ask”.
Yet this public image of maths is only part of the story. For one thing, some of the hardest problems are actually the most simply stated. Fermat’s Last Theorem, named after the seventeenth-century mathematician Pierre Fermat, who claimed to have a proof that he couldn’t fit in the page margin, is a classic example. It states that there are no positive whole-number solutions for a, b, and c in the equation a**n + b**n = c**n if n is a whole number larger than 2. Because it takes only high-school maths to understand the problem, countless amateurs were convinced that high-school maths would suffice to solve it. When I was an editor at Nature, ‘solutions’ would arrive regularly, usually handwritten in spidery script by authors who would never accept they had made trivial errors. (Apparently Wiles’ proof, which occupied 150 pages and used highly advanced maths, has not deterred these folks, who now seek acclaim for a ‘simpler’ one.)
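As an aside of my own, not part of the original article: a few lines of Python make the statement concrete. A brute-force search over small numbers finds no counterexamples – though of course such a finite search proves nothing at all.

```python
def fermat_counterexamples(max_n=6, max_abc=50):
    """Search small a, b, c, n for a**n + b**n == c**n with n > 2 (illustrative only)."""
    hits = []
    for n in range(3, max_n + 1):
        nth_powers = {c**n: c for c in range(1, max_abc + 1)}
        for a in range(1, max_abc + 1):
            for b in range(a, max_abc + 1):
                if a**n + b**n in nth_powers:
                    hits.append((a, b, nth_powers[a**n + b**n], n))
    return hits

print(fermat_counterexamples())   # [] -- no counterexamples in this small range
```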
The transparency of Fermat’s Last Theorem is shared by some of the other Millennium Prize problems and further classic challenges in maths. Take Goldbach’s conjecture, which makes a claim about the most elusive of all mathematical entities – the prime numbers. These are integers greater than 1 that have no factors other than themselves and 1: for example, 2, 3, 5, 7, 11 and 13. The eighteenth-century German mathematician Christian Goldbach is credited with proposing that every even integer greater than 2 can be expressed as the sum of two primes: for example, 4=2+2, 6=3+3, and 20=7+13. One can of course simply work through all the even numbers in turn to see if they can be chopped up this way, and so far the conjecture has been found empirically to hold true up to about 4x10**18. But such number-crunching is no proof, and without a proof one can’t be sure that an exception won’t turn up around, say, 10**21. Those happy to accept that, given the absence of exceptions so far, they’re unlikely to appear later, are probably not destined to be mathematicians.
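Here, as an illustrative sketch of my own, is the kind of number-crunching just described: a short script that checks the conjecture for even numbers up to a modest limit, finding one prime split for each.

```python
def prime_sieve(n):
    """Boolean list where sieve[k] is True iff k is prime (sieve of Eratosthenes)."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return sieve

def goldbach_split(n, sieve):
    """Return one way of writing even n as a sum of two primes, or None."""
    for p in range(2, n // 2 + 1):
        if sieve[p] and sieve[n - p]:
            return p, n - p
    return None

limit = 10_000
sieve = prime_sieve(limit)
assert all(goldbach_split(n, sieve) for n in range(4, limit + 1, 2))
print(goldbach_split(20, sieve))   # (3, 17) -- the article's 7 + 13 works too
```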
Goldbach’s conjecture would be an attractive target for young mathematicians seeking to make their name, but it won’t make them money – it’s not a Millennium Problem. One of the most alluring of that select group is a problem that doesn’t really involve numbers at all, but concerns computation. It is called the P versus NP problem, and is perhaps best encapsulated in the idea that the answer to a problem is obvious once you know it. In other words, it is often easier to verify an answer than to find it in the first place. The P versus NP question is whether, for all problems that can be verified quickly (there’s a technical definition of ‘quickly’), there exists a way of actually finding the right answer comparably fast. Most mathematicians and computer scientists think that this isn’t so – in formal terms, that NP is not equal to P, meaning that some problems are truly harder to solve than to verify. But there’s no proof of that.
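To make the verify-versus-find contrast concrete, here is a small sketch of my own using the subset-sum problem (a standard NP example, not one mentioned in the article): checking a proposed answer takes a single pass over it, whereas the obvious way of finding one tries exponentially many subsets.

```python
from itertools import combinations

def verify(numbers, subset, target):
    # Checking a claimed answer: just add it up -- quick however long the list is.
    return all(x in numbers for x in subset) and sum(subset) == target

def find(numbers, target):
    # The obvious search: try every subset -- up to 2**len(numbers) of them.
    for r in range(len(numbers) + 1):
        for combo in combinations(numbers, r):
            if sum(combo) == target:
                return combo
    return None

nums = [3, 34, 4, 12, 5, 2]
print(find(nums, 9))             # (4, 5) -- found only after trying many subsets
print(verify(nums, (4, 5), 9))   # True -- checked in a single pass
```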
This is a maths challenge with unusually direct practical implications. If NP=P, we would know that, for some computing problems that are currently very slow to solve, such as finding the optimal solution to a complex routing problem, there is in fact a relatively efficient way to get the answer. The problem has philosophical ramifications too. If NP=P, this would imply that anyone who can understand Andrew Wiles’ solution to Fermat’s Last Theorem (which is more of us than you might think, given the right guidance) could also in principle have found it. The rare genius with privileged insight would vanish.
Perhaps the heir to the mystique of Fermat’s Last Theorem, meanwhile, is another of the Millennium Problems: the Riemann hypothesis. This is also about prime numbers. They keep popping up as one advances through the integers, and the question is: is there any pattern to the way they are distributed? The Riemann hypothesis implies something about that, although the link isn’t obvious. Its immediate concern is the Riemann zeta function, denoted ζ(s), which is equal to the sum of 1**-s + 2**-s + 3**-s + …, where s is a complex number, meaning that it contains a real part (an ‘ordinary’ number) and an imaginary part incorporating the square root of -1. (Already I’m skimping on details.) If you plot a graph of ζ as a function of s, you’ll find that for certain values of s it is equal to zero. Here’s Riemann’s hypothesis: that the values of s for which ζ(s)=0 are always (sorry, with the exception of the negative even integers) complex numbers whose real part is precisely ½. It turns out that these zero values of ζ determine how far successive prime numbers deviate from the smooth distribution predicted by the so-called prime number theorem. Partly because it pronounces on the distribution of the primes, a proof of the Riemann hypothesis would establish several other important conjectures at a stroke.
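As a purely numerical illustration of my own, the snippet below uses the third-party mpmath library (my choice; nothing in the article depends on any software) to locate the first nontrivial zero and confirm that its real part is ½.

```python
from mpmath import mp, zeta, zetazero

mp.dps = 25                 # work to 25 decimal places
rho = zetazero(1)           # the first nontrivial zero of the zeta function
print(rho)                  # roughly 0.5 + 14.1347251417i: real part exactly 1/2
print(abs(zeta(rho)))       # essentially zero
print(zeta(-2), zeta(-4))   # the 'trivial' zeros at the negative even integers
```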
The distribution of the primes provides the context for a recent and instructive episode in how maths is done. Although primes become ever rarer as the numbers get bigger, every so often two will turn up as adjacent odd numbers: so-called twins, such as 26681 and 26683. But do these ‘twin primes’ keep cropping up forever? The (unproven) twin-primes hypothesis says that they do.
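A few lines of Python – again my own illustration rather than anything from the article – will list twin primes below a limit and confirm the pair quoted above. Needless to say, listing examples is no proof that the twins go on forever.

```python
def primes_below(n):
    """All primes less than n, by a simple sieve of Eratosthenes."""
    sieve = [True] * n
    sieve[0:2] = [False, False]
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

ps = primes_below(30_000)
twins = [(p, q) for p, q in zip(ps, ps[1:]) if q - p == 2]
print(twins[:5])                  # [(3, 5), (5, 7), (11, 13), (17, 19), (29, 31)]
print((26681, 26683) in twins)    # True -- the pair mentioned above
```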
In April of last year, a relatively unknown mathematician at the University of New Hampshire named Yitang Zhang unveiled a proof of a ‘weaker’ version of the twin-primes hypothesis, showing that there are infinitely many near-twins separated by less than 70 million. (That sounds like a much wider gap than 2, but it’s still relatively small when the primes themselves are gargantuan.) Zhang, a Chinese immigrant who had earlier been without an academic job for several years, fits the bill of the lone genius conquering a problem in seclusion. But after news of his breakthrough spread on maths blogs, something unusual happened. Others started chipping in to reduce Zhang’s bound of 70 million, and in June one of the world’s most celebrated mathematicians, Terence Tao at the University of California at Los Angeles, set up an online ‘crowdsourcing’ effort under the banner of the Polymath project to pool resources. Before long, 70 million had dropped to 4680. Now, thanks to work by a young researcher named James Maynard at the University of Montreal, it is down to 600.
This extraordinarily rapid progress on a previously recalcitrant problem was thus a collective effort: maths isn’t just about secret labours by individuals. And while the shy, almost gnomically terse Zhang might fit the popular image, the gregarious and personable Tao does not.
What’s more, while projects like the Millennium Problems play to the image of maths as a set of peaks to scale, mathematicians themselves value other traits besides the ability to crack a tough problem. Abel laureates are commonly researchers who have forged new tools and revealed new connections between different branches of mathematics. Last year’s winner, the Belgian Pierre Deligne, who among other things solved a problem in algebraic geometry analogous to the Riemann hypothesis, was praised for being a “theory builder” as well as a “problem solver”, and the 2011 recipient John Milnor was lauded as a polymath who “opened up new fields”. The message for the young mathematician, then, might be not to lock yourself away but to broaden your horizons.
Friday, January 31, 2014
Wednesday, January 29, 2014
Follow this
I thought I could resist. I really did. I was convinced it would just waste my time, and perhaps it will. But here it is: I’m on Twitter. https://twitter.com/philipcball, in case you’re interested. Time will tell.
Friday, January 24, 2014
Great balls of fire
This ball lightning story has got everywhere (like here and here and here), often inaccurately – the paper in Physical Review Letters doesn’t report the first time ball lightning has been captured on video; you can see several such events on YouTube. But it’s the first time that a spectrum of ball lightning has been captured, which lets us see what it’s made of. In this case, that’s apparently dirt: the spectrum shows atomic emission lines characteristic of silicon, calcium and iron. That seems to support the idea that this mysterious atmospheric phenomenon is caused by a conventional lightning strike vaporizing the soil – but actually it’s too early to say whether it supports any particular theory.
In any event, I got a preview of all this because I wrote the story for Physical Review Focus. And I have to say that the notion of conducting field observations on the Qinghai Plateau in the dark during a thunderstorm strikes me as not one of the most desirable jobs in science – Ping Yuan, the group leader, said in what I suspect is considerable understatement that this place is “difficult to access.”
It’s funny stuff, ball lightning. We used to regularly get papers submitted to Nature offering theories of it – these were particularly popular with the Russians – and I had the pleasure of publishing the oft-cited Japanese work in 1991 in which two researchers made what looked like a ball-lightning-style plasma ball in the lab. But we’ve still got a way to go in understanding it. Anyway, I’ve made a little video about this business too.
Addendum: Whoa, now I discover that I wrote this piece 14 years ago about the original theory that ball lightning is a bundle of vaporized dirt.
Tuesday, January 21, 2014
The year of crystallography
Here is 2012 Chemistry Nobel Laureate Brian Kobilka speaking yesterday at the opening ceremony of the International Year of Crystallography at the UNESCO building in Paris. It's a fun but slightly strange gathering, at least in my experience - a curious mixture of science, politics, development programs, and celebration. But UNESCO has some very commendable plans for what this year will achieve, for example in terms of research initiatives in Africa. I had a comment on the IYCr in Chemistry World late last year, and Athene Donald has a nice perspective piece online at the Guardian.
Friday, January 17, 2014
Flight of the robot jellyfish
Here’s my other little piece for Nature news. The videos of this thing in flight, provided on the Nature site, are rather beautiful.
_____________________________________
Its transparent wings fixed to a delicate wire framework recall the diaphanous, veined wings of an insect. But when the flying machine devised by applied mathematicians Leif Ristroph and Stephen Childress of New York University rises gracefully into the air, the undulations of its conical form resemble nothing so much as a jellyfish swimming through water, the device’s electrical power lead trailing like a tentacle. It is, in short, like no other flying machine you have seen before.
This is not the first small artificial ornithopter – a flying machine capable of hovering like a dragonfly or hummingbird by the beating of its wings. But what distinguishes Ristroph and Childress’s craft from those like the flapping insectoid robots reported by researchers at Harvard last year [1], with a wingspan of barely 3 cm, is that it can remain stable in flight with the movement of its wings alone, without the need for additional stabilizers or complex feedback control loops to avoid flipping over. The new ornithopter has four droplet-shaped wings of Mylar plastic film about 5 cm wide, arranged around a spherical body and attached to an articulated carbon-fibre framework driven by a tiny motor; the whole craft weighs no more than 2.1 g. It can execute forward flight and stable hovering, and can right itself automatically from tilting. The motion of the wings generates a downward jet, as do the undulations of a jellyfish bell. The absence of this strategy among flying animals, the researchers say, remains a mystery. The work is reported in the Journal of the Royal Society Interface [2].
References
1. Ma, K. Y., Chirarattananon, P., Fuller, S. B. & Wood, R. J. Science 340, 603-607 (2013).
2. Ristroph, L. & Childress, S. J. R. Soc. Interface 20130992 (2014).
Wednesday, January 15, 2014
"Irrational" behaviour can be rational
I have a couple of news stories on Nature’s site this week. Here’s the first. This is, I think, more of a cautionary tale than a surprising discovery. One researcher I spoke to put it like this:
“Imagine a person trying to climb to the top of the hill. Each step up takes this decision maker toward her goal. We see this person trudging along upward, but then we see this person not step uphill, but step downhill. Is this irrational? As it turns out there is a large boulder in her way and stepping down and around the boulder made sense considering the larger goal of getting to the top of the hill. But, if you only look at her behavior step by step, then moving downhill will look “irrational” or “intransitive” but really that’s a misunderstanding of the problem and landscape which is larger than just one step at a time. Moreover it is a fundamental misunderstanding of the idea of rationality to demand that every step of the decision maker be upward in order to be rational.”
The moral, I think, is that when we see choices like this that appear to be irrational, it pays to look for “boulders” before assuming that they are truly the result of error, limited cognitive resources, or sheer caprice.
__________________________________________________
Theory shows it may be best to rearrange your preferences if the options might change
You prefer apples to oranges, but cherries to apples. Yet if I offer you just cherries and oranges, you take the oranges. Are you stupid or crazy?
Not necessarily either, according to a new study. It shows that in some circumstances a decision like this, which sounds irrational, can actually be the best one. The work is published in Biology Letters [1].
Organisms, including humans, are often assumed to be evolutionarily hard-wired to make optimal decisions, to the best of their ability. Sticking with fixed preferences when weighing up choices – for example, in selecting food sources – would seem to be one aspect of such rationality. If A is preferred over B, and B over C, then surely A should be selected when the options are just A and C? This seemingly logical ordering of preferences is called transitivity.
What’s more, if A is preferred when both B and C are available, then A should ‘rationally’ remain the first choice from only A and B – a principle called the independence of irrelevant alternatives (IIA).
But sometimes animals don’t display such apparent logic. For example, honeybees and gray jays [2] and hummingbirds [3] have been seen to violate IIA. “On witnessing such behaviour in the past, people have simply assumed that it is not optimal, and then proposed various explanations for it”, says mathematical biologist Pete Trimmer of the University of Bristol in England. “They assume that the individual or species is not adapted to solve the given task, or that it is too costly to compute it accurately.”
The theoretical model of Trimmer and his colleagues shows that, in contrast, violations of transitivity can sometimes be adaptively optimal and therefore perfectly rational. “It should mean that researchers will be less prone to quickly claiming that a particular species or individual is behaving irrationally” in these cases, he says.
The key to the apparent “irrationality” in the Bristol group’s model is that the various choices might appear or disappear in the future. Then the decision becomes more complicated than a simple, fixed ranking of preferences. Is it better to expend time and energy eating a less nutritious food that’s available now, or to ignore it because a better alternative might become available in a moment?
The researchers find that, for some particular choices of the nutritional values of food sources A, B and C, and of their probabilities of appearing or vanishing in the future, an optimal choice for pairs of foods can prefer B to A, C to B and A to C, which violates transitivity.
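To see what such a violation amounts to, here is a tiny sketch of my own with made-up pairwise choices (it is not the paper’s model): if B beats A, C beats B and A beats C, the preferences contain a cycle that no fixed ranking could produce.

```python
from itertools import permutations

# Hypothetical pairwise choices (winner, loser) -- not data from the paper.
prefers = {("B", "A"), ("C", "B"), ("A", "C")}

def has_cycle(prefs, items=("A", "B", "C")):
    # Transitivity fails if some ordering of the three items chains back on itself.
    return any({(x, y), (y, z), (z, x)} <= prefs
               for x, y, z in permutations(items, 3))

print(has_cycle(prefers))   # True -- no fixed ranking can generate these choices
```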
Trimmer and colleagues also find some situations where IIA is violated in the optimal solution. These choices look irrational, but aren’t.
Behavioural ecologist Tanya Latty of the University of Sydney, who has observed violations of IIA in the food choices of a slime mould [4], points out that some examples of apparent irrationality seen in the foraging decisions of non-human animals are already understood to come from the fact that animals rarely have all their options available at once. “The choice is not so much ‘which item should I consume’ as ‘should I spend time consuming this particular item, or should I keep looking’?” she explains. “Some of what we perceive as irrational behaviour would then simply be the result of presenting animals with the unusual case of a simultaneous choice, when they have evolved to make optimal sequential choices.”
Latty feels that the new work by Trimmer and colleagues “goes some way toward combining the sequential and simultaneous viewpoints”. It helps to show that “decision strategies that appear irrational in simplified experimental environments can be adaptive in the complex, dynamic worlds in which most organisms live.”
She thinks it might be possible to test these ideas. “I suspect it would be easy enough to train animals (or humans) to forage on items that had different probabilities of disappearing or reappearing. Then you could test whether or not playing with these probabilities influences preferences.” The difficulty is that organisms may already take into account natural tendencies for choices to disappear.
“I think it is absolutely worth investigating further”, Latty says. “It has certainly given me some ideas for future experiments.”
“The paper is very nicely done”, says economist and behavioural scientist Herb Gintis of the Santa Fe Institute in New Mexico, but he adds that “there is nothing anomalous or even surprising about these results.”
Gintis explains that the choices only seem to violate transitivity or IIA because there are in fact more than three options in play. “Usually when IIA fails, the modeler is using the wrong choice space”, he says. “An expansion of the choice space to include probabilities of appearance and disappearance would correct this.”
Trimmer sees no reason why the results shouldn’t apply to humans. “Of course, much of the time we make errors, which is a very simple explanation for any behaviour which appears irrational”, he says. “But an individual who displays intransitive choices is not necessarily behaving erroneously.”
He feels that such behaviour could surface in economic contexts, for example in cases where people are choosing investment strategies from savings schemes that may or may not be available in the future. In other words, while economic behaviour is clearly not always rational (as some economists have assumed), we shouldn’t be too hasty in assuming that what seems irrational necessarily is.
References
1. McNamara, J. M., Trimmer, P. C. & Houston, A. I. Biol. Lett. 20130935 (2014).
2. Shafir, S., Waite, T. A. & Smith, B. H. Behav. Ecol. Sociobiol. 51, 180-187 (2002).
3. Bateson, M., Healy, S. D. & Hurly, T. A. Anim. Behav. 63, 587-596 (2002).
4. Latty, T. & Beekman, M. Proc. R. Soc. B 278, 307-312 (2011).
Tuesday, January 14, 2014
The future of physics
Research Funding Insight recently asked me to write a piece on the “future of physics”, to accompany a critique of string theory and its offshoots by Jim Baggott (see below). I wanted to take the opportunity to explain that, whatever the shortcomings of string theory might be, they most certainly do not leave physics as a whole in crisis. It is doing very nicely, because it is much, much broader than both string theory in particular and what gets called “fundamental physics” in general. So here it is. The article first appeared in Funding Insight on 7 January 2014, and I’m reproducing it here with kind permission of Research Professional. For more articles like this (including Jim Baggott’s), visit www.researchprofessional.com.
________________________________________________________________
Is physics at risk of disappearing up its own foundations? To read some of the recent criticisms of work on string theory, which seeks a fundamental explanation of all known forces and particles, you might think so. After about three decades of work, the theory is no closer to a solution – or rather, the number of possible solutions has mushroomed astronomically, while none of them is testable and they all rest on a base of untried speculations.
But while scepticism about the prospects for this alleged Theory of Everything may be justified, it would be mistaken to imagine that the difficulties are gnawing away at the roots of physics. They are the concern of only a tiny fraction of physicists, while many others consider them esoteric at best and perhaps totally irrelevant.
Don’t imagine either that the entire physics community has been on tenterhooks to see what the Large Hadron Collider at CERN in Geneva will come up with, or whether, now that it seems to have found the Higgs boson, the particle accelerator will open up a new chapter in fundamental physics that takes in such mysterious or speculative concepts as dark matter and supersymmetry (a hitherto unseen connection between different classes of particles).
Strings and the LHC are the usual media face of physics: what most non-physicists think physicists do. This sometimes frustrates other physicists intensely. “High-energy physics experiments are over-rated, and are not as significant as they were decades ago”, says one, based in the US. “Now it is tiny increments in knowledge, at excessive costs – yet these things dominate the science news.”
Given the jamboree that has surrounded the work at the LHC, especially after the award of the 2013 Nobel prize in physics to Peter Higgs (with François Englert, who also proposed the particle now known by Higgs’ name), it is tempting to dismiss this as sour grapes. But there’s more to it than the resentment of one group of researchers at seeing the limelight grabbed by another. For the perception that the centre of gravity of physics lies with fundamental particles and string theory reflects a deep misunderstanding about the whole nature of the discipline. The danger is that this misunderstanding might move beyond the general public and media and start to infect funders, policy-makers and educationalists.
The fact is that physics is not a quest for isolated explanations of this or that phenomenon (and string theory, for all its vaunted status as a Theory of Everything, is equally parochial in what it might ‘explain’). Physics attempts to discover how common principles apply to many different aspects of the physical world. It would be foolish to suppose that we know what all these principles are, but we certainly know some of them. In a recent article in Physics World, Peter Main and Charles Tracy from the Institute of Physics’ education section made a decent stab at compiling a list of what constitutes “physics thinking”. It included the notions of Reductionism, Causality, Universality, Mathematical Modelling, Conservation, Equilibrium, the idea that differences cause change, Dissipation and Irreversibility, and Symmetry and Broken Symmetry. There’s no space to explain all of these, but one might sum up many of them in the idea that things change for identifiable reasons; often those reasons are the same in different kinds of system; we can develop simplified maths-based descriptions of them; and when change occurs, some things (like total energy) stay the same before and after.
Many of these notions are older than is sometimes supposed. Particle physicists, for example, have been known to imply that the concept of symmetry-breaking – whereby a system with particular symmetry properties appears spontaneously from one with more symmetry – was devised in the 1950s and 60s to answer some problems in their field. The truth is that this principle was already inherent in the work of the Dutch scientist Johannes Diderik van der Waals in 1873. Van der Waals wasn’t thinking about particle physics, which didn’t even exist then; he was exploring the way that matter interconverts between liquid and gas states, in what is called a phase transition. Phase transitions and symmetry breaking have since proved to be fundamental to all areas of physics, ranging from the cosmological theory of the Big Bang to superconductivity. Looked at one way, the Higgs boson is the product of just another phase transition, and indeed some of the ideas found in Higgs’ theory were anticipated by earlier work on the low-temperature transition that leads to resistance-free electrical conductivity in some materials (called superconductivity).
Or take quantum theory, which began to acquire its modern form when in 1926 Erwin Schrödinger wrote down a ‘wavefunction’ to describe the behaviour of quantum particles. Schrödinger didn’t just pluck his equation from nowhere: he adapted it from the centuries-old discipline of wave mechanics, which describes what ordinary waves do.
This is not to say that physicists are always stealing old ideas without attribution. Quite the opposite: it is precisely because they were so thoroughly immersed in the traditions and ideas of classical physics, going back to Isaac Newton and Galileo, that the physicists of the early twentieth century such as Einstein, Max Planck and Niels Bohr were able to instigate the revolutionary new ideas of quantum theory and relativity. All the best contemporary physicists, such as Richard Feynman and the Soviet Lev Landau, have had a deep appreciation of the connections between old and new ideas. Feynman’s so-called path-integral formulation of quantum electrodynamics, which supplied a quantum theory of how light interacts with matter, drew on the eighteenth-century classical mechanics of Joseph Louis Lagrange. It is partly because they point out these connections that Feynman’s famous Lectures on Physics are so revered; the links are also to be found, in more forbidding Soviet style, in the equally influential textbooks by Landau and his colleague Evgeny Lifshitz.
The truly profound aspect of central concepts like those proposed by Main and Tracy is that they don’t recognize any distinctions of time and space. They apply at all scales: to collisions of atoms and bumper cars, to nuclear reactions and solar cells. It seems absurd to imagine that the burst of ultrafast cosmic expansion called inflation, thought to be responsible for the large-scale structure of the universe we see today, has any connection with the condensation of water on the windowpane – but it does. Equally, that condensation is likely to find analogies in the appearance of dense knots and jams in moving traffic. Looked at this way, what is traditionally called fundamental physics – theories of the subatomic nature of matter – is no more fundamental than is the physics of sand or sound. It merely applies the same concepts at smaller scales.
This, then, is one important message for physics education: don’t teach it as a series of subdisciplines with their own unique set of concepts. Or if you must parcel it up in this way, keep the connections at the forefront. It’s also a message for students: always consider how the subject you’re working on finds analogues elsewhere.
All this remains true even while – one might even say especially as – physics ventures into applied fields. It’s possible (honestly) to see something almost sublime in the way quantum theory describes the behaviour of electrons in solids such as the semiconductors of transistors. On one level it’s obvious that it should: quantum theory describes very small things, and electrons are very small. But the beauty is that, under the auspices of quantum rules, electrons can get marshalled into states that mirror those in quite different and more exotic systems. They can acquire ‘orbits’ like those in atoms, so that blobs of semiconductor can act as artificial atoms. They can get bunched into pairs or other groups that travel in unison, giving us superconductivity, itself analogous to the weird frictionless superfluid behaviour of liquid helium. One of the most interesting features of the atom-thick carbon sheets called graphene is not that they will provide new kinds of touch-screen (we have those already) but that their electrons, partly by virtue of being trapped in two dimensions, can collectively behave like particles called Dirac fermions, which have no mass and move at the speed of light. The electrons don’t actually do this – they just ‘look’ like particles that do. In such ways, graphene enables experiments that seem to come from the nether reaches of particle physics, all in a flake of pencil lead on a desktop.
As graphene promises to show, these exotic properties can feed back into real applications. Other electronic ‘quasiparticles’ called excitons (a pairing of an electron with a gap or ‘hole’ in a pervasive electron ‘sea’) are responsible for the light emission from polymers that is bringing flexible plastics to screens and display technology. In one recent example, an exotic form of quantum-mechanical behaviour called Bose-Einstein condensation, which has attracted Nobel prizes after being seen in clouds of electromagnetically trapped ultracold gas, has been achieved in the electronic quasiparticles of an easily handled plastic material at room temperature, making it possible that this once arcane phenomenon could be harnessed cheaply to make new kinds of laser and other light-based devices.
There is a clear corollary to all this for allocating research priorities in physics: you never know. However odd or recondite a phenomenon or the system required to produce it, you never know where else it might crop up and turn out to have uses. That of course is the cliché attached to the laser: the embodiment of a quirky idea of Einstein’s in 1917, it has come to be almost as central to information technology as the transistor.
Does this mean that physics, by virtue of its universality, can in fact have no priorities, but must let a thousand flowers bloom? Probably the truth is somewhere in between: it makes sense, in any field of science, to put some emphasis on areas that look particularly technologically promising or conceptually enriching, as well as curbing areas that seem to have run their course. But it would be a mistake to imagine that physics, any more than Darwinian evolution, has any direction – that somehow the objective is to work down from the largest scales towards the smaller and more ‘fundamental’.
Another reason to doubt the overly reductive approach is supplied by Michael Berry, a distinguished physicist at the University of Bristol whose influential work has ranged from classical optics and mechanics to quantum chaos. “There are different kinds of fundamentality”, says Berry. “As well as high-energy and cosmology, there are the asymptotic regimes of existing theories, where new phenomena emerge, or lurk as borderland phenomena between the theories.” Berry has pointed out that in an ‘asymptotic regime’ in which some parameter in a theory is shrunk to precisely zero (as opposed to being merely made very small), the outcomes of the theory can change discontinuously: you might find some entirely new, emergent behaviour.
As a result, these ‘singular limits’ can lead to new physics, making it not just unwise but impossible to try to derive the behaviour of a system at one level from that at a more ‘fundamental’ level. That’s a reason to be careful about Main and Tracy’s emphasis on reductionism. Some problems can be solved by breaking them down into simpler ones, but sometimes that will lose the very behaviour you’re interested in. “If you don’t think emergence is important too, you won't get far as a condensed matter physicist”, says physicist Richard Jones, Pro-Vice-Chancellor for Research and Innovation at the University of Sheffield.
It’s important to recognize too that the biggest mysteries, however alluring they seem, may not be the most pressing, nor indeed the most intellectually demanding or enriching. The search for dark matter is certainly exciting, well motivated, and worth pursuing. But at present it is only rather tenuously linked to the mainstream of ideas in physics – we have so few clues, either observationally or theoretically, about how to look or what we hope to find, that it is largely a matter of blind empiricism. It is usually wise not to spend too much of your time stumbling around in the dark.
With all this in mind, here are a few suggestions for where what we might call ‘small physics’ might usefully devote some of its energies in the coming years:
- quantum information and quantum optics: even if quantum computers aren’t going to be a universal game-changer any time soon, the implications of pursuing quantum theory as an information science are vast, ranging from new secure communications technologies to deeper insights into the principles that really underpin the quantum world.
- the physics of biology: this can mean many things, from understanding how the mechanics of cells determine their fate (stem cells sometimes select their eventual tissue type from how the individual cells are pulled and tugged) to the question of whether phase transitions underpin cancer, brain activity and even natural selection. This one needs handling with care: physicists are likely to go badly astray unless they talk to biologists.
- materials physics: from new strong materials to energy generation and conversion, it is essential to develop an understanding of how materials systems behave over a wide range of size scales (and that’s not necessarily a problem to tackle from the bottom up). Such knowhow is likely to be central to a scientific basis for sustainability.
- new optical technologies: you’ve probably heard about invisibility cloaks, and while some of those claims need to be taken with a pinch of salt, the general idea that light can be moulded, manipulated and directed by controlling the microstructure of materials (such as so-called photonic band-gap materials and metamaterials) is already leading to new possibilities in display technologies, telecommunications and computing.
- electronics: this one kind of goes without saying, perhaps, but the breadth and depth of the topic is phenomenal, going way beyond ways to make transistors ever smaller. There is a wealth of weird and wonderful behaviour in new and unusual materials, ranging from spintronics (electronics that uses the quantum spins of electrons), molecular and polymer electronics, and unusual electronic behaviour on the surfaces of insulators (check out “topological insulators”).
None of this is to deny the value of Big Physics: new accelerators, telescopes, satellites and particle detectors will surely continue to reveal profound insights into our universe. But they are only part of a bigger picture.
Most of all, it isn’t a matter of training physicists to be experts in any of these (or other) areas. Rather, they need to know how to adapt the powerful tools of physics to whatever problem is at hand. The common notion (or is it just in physics?) that a physicist can turn his or her hand to anything is a bit too complacent for comfort, but it is nonetheless true that a ‘physics way of thinking’ is a potential asset for any science.
________________________________________________________________
Is physics at risk of disappearing up its own foundations? To read some of the recent criticisms of work on string theory, which seeks a fundamental explanation of all known forces and particles, you might think so. After about three decades of work, the theory is no closer to a solution – or rather, the number of possible solutions has mushroomed astronomically, while none of them is testable and they all rest on a base of untried speculations.
But while scepticism about the prospects for this alleged Theory of Everything may be justified, it would be mistaken to imagine that the difficulties are gnawing away at the roots of physics. They are the concern of only a tiny fraction of physicists, while many others consider them esoteric at best and perhaps totally irrelevant.
Don’t imagine either that the entire physics community has been on tenterhooks to see what the Large Hadron Collider at CERN in Geneva will come up with, or whether, now that it seems to have found the Higgs boson, the particle accelerator will open up a new chapter in fundamental physics that takes in such mysterious or speculative concepts as dark matter and supersymmetry (a hitherto unseen connection between different classes of particles).
Strings and the LHC are the usual media face of physics: what most non-physicists think physicists do. This sometimes frustrates other physicists intensely. “High-energy physics experiments are over-rated, and are not as significant as they were decades ago”, says one, based in the US. “Now it is tiny increments in knowledge, at excessive costs – yet these things dominate the science news.”
Given the jamboree that has surrounded the work at the LHC, especially after the award of the 2013 Nobel prize in physics to Peter Higgs (with François Englert, who also proposed the particle now known by Higgs’ name), it is tempting to dismiss this as sour grapes. But there’s more to it than the resentment of one group of researchers at seeing the limelight grabbed by another. For the perception that the centre of gravity of physics lies with fundamental particles and string theory reflects a deep misunderstanding about the whole nature of the discipline. The danger is that this misunderstanding might move beyond the general public and media and start to infect funders, policy-makers and educationalists.
The fact is that physics is not a quest for isolated explanations of this or that phenomenon (and string theory, for all its vaunted status as a Theory of Everything, is equally parochial in what it might ‘explain’). Physics attempts to discover how common principles apply to many different aspects of the physical world. It would be foolish to suppose that we know what all these principles are, but we certainly know some of them. In a recent article in Physics World, Peter Main and Charles Tracy from the Institute of Physics’ education section made a decent stab at compiling a list of what constitutes “physics thinking”. It included the notions of Reductionism, Causality, Universality, Mathematical Modelling, Conservation, Equilibrium, the idea that differences cause change, Dissipation and Irreversibility, and Symmetry and Broken Symmetry. There’s no space to explain all of these, but one might sum up many of them in the idea that things change for identifiable reasons; often those reasons are the same in different kinds of system; we can develop simplified maths-based descriptions of them; and when change occurs, some things (like total energy) stay the same before and after.
Many of these notions are older than is sometimes supposed. Particle physicists, for example, have been known to imply that the concept of symmetry-breaking – whereby a system with particular symmetry properties appears spontaneously from one with more symmetry – was devised in the 1950s and 60s to answer some problems in their field. The truth is that this principle was already inherent in the work of the Dutch scientist Johannes Diderik van der Waals in 1873. Van der Waals wasn’t thinking about particle physics, which didn’t even exist then; he was exploring the way that matter interconverts between liquid and gas states, in what is called a phase transition. Phase transitions and symmetry breaking have since proved to be fundamental to all areas of physics, ranging from the cosmological theory of the Big Bang to superconductivity. Looked at one way, the Higgs boson is the product of just another phase transition, and indeed some of the ideas found in Higgs’ theory were anticipated by earlier work on the low-temperature transition that leads to resistance-free electrical conductivity in some materials (called superconductivity).
Or take quantum theory, which began to acquire its modern form when in 1926 Erwin Schrödinger wrote down a ‘wavefunction’ to describe the behaviour of quantum particles. Schrödinger didn’t just pluck his equation from nowhere: he adapted it from the centuries-old discipline of wave mechanics, which describes what ordinary waves do.
This is not to say that physicists are always stealing old ideas without attribution. Quite the opposite: it is precisely because they were so thoroughly immersed in the traditions and ideas of classical physics, going back to Isaac Newton and Galileo, that the physicists of the early twentieth century such as Einstein, Max Planck and Niels Bohr were able to instigate the revolutionary new ideas of quantum theory and relativity. All the best contemporary physicists, such as Richard Feynman and the Soviet Lev Landau, have had a deep appreciation of the connections between old and new ideas. Feynman’s so-called path-integral formulation of quantum electrodynamics, which supplied a quantum theory of how light interacts with matter, drew on the eighteenth-century classical mechanics of Joseph Louis Lagrange. It is partly because they point out these connections that Feynman’s famous Lectures on Physics are so revered; the links are also to be found, in more forbidding Soviet style, in the equally influential textbooks by Landau and his colleague Evgeny Lifshitz.
The truly profound aspect of central concepts like those proposed by Main and Tracy is that they don’t recognize any distinctions of time and space. They apply at all scales: to collisions of atoms and bumper cars, to nuclear reactions and solar cells. It seems absurd to imagine that the burst of ultrafast cosmic expansion called inflation, thought to be responsible for the large-scale structure of the universe we see today, has any connection with the condensation of water on the windowpane – but it does. Equally, that condensation is likely to find analogies in the appearance of dense knots and jams in moving traffic. Looked at this way, what is traditionally called fundamental physics – theories of the subatomic nature of matter – is no more fundamental than is the physics of sand or sound. It merely applies the same concepts at smaller scales.
This, then, is one important message for physics education: don’t teach it as a series of subdisciplines with their own unique set of concepts. Or if you must parcel it up in this way, keep the connections at the forefront. It’s also a message for students: always consider how the subject you’re working on finds analogues elsewhere.
All this remains true even while – one might even say especially as – physics ventures into applied fields. It’s possible (honestly) to see something almost sublime in the way quantum theory describes the behaviour of electrons in solids such as the semiconductors of transistors. On one level it’s obvious that it should: quantum theory describes very small things, and electrons are very small. But the beauty is that, under the auspices of quantum rules, electrons can get marshalled into states that mirror those in quite different and more exotic systems. They can acquire ‘orbits’ like those in atoms, so that blobs of semiconductor can act as artificial atoms. They can get bunched into pairs or other groups that travel in unison, giving us superconductivity, itself analogous to the weird frictionless superfluid behaviour of liquid helium. One of the most interesting features of the atom-thick carbon sheets called graphene is not that they will provide new kinds of touch-screen (we have those already) but that their electrons, partly by virtue of being trapped in two dimensions, can collectively behave like particles called Dirac fermions, which have no mass and move at the speed of light. The electrons don’t actually do this – they just ‘look’ like particles that do. In such ways, graphene enables experiments that seem to come from the nether reaches of particle physics, all in a flake of pencil lead on a desktop.
As graphene promises to show, these exotic properties can feed back into real applications. Other electronic ‘quasiparticles’ called excitons (a pairing of an electron with a gap or ‘hole’ in a pervasive electron ‘sea’) are responsible for the light emission from polymers that is bringing flexible plastics to screens and display technology. In one recent example, an exotic form of quantum-mechanical behaviour called Bose-Einstein condensation, which has attracted Nobel prizes after being seen in clouds of electromagnetically trapped ultracold gas, has been achieved in the electronic quasiparticles of an easily handled plastic material at room temperature, making it possible that this once arcane phenomenon could be harnessed cheaply to make new kinds of laser and other light-based devices.
There is a clear corollary to all this for allocating research priorities in physics: you never know. However odd or recondite a phenomenon or the system required to produce it, you never know where else it might crop up and turn out to have uses. That of course is the cliché attached to the laser: the embodiment of a quirky idea of Einstein’s in 1917, it has come to be almost as central to information technology as the transistor.
Does this mean that physics, by virtue of its universality, can in fact have no priorities, but must let a thousand flowers bloom? Probably the truth is somewhere in between: it makes sense, in any field of science, to put some emphasis on areas that look particularly technologically promising or conceptually enriching, as well as curbing areas that seem to have run their course. But it would be a mistake to imagine that physics, any more than Darwinian evolution, has any direction – that somehow the objective is to work down from the largest scales towards the smaller and more ‘fundamental’.
Another reason to doubt the overly reductive approach is supplied by Michael Berry, a distinguished physicist at the University of Bristol whose influential work has ranged from classical optics and mechanics to quantum chaos. “There are different kinds of fundamentality”, says Berry. “As well as high-energy and cosmology, there are the asymptotic regimes of existing theories, where new phenomena emerge, or lurk as borderland phenomena between the theories.” Berry has pointed out that in an ‘asymptotic regime’ where some parameter in a theory is shrunk to precisely zero (as opposed to being merely made very small), the outcomes of the theory can change discontinuously: you might find some entirely new, emergent behaviour. The passage from wave optics to ray optics as the wavelength shrinks to zero is a classic example: caustics – the bright lines of focused light rippling on a swimming-pool floor – live at that borderland.
As a result, these ‘singular limits’ can lead to new physics, making it not just unwise but impossible to try to derive the behaviour of a system at one level from that at a more ‘fundamental’ level. That’s a reason to be careful about Main and Tracy’s emphasis on reductionism. Some problems can be solved by breaking them down into simpler ones, but sometimes that will lose the very behaviour you’re interested in. “If you don’t think emergence is important too, you won't get far as a condensed matter physicist”, says physicist Richard Jones, Pro-Vice-Chancellor for Research and Innovation at the University of Sheffield.
It’s important to recognize too that the biggest mysteries, however alluring they seem, may not be the most pressing, nor indeed the most intellectually demanding or enriching. The search for dark matter is certainly exciting, well motivated, and worth pursuing. But at present it is only rather tenuously linked to the mainstream of ideas in physics – we have so few clues, either observationally or theoretically, about how to look or what we hope to find, that it is largely a matter of blind empiricism. It is usually wise not to spend too much of your time stumbling around in the dark.
With all this in mind, here are a few suggestions for where what we might call ‘small physics’ might usefully devote some of its energies in the coming years:
- quantum information and quantum optics: even if quantum computers aren’t going to be a universal game-changer any time soon, the implications of pursuing quantum theory as an information science are vast, ranging from new secure communications technologies to deeper insights into the principles that really underpin the quantum world.
- the physics of biology: this can mean many things, from understanding how the mechanics of cells determine their fate (stem cells sometimes select their eventual tissue type according to how they are pulled and tugged) to the question of whether phase transitions underpin cancer, brain activity and even natural selection. This one needs handling with care: physicists are likely to go badly astray unless they talk to biologists.
- materials physics: from new strong materials to energy generation and conversion, it is essential to develop an understanding of how materials systems behave over a wide range of size scales (and that’s not necessarily a problem to tackle from the bottom up). Such knowhow is likely to be central to a scientific basis for sustainability.
- new optical technologies: you’ve probably heard about invisibility cloaks, and while some of those claims need to be taken with a pinch of salt, the general idea that light can be moulded, manipulated and directed by controlling the microstructure of materials (such as so-called photonic band-gap materials and metamaterials) is already leading to new possibilities in display technologies, telecommunications and computing.
- electronics: this one kind of goes without saying, perhaps, but the breadth and depth of the topic are phenomenal, going way beyond ways to make transistors ever smaller. There is a wealth of weird and wonderful behaviour in new and unusual materials, ranging from spintronics (electronics that uses the quantum spins of electrons) to molecular and polymer electronics and the unusual electronic behaviour on the surfaces of insulators (check out “topological insulators”).
None of this is to deny the value of Big Physics: new accelerators, telescopes, satellites and particle detectors will surely continue to reveal profound insights into our universe. But they are only part of a bigger picture.
Most of all, it isn’t a matter of training physicists to be experts in any of these (or other) areas. Rather, they need to know how to adapt the powerful tools of physics to whatever problem is at hand. The common notion (or is it just in physics?) that a physicist can turn his or her hand to anything is a bit too complacent for comfort, but it is nonetheless true that a ‘physics way of thinking’ is a potential asset for any science.
Monday, January 13, 2014
A prize for Max von Laue
In my book Serving the Reich, I make some remarks about the potential pitfalls of naming institutions, prizes and so forth after “great” scientists, and I say that, while my three main subjects Max Planck, Werner Heisenberg and Peter Debye are commemorated in this way, Max von Laue is not (“to my knowledge”). This seemed ironic, given that during the Nazi era Laue much more obviously and courageously resisted the regime than did these others.
Crystallographer Udo Heinemann of the Max Delbrück Centre for Molecular Medicine in Berlin has pointed out to me that a Max von Laue prize does in fact exist. It is awarded by the German Crystallographic Society (Deutsche Gesellschaft für Kristallographie, DGK) annually to junior scientists for “outstanding work in the field of crystallography in the broadest sense”, and is worth 1500 euros. I have discussed elsewhere the perils of this “name game”, but given that everyone plays it, I am pleased to see that Laue has not been overlooked. It seems all the more fitting to have this pointed out during the International Year of Crystallography.
Thursday, January 09, 2014
The cult of the instrument
I have a piece in Aeon about instruments in science. Here’s how it looked at the outset.
_____________________________________________________________
Whenever I visit scientists to discuss their research, there always comes a moment when they say, with pride they can barely conceal, “Do you want a tour of the lab?” It is invariably slightly touching – like Willy Wonka dying to show off his factory. I’m always glad to accept, knowing what lies in store: shelves bright with bottles of coloured liquid and powders, webs of gleaming glass tubing, slabs of perforated steel holding lasers and lenses, cryogenic chambers like ornate bathyspheres whose quartz windows protect slivers of material about to be raked by electron beams.
It’s rarely less than impressive. Even if the kit is off-the-shelf, it will doubtless be wired into a makeshift salmagundi of wires, tubes, cladding, computer-controlled valves and rotors and components with more mysterious functions. Much of the gear, however, is likely to be home-made, custom-built for the research at hand. The typical lab set-up is, among other things, a masterpiece of impromptu engineering – you’d need degrees in electronics and mechanics just to put it all together, never mind how you make sense of the graphs and numbers it produces.
All this usually stays behind the scenes in science. Headlines announcing “Scientists have found…” rarely bother to tell you how those discoveries were made. And would you care? The instrumentation of science is so highly specialized that it must often be accepted as a kind of occult machinery for producing knowledge. We figure the scientists must know how it all works.
It makes sense in a way that histories of science tend to focus on the ideas and not the methods – surely what matters most is what was discovered about the workings of the world? But most historians of science today recognize that the relationship of scientists to their instruments is an essential part of the story. It is not simply that the science is dependent on the devices; rather, the devices determine what is known. You explore the things that you have the means to explore, and you plan your questions accordingly. That’s why, when a new instrument comes along – the telescope and the microscope are the most thoroughly investigated examples, but this applies as much today as it did in the seventeenth century – entirely new fields of science can be opened up. Less obviously, such developments demand a fresh negotiation between the scientists and their machines, and it’s not fanciful to see there some of the same characteristics as are found in human relationships. Can you be trusted? What are you trying to tell me? You’ve changed my life! Look, isn’t she beautiful? I’m bored with you, you don’t tell me anything new any more. Sorry, I’m swapping you for a newer model.
That’s why it is possible to speak of interactions between scientists and their instruments that are healthy or dysfunctional. How do we tell one from the other?
The telescope and microscope were celebrated even by their first users as examples of the value of enhancing the powers of human perception. But the most effective, not to mention elegant, scientific instruments serve also as a kind of prosthesis for the mind: they emerge as an extension of the experimenter’s thinking. That is exemplified in the work of the New Zealand physicist Ernest Rutherford, perhaps the finest experimental scientist of the twentieth century. Rutherford famously preferred the sealing-wax-and-string approach to science: it was at a humble benchtop with cheap, improvised and homespun equipment that he discovered the structure of the atom and then split it. This meant that Rutherford would devise his apparatus to tell him precisely what he wanted to know, rather than being limited by someone else’s view of what one needed to know. His experiments thus emerged organically from his ideas: they could almost be seen as theories constructed out of glass and metal foil.
Ernest Rutherford’s working space in the Cavendish Laboratory, Cambridge, in the 1920s.
In one of the finest instances, at Manchester University in 1908 Rutherford and his coworkers figured out that the alpha particles of radioactive decay are the nuclei of helium atoms. If that’s so, then one needs to collect the particles and see if they behave like helium. Rutherford ordered from his glassblower Otto Baumbach a glass capillary tube with extraordinarily thin walls, so that alpha particles emitted from radium could pass right through. Once they had accumulated in an outer chamber, Rutherford connected it up to become a gas-discharge tube, revealing the helium from the fingerprint wavelength of its glow. It was an exceedingly rare example of a piece of apparatus that answers a well defined question – are alpha particles helium? – with a simple yes/no answer, almost literally by whether or not a light switches on.
A more recent example of an instrument embodying the thought behind it is the scanning tunnelling microscope, invented by the late Heinrich Rohrer and Gerd Binnig at IBM’s Zurich research lab in the 1980s. They knew that electrons within the surface of an electrically conducting sample should be able to cross a tiny gap to reach another electrode held just above the surface, thanks to a quantum-mechanical effect called tunnelling. Because tunnelling is acutely sensitive to the width of the gap, a needle-like metal tip moving across the sample, just out of contact, could trace out the sample’s topography. If the movement was fine enough, the map might even show individual atoms and molecules. And so it did.
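Just how acute that sensitivity is can be seen from a rough textbook estimate (my own gloss here, not part of Rohrer and Binnig’s account): the tunnelling current falls off exponentially with the width d of the gap,

I \propto e^{-2\kappa d}, \qquad \kappa = \frac{\sqrt{2 m_e \phi}}{\hbar},

where \phi is the work function of the surface and m_e the electron mass. For a typical metal \phi is around 4-5 eV, making \kappa roughly one inverse ångström – so the current drops by about a factor of ten for every extra ångström of gap. That is what lets the tip ‘feel’ individual atoms.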
A ring of iron atoms on the surface of copper, as shown by the scanning tunnelling microscope. The ripples on the surface are electron waves. Image: IBM Almaden Research Center.
Between the basic idea and a working device, however, lay an incredible amount of practical expertise – of sheer craft – allied to rigorous thought. Against all expectation (they were often told the instrument “should not work” in principle), Rohrer and Binnig got it going, invented perhaps the central tool of nanotechnology, and won a Nobel prize in 1986 for their efforts.
So that’s when it goes right. What about when it doesn’t?
Scientific instruments have always been devices of power: those who possess the best can find out more than the others. Galileo recognized this: he conducted a cordial correspondence with Johannes Kepler in Prague, but when Kepler requested the loan of one of Galileo’s telescopes the Italian found excuses, knowing that with one of these instruments Kepler would be an even more serious rival. Instruments, Galileo already knew, confer authority.
But now instruments – newer, bigger, better – have become symbols of prestige as never before. I have several times been invited to admire the most state-of-the-art device in a laboratory purely for its own sake, as though I am being shown a Lamborghini. Historian of medical technology Stuart Blume of the University of Amsterdam has argued that, as science has started to operate according to the rules of a quasi-market, the latest equipment serves as a token of institutional might that enhances one’s competitive position in the marketplace. When I spoke to several chemists recently about their use of second-hand equipment, often acquired from the scientific equivalent of eBay, they all asked to remain anonymous, as though this would mark them out as second-rate scientists.
One of the dysfunctional consequences of this sort of relationship with an instrument is that the machine becomes its own justification, its own measure of worth – a kind of totem rather than a means to an end. A result is then “important” not because of what it tells us but because of how it was obtained. The Hubble Space Telescope is (despite its initial myopia) one of the most glorious instruments ever made, a genuinely new window on the universe. But when it first began to send back images of the cosmos in the mid 1990s, Nature would regularly receive submissions reporting the first “Hubble image” of this or that astrophysical object. The authors would be bemused and affronted when told that what the journal wanted was not the latest pretty picture, but some insight into the process it was observing – a matter that required rather more thought and research.
This kind of instrument-worship is, however, at least relatively harmless in the long run. More problematic is the notion of instrument as “knowledge machine”, an instrument that will churn out new understanding as long as you keep cranking the handle. The European particle-physics centre CERN has flirted with this image for the Large Hadron Collider, which the former director-general Robert Aymar called a “discovery machine.” This idea harks back (usually without knowing it) to a tradition begun by Francis Bacon in his Novum Organum (1620). Here Bacon drew on Aristotle’s notion of an organon, a mechanism for logical deduction. Bacon’s “new organon” was a new method of analysing facts, a systematic procedure (what we would now call an algorithm) for distilling observations of the world into underlying causes and mechanisms. It was a gigantic logic machine, accepting facts at one end and ejecting theorems at the other.
In the event, Bacon’s “organon” was a system so complex and intricate that he never even finished describing it, let alone put it into practice. Even if he had, it would have been to no avail, because it is now generally agreed among philosophers and historians of science that this is not how knowledge comes about. Piling up facts in a Baconian manner while postponing indefinitely the framing of hypotheses to explain them – the preference of the early experimental scientists, like those who formed the Royal Society – will get you nowhere. (It’s precisely because they couldn’t in fact restrain their impulse to interpret that men like Isaac Newton and Robert Boyle made any progress.) Unless you begin with some hypothesis, you don’t know which facts you are looking for, and you’re liable to end up with a welter of data, mostly irrelevant and certainly incomprehensible.
This seems obvious, and most scientists would agree. But that doesn’t mean the Baconian “discovery machine” has vanished. As it happens, the LHC doesn’t have this defect after all: the reams of data it has collected are being funnelled towards a very few extremely well defined (even over-refined) hypotheses, in particular the existence of the Higgs particle. But the Baconian impulse is alive and well elsewhere, driven by the allure of “knowledge machines”. The ability to sequence genomes quickly and cheaply will undoubtedly prove valuable for medicine and fundamental genetics, but these experimental techniques have already far outstripped not only our understanding of how genomes operate but our ability to formulate questions about that. As a result, some gene-sequencing projects seem conspicuously to lack a suite of ideas to test. The hope seems to be that, if you have enough data, understanding will somehow fall out of the bottom of the pile. No wonder, then, that biologist Robert Weinberg of the Massachusetts Institute of Technology has said, “the dominant position of hypothesis-driven research is under threat.”
And not just in genomics. The United States and Europe have recently announced two immense projects, costing hundreds of millions of dollars, to use the latest imaging technologies to map out the human brain, tracing out every last one of the billions of neural connections. Some neuroscientists are drooling at the thought of all that data. “Think about it,” said one. “The human brain produces in 30 seconds as much data as the Hubble Space Telescope has produced in its lifetime.”
If, however, one wanted to know how cities function, creating a map of every last brick and kerb would be an odd way to go about it. Quite how these brain projects will turn all their data into understanding remains a mystery. One researcher in the European project, simply called the Human Brain Project, inadvertently revealed the paucity of any theoretical framework for navigating this information glut: “It is a chicken and egg situation. Once we know how the brain works, we'll know how to look at the data.” The fact that the Human Brain Project is not quite that clueless hardly mitigates the enormity of this flippant statement. Science has never worked by shooting first and asking questions later, and it never will.
Biology, in which the profusion of evolutionary contingencies makes it particularly hard to formulate broad hypotheses, has long felt the danger of a Baconian retreat to pure data-gathering, substituting instruments for thinking. Austrian biochemist Erwin Chargaff, whose work helped elucidate how DNA stores genetic information, commented on this tendency as early as 1977:
“Now I go through a laboratory… and there they all sit before the same high speed centrifuges or scintillation counters, producing the same superposable graphs. There has been very little room left for the all important play of scientific imagination.”
Thanks to this, Chargaff said, “a pall of monotony has descended on what used to be the liveliest and most attractive of all scientific professions.” Like Chargaff, the pioneer of molecular biology Walter Gilbert saw in this reduction of biology to a set of standardized instrumental procedures repeated ad nauseam an encroachment of corporate strategies into the business of science. It was becoming an industrial process, manufacturing data on the production line: data produced, like consumer goods, because we have the instrumental means to do so, not because anyone knows what to do with it all. Nobel laureate biochemist Otto Loewi saw this happening in the life sciences even in 1954:
“Sometimes one has the impression that in contrast with former times, when one searched for methods in order to solve a problem, frequently nowadays workers look for problems with which they can exploit some special technique.”
High-energy physics now works on a similar industrial scale, with big machines at the centre. It doesn’t suffer the same lack of hypotheses as areas of biology, but arguably it can face the opposite problem: a consensus around a single idea, into which legions of workers burrow single-mindedly. Donald Glaser, the inventor of the bubble chamber, saw this happening in the immediate postwar period, once the Manhattan Project had provided the template:
“I knew that large accelerators were going to be built and they were going to make gobs of strange particles. But I didn’t want to join an army of people working at big machines.”
For Glaser the machines were taking over, and only by getting out of it did he devise his Nobel-prizewinning technique.
The challenge for the scientist, then, particularly in the era of Big Science, is to keep the instrument in its place. The best scientific kit comes from thinking about how to solve a problem. But once they become a part of the standard repertoire, or once they acquire a lumbering momentum of their own, instruments might not assist thinking but start to constrain it. As historians of science Albert van Helden and Thomas Hankins have said, “Because instruments determine what can be done, they also determine to some extent what can be thought.”
_____________________________________________________________
Wednesday, January 08, 2014
A splash of colour
More supermarket science for the rather sweet lifestyle magazine The Simple Things. This time it’s a little discourse on colour. Just in case you should happen to pick this up at the checkout and wonder about the first paragraph, this is, for the record, what the piece looked like at the outset.
__________________________________________________________________
Every culture has been entranced by rainbows. The Babylonians kept records of the most spectacular ones, and in Judaeo-Christian tradition the rainbow symbolises the covenant between God and the world. Australian Aborigines honour the Rainbow Serpent; for the Vikings the coloured arch was a bridge to Asgard. All this reflects astonishment at a vision in the sky that seems to be made of pure colour.
It was suspected for a long time that the rainbow holds the key to what colour itself is. Islamic philosophers in the early Middle Ages knew that you could make a kind of artificial rainbow by passing sunlight through glass or water to produce a spectrum, with its sequence of bright colours from red and yellow to blue and violet. The connection was first fully explained by Isaac Newton in the seventeenth century, who showed that “white” sunlight actually contained all the colours of the spectrum and that a glass prism could tease them apart. He said that rainbows are made when water droplets in the atmosphere act like little prisms.
So for Newton, colour was all about light, which he imagined as a stream of tiny particles that strike our eye and cause vibrations of its nerves. Vibrations of different “bigness”, he said, create sensations of different colours. That’s not so different from the modern view, although we now regard light as a wave, not a particle. Little protein molecules in our retina absorb light waves of different wavelengths, triggering signals along the optic nerve that our brain interprets as colours. The longer the wavelength, the further towards the red end of the spectrum the colour is.
But is colour really so simple? The odd thing about Newton’s theory is that it implied that, if you mix all the colours of the rainbow, you should get white, whereas painters knew very well that this just makes a murky brown. What’s more, it was well known that a colour can look different in different light (at dusk, say), or depending on what other colours are next to it.
The puzzle about mixing was solved in the nineteenth century, when the Scottish scientist James Clerk Maxwell showed that mixing light is not like mixing paint. Pigments and dyes are coloured because they absorb some parts of the spectrum – the colour we see is what’s left, which is reflected to our eyes. So if you mix them, you absorb more and more colours until there’s virtually none left, and the mixture looks black. But if you mix coloured light, you’re adding rather than taking away. As you can see from looking at television pixels close up, red, blue and green light are enough in combination to look white from far enough away.
Even then, colour – like taste, smell, and music – is ultimately something made in the mind. That’s why colours that are “the same” according to their wavelengths of light can look quite different depending on what’s around them. As the philosopher and writer Johann Wolfgang von Goethe stressed in the early nineteenth century, colour is partly a psychological thing too.
What’s more, there are lots of ways to produce it. Most of the colour we see in nature is made by light-absorbing pigments. Chlorophyll molecules in grass and leaves, for example, absorb red and blue light, reflecting the yellow and green. But the blue of the sky comes from the way light bounces off molecules in the air: the blue light is scattered most strongly, and so seems to come from all over the sky. And some of nature’s most wonderful colour displays are produced in a similar way – not by absorbing light but by scattering it.
Take the blue Morpho butterfly, which seems positively to glow in South American forests as if it is lit up, so that it can be seen from a quarter of a mile away. Its wing scales are covered with microscopic bristles of cuticle-like material, each bearing a stack of shelf-like corrugations. Light waves bouncing off this stack interfere with one another so that some colours disappear and others are enhanced. These interference effects from tiny stacks or layers of material also produce the bright hues of the peacock’s tail and other bird plumage, and the iridescent shells of beetles. The colours are iridescent because the precise wavelength of light picked out by the interference depends on the angle you’re viewing from.
You get a similar bright spectrum of “interference colour” when light is reflected from the tiny dimples in CDs. In fact, technologists are now borrowing such colour-making tricks from nature to control light for fibre-optic telecommunications or to make iridescent paints. We are learning that there are many ways to make colour – and many ways to enjoy it.
__________________________________________________________________
Friday, January 03, 2014
Chemistry with muons
This is my Crucible column for the January issue of Chemistry World.
_______________________________________________________________
The periodic table seems constantly on the verge of expansion. There are of course new superheavy elements being added, literally atom by atom, to its nether reaches by the accelerator-driven synthesis of new nuclei. There’s also talk of systematic organization of new pseudo-atomic building blocks, whether these are polyatomic ‘superatoms’ [1] or nanoparticles assigned a particular ‘valence’ via DNA-based linkers [2]. But one could be forgiven for assuming that the main body of the table that adorns all chemistry lecture theatres will remain largely unchanged, give or take a few arguments over where to put hydrogen.
Yet even that can’t be taken for granted. A preprint [3] by quantum chemists Mohammad Goli and Shant Shahbazian at Shahid Beheshti University in Iran posits two new light elements – although these should formally be considered isotopes. They are muonium (Mu), in which an electron orbits a positively charged muon (μ+), and muonic helium (Heμ), in which an electron orbits a ‘nucleus’ consisting of an alpha particle and a negative muon – the latter in a very tight orbit close to the true nucleus.
Both of these ‘atoms’ can be considered analogues of hydrogen, with a single electron orbiting a nucleus of charge +1. They have, however, quite different masses. Since the muon – a lepton that is a ‘heavy’ cousin of the electron (or, in its positive form, of the electron’s antiparticle, the positron) – has a mass of 0.11 amu, muonium has about a tenth the mass of 1H, while muonic helium has a mass of 4.11 amu.
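The arithmetic behind those figures is just a sum of the constituent masses – a back-of-the-envelope check of my own, neglecting the tiny binding energies:

m(\mathrm{Mu}) = m_{\mu^+} + m_e \approx 0.11 + 0.0005 \approx 0.11\ \mathrm{amu}
m(\mathrm{He\mu}) = m_\alpha + m_{\mu^-} + m_e \approx 4.00 + 0.11 + 0.0005 \approx 4.11\ \mathrm{amu}

With ordinary 1H weighing in at about 1.008 amu, muonium really is roughly a tenth of its mass, while muonic helium sits a little above ordinary helium-4.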
They have both been made in particle accelerators via high-energy collisions that generate muons, which can then be captured by helium or can themselves capture an electron. Some of these facilities, such as the TRIUMF accelerator in Vancouver, can generate beams of muons which can be thermalized by collisions with a gas, reducing the particle energies sufficiently to make muonic atoms capable of undergoing chemical reactions. True, the muons last for only around 2.2×10**-6 seconds, but that’s a lifetime, so to speak, compared with some superheavy artificial elements. Indeed, their chemistry has been explored already [4]: their reaction rates with molecular hydrogen not only confirm their hydrogen-like behaviour but show isotope effects that are consistent with quantum-chemical theory.
So undoubtedly Mu and Heμ have a chemistry. It seems only reasonable, then, to find a place for them in the periodic table. Indeed, Dick Zare of Stanford University, who probably knows more about the classic H+H2 reaction than anyone else, is said to have once commented that if muonium were listed in the table it would be much better known.
The question, however, is whether these exotic atoms truly behave like other atoms when they form molecules. Do they still look basically hydrogen-like in such a situation, despite the fact that, for example, Mu is so light? After all, conventional quantum-chemical methods rely on the Born-Oppenheimer approximation, predicated on the very different masses of electrons and nuclei, to separate out the electronic and nuclear degrees of freedom. Might the muons perhaps ‘leak’ into other atoms, compromising their own atom-like identity? To explore these questions, Goli and Shahbazian have carried out calculations to look at the electronic configurations of Mu and Heμ compounds using the Quantum Theory of Atoms In Molecules (QTAIM) formalism [5], which classifies chemical bonding according to the topology of the electron density distribution. A recent extension of this theory by the same two authors treats the nuclei as well as the electrons as quantum waves, and so is well placed to relax the Born-Oppenheimer approximation [6].
Goli and Shahbazian have calculated the electronic structures for all the various diatomic permutations of Mu and Heμ with the three conventional isotopes of hydrogen. They find that in all cases the muon-containing species are contained within an ‘atomic basin’ containing only a single positively charged particle – that is, they look like real nuclei, and don’t contaminate the other atoms in the union with any ‘sprinkling of muon’. What’s more, Mu and Heμ fit within the trend observed for heavy hydrogen, whereby the atom’s electronegativity increases as its mass increases. This is particularly the case for Mu-H molecules, which are decidedly polar: Muδ+-Hδ-. That in itself forces the issue of whether Mu is really like light hydrogen or needs its own slot in the periodic table: Goli and Shahbazian raise the latter as an option.
The zoo of fundamental particles might provide yet more opportunities for making unusual atoms. Goli and Shahbazian suggest as candidate constituents the positive and negative pions, which are two-quark mesons rather than leptons. But that will stretch experimentalists to the limit: their mean lifetime is just 26 nanoseconds. Still more exotic would be entire nuclei made of antimatter or containing strange quarks (‘strange matter’) [7]. At any rate, it seems clear that there are more things in heaven and earth than are dreamed of in your periodic table.
1. A. W. Castleman Jr & S. N. Khanna, J. Phys. Chem. C 113, 2662-2675 (2009).
2. R. J. Macfarlane et al., Angew. Chem. Int. Ed. 52, 5688-5698 (2013).
3. M. Goli & Sh. Shahbazian, preprint http://www.arxiv.org/abs/1311.6431 (2013).
4. D. G. Fleming et al., J. Chem. Phys. 135, 184310 (2011).
5. R. F. W. Bader, Atoms in Molecules: A Quantum Theory. Oxford University Press, 1990.
6. M. Goli & Sh. Shahbazian, Theor. Chem. Acc. 129, 235-245 (2011).
7. STAR collaboration, Science 328, 58-62 (2010).
_______________________________________________________________
The periodic table seems constantly on the verge of expansion. There are of course new superheavy elements being added, literally atom by atom, to its nether reaches by the accelerator-driven synthesis of new nuclei. There’s also talk of systematic organization of new pseudo-atomic building blocks, whether these are polyatomic ‘superatoms’ [1] or nanoparticles assigned a particular ‘valence’ via DNA-based linkers [2]. But one could be forgiven for assuming that the main body of the table that adorns all chemistry lecture theatres will remain largely unchanged, give or take a few arguments over where to put hydrogen.
Yet even that can’t be taken for granted. A preprint [3] by quantum chemists Mohammad Goli and Shant Shahbazian at Shahid Beheshti University in Iran posits two new light elements – although these should formally be considered isotopes. They are muonium (Mu), in which an electron orbits a positively charged muon (μ+), and muonic helium (Heμ), in which an electron orbits a ‘nucleus’ consisting of an alpha particle and a negative muon – the latter in a very tight orbit close to the true nucleus.
Both of these ‘atoms’ can be considered analogues of hydrogen, with a single electron orbiting a nucleus of charge +1. They have, however, quite different masses. Since the muon – a lepton, being a ‘heavy’ cousin of the electron (or of its antiparticle the positron) – has a mass of 0.11 amu, muonium has about a tenth the mass of 1H, while muonic helium has a mass of 4.11 amu.
They have both been made in particle accelerators via high-energy collisions that generate muons, which can then be captured by helium or can themselves capture an electron. Some of these facilities, such as the TRIUMF accelerator in Vancouver, can generate beams of muons which can be thermalized by collisions with a gas, reducing the particle energies sufficiently to make muonic atoms capable of undergoing chemical reactions. True, the muons last for only around 2.2×10**-6 seconds, but that’s a lifetime, so to speak, compared with some superheavy artificial elements. Indeed, their chemistry has been explored already [4]: their reaction rates with molecular hydrogen not only confirm their hydrogen-like behaviour but show isotope effects that are consistent with quantum-chemical theory.
So undoubtedly Mu and Heμ have a chemistry. It seems only reasonable, then, to find a place for them in the periodic table. Indeed, Dick Zare of Stanford University, who probably known more about the classic H+H2 reaction than anyone else, is said to have once commented that if muonium was listed in the table then it would be much better known.
The question, however, is whether these exotic atoms truly behave like other atoms when they form molecules. Do they still look basically hydrogen-like in such a situation, despite the fact that, for example, Mu is so light? After all, conventional quantum-chemical methods rely on the Born-Oppenheimer approximation, predicated on the very different masses of electrons and nuclei, to separate out the electronic and nuclear degrees of freedom. Might the muons perhaps ‘leak’ into other atoms, compromising their own atom-like identity? To explore these questions, Goli and Shahbazian have carried out calculations to look at the electronic configurations of Mu and Heμ compounds using the Quantum Theory of Atoms In Molecules (QTAIM) formalism [5], which classifies chemical bonding according to the topology of the electron density distribution. A recent extension of this theory by the same two authors treats the nuclei as well as the electrons as quantum waves, and so is well placed to relax the Born-Oppenheimer approximation [6].
Goli and Shahbazian have calculated the electronic structures of all the diatomic combinations of Mu and Heμ with the three conventional isotopes of hydrogen. They find that in every case the muon-containing species sits within an ‘atomic basin’ housing only a single positively charged particle – that is, it looks like a real nucleus, and doesn’t contaminate the other atom in the molecule with any ‘sprinkling of muon’. What’s more, Mu and Heμ fit the trend seen for the heavy isotopes of hydrogen, whereby an atom’s electronegativity increases with its mass. This is particularly evident for Mu-H molecules, which are decidedly polar: Muδ+-Hδ-. That in itself forces the issue of whether Mu is really just a light hydrogen or needs its own slot in the periodic table: Goli and Shahbazian raise the latter as an option.
The zoo of fundamental particles might provide yet more opportunities for making unusual atoms. Goli and Shahbazian suggest as candidate constituents the positive and negative pions, which are quark–antiquark mesons rather than leptons. But that will stretch experimentalists to the limit: the charged pions’ mean lifetime is just 26 nanoseconds. Still more exotic would be entire nuclei made of antimatter or containing strange quarks (‘strange matter’) [7]. At any rate, it seems clear that there are more things in heaven and earth than are dreamed of in your periodic table.
1. A. W. Castleman Jr & S. N. Khanna, J. Phys. Chem. C 113, 2662-2675 (2009).
2. R. J. Macfarlane et al., Angew. Chem. Int. Ed. 52, 5688-5698 (2013).
3. M. Goli & Sh. Shahbazian, preprint http://www.arxiv.org/abs/1311.6431 (2013).
4. D. G. Fleming et al., J. Chem. Phys. 135, 184310 (2011).
5. R. F. W. Bader, Atoms in Molecules: A Quantum Theory. Oxford University Press, 1990.
6. M. Goli & Sh. Shahbazian, Theor. Chem. Acc. 129, 235-245 (2011).
7. STAR collaboration, Science 328, 58-62 (2010).
Chips in space
Here’s the initial version of my latest piece for the Under the Radar column of BBC Future.
____________________________________________________________________
If humans ever voyage to Jupiter, the journey is sure to be arduous and full of danger. But there’s a consolation: chips cooked at the planet’s surface will be crispier.
Perhaps that’s too glib a conclusion to draw from recent work investigating the effect of high gravity on chip frying (that’s French fries or frites outside the UK), not least because gaseous Jupiter of course doesn’t really have a surface and no one plans to go there. But the gastronomic preferences of future astronauts are the genuine motivation for experiments conducted by chemists John Lioumbas and Thodoris Karapantsios of the Aristotle University of Thessaloniki in Greece, and reported in the journal Food Research International. That’s why their work is supported by the European Space Agency.
You see, astronauts sometimes lament the drabness of their pre-prepared space meals, and have even expressed cravings for chips. Some thought has already gone into methods of food preparation in space (if you don’t want potato peelings floating around, it has to be done in a hands-free self-contained system), as well as developing novel sources of fresh food, such as the culturing of artificial meat. But aside from these logistics, there’s also the problem that in zero gravity some of the basic physics of cooking is different.
The wish for decent grub in space is understandable, but also highlights one of the conundrums of human spaceflight. The quest to send humans into space is generally presented in heroic terms as a bold adventure that might bring benefits for all humanity. But once you consider what it really entails, you’re confronted with some pretty prosaic, even bathetic, questions of detail. How will they cope with the boredom and confinement? Will the toilet facilities work? (To judge from the International Space Station, not necessarily.) And will a good fry-up raise their morale? Such questions sit uneasily with the “Columbus” narrative, and arguably might force us to ask whether space is such a good place to put humans anyway.
But back to the deep-fryer. You might wonder why, if we’re talking about chips in space, Lioumbas and Karapantsios are cooking in increased gravity rather than zero gravity. The answer is that they want to map out the whole landscape of how gravity influences the cooking process, to get some idea of the overall trends and patterns as the tug of gravity changes. They are now working on the same questions in microgravity experiments – gravity much weaker than that of the Earth.
For frying and boiling, the key issue is convection. The rate at which foods heat up in water or oil is affected by the way heat circulates in the liquid. This depends on the convection currents created by buoyancy, as hot and therefore less dense liquid rises from the bottom of the pan. This convection won’t happen in zero gravity, because a difference in density doesn’t produce a difference in weight if everything is weightless anyway: there’s no buoyancy. Conversely, in increased gravity convective effects should be more pronounced.
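The scaling at work is simple – a back-of-envelope statement of my own, not a formula from the paper. The buoyant force per unit volume on a parcel of hot, less dense oil is

$$ f_{\text{buoyancy}} = \Delta\rho \, g $$

where Δρ is the density deficit of the heated oil and g the effective gravitational acceleration: the force vanishes entirely at g = 0, and grows in direct proportion to g when the fryer is spun up.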
The researchers wanted to know how these differences affect the way chips fry. While achieving low gravity is difficult unless you go into space (or want to brave the free-falling ‘Vomit Comet’ aircraft used by space agencies, which is enough to put anyone off their chips), artificially increasing the force of gravity is relatively easy. You simply attach the apparatus to the arm of a rapidly spinning centrifuge, the rotation of which produces a centrifugal force that mimics gravity.
So that’s what Lioumbas and Karapantsios did. They fixed a deep-fat fryer containing potato sticks in half a litre of hot oil onto the end of the 8m-long arms of the Large Diameter Centrifuge at the European Space Research and Technology Centre in Noordwijk, the Netherlands. This device could generate the equivalent of a gravitational force up to nine times that at the Earth’s surface (that is, 9g).
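For a sense of scale – my own arithmetic, not figures from the paper – the spin rate needed to reach a given effective gravity on an 8 m arm follows from the centripetal acceleration a = ω²r:

```python
# Illustrative spin rates for an 8 m centrifuge arm (my own back-of-envelope
# numbers, not figures from the paper). Centripetal acceleration a = omega^2 * r.
import math

g, r = 9.81, 8.0  # m/s^2 and arm length in metres
for n in (1, 3, 9):
    omega = math.sqrt(n * g / r)          # angular speed, rad/s
    rpm = omega * 60.0 / (2.0 * math.pi)  # revolutions per minute
    print(f"{n}g needs about {rpm:.0f} rpm on an 8 m arm")
```

In other words, a few tens of revolutions per minute on an arm that long is enough to reach 9g.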
The researchers monitored the temperature just below the surface of the potatoes, where the crust of the chip forms, and also examined the thickness and profile of the crust under the microscope. Convection currents are created both within the pan as a whole and from the rising of bubbles that grow on the potato surface as the oil begins to boil. As the g-force rises, these bubbles become smaller and more numerous and they rise faster. However, when it reaches 3g, the bubbles are so small that they get stuck to the potato by capillary forces, and so further increases in gravity make little difference.
What’s more, while the crust steadily thickens up to 3g, still stronger gravity has less of an effect on the thickness. Instead, Lioumbas and Karapantsios find that the crust then starts to separate from the softer core of the potato, as superheated steam from the moist potato flesh blows a bubble between the two. But who wishes to eat chips with bubbles in?
So the researchers conclude that if you want chips (or anything else) to deep-fry faster, making them crispy in a shorter time, there’s nothing to be gained, and in fact some disadvantages, from centrifuging to a force greater than 3g. That much is not really a lesson for space cooking, where in general gravity will be much lower than Earth’s (and anyway, on the International Space Station can’t you simply order a takeaway from Pizza Hut?). But it could be worth knowing for the food industry, where centrifugal ‘flash-frying’ might be considered worth a try.
Reference: J. S. Lioumbas & T. D. Karapantsios, Food Research International 55, 110-118 (2014).