Here’s my latest Crucible column in Chemistry World.
_____________________________________________________________________
When Jenny Pickworth Glusker of the Fox Chase Cancer Center in Philadelphia delivered a talk on the past, present and future of crystallography at the opening ceremony of the International Year of Crystallography (IYCr) in January, she not only described but personified the traditions of the field. For Glusker worked in the laboratory of Dorothy Hodgkin, who was a PhD student of J. Desmond Bernal, who was a protégé of William Bragg – who of course started it all, after Max von Laue’s seminal discovery of X-ray diffraction in 1912.
This sort of scientific genealogy is properly a source of pride to those concerned, but it is more than that. From her mentor a scientist acquires not just a technical training but a culture – a sense of what matters, in terms of the scientific questions one asks, an approach to answering them, and the attitude one adopts in researching them. It is hard, for example, to imagine anyone emerging from under the wing of Bernal with much regard for rigid disciplinary boundaries. A part of that culture is surely also the sense of moral and ethical responsibilities that a good mentor will supply.
This is one reason why Alexander Petersen of the IMT Lucca Institute for Advanced Studies and colleagues suggest in a preprint that the trend in science towards large teams is not uncomplicated. That trend has been much remarked on, one implication being that the old mechanisms of rewarding scientific achievement – individual prizes, not least the Nobels – are becoming obsolete. Petersen and colleagues confirm, with a number of metrics, the observation that the number of coauthors on papers is rising across the sciences, and that singleton Nobel prizes are now rather rare.
This poses challenges for attribution of credit (at the same time that such credit is becoming more vital to young researchers), not to mention for the task of simply organizing large teams so that they work efficiently. What has been less remarked is that the potential problems arise not just for team members and leaders, but for the whole scientific community and indeed beyond. As teams get larger, they become less transparent. It is harder to monitor who is doing what, and becomes increasingly necessary to take each contribution on trust.
Petersen and colleagues suggest that this trend could make it easier for misconduct to happen unnoticed, and less likely that there will be channels of mentorship to discourage it in the first place. There is no shortage of examples. Some highly respected scientists have had their reputations tarnished, whether fairly or not, by their apparent failure adequately to scrutinize results falsified by junior colleagues: biologist David Baltimore in the case of Thereza Imanishi-Kari in the late 1980s, and physicist Bertram Batlogg in the case of nanotech fraudster Jan Hendrik Schön in the early 2000s. Both senior figures were busy people in big labs. But such situations have surely got even harder to manage since those days. Poor management procedures were cited as a reason why the young forensic chemist Annie Dookhan was able to falsify perhaps thousands of drug-test results in the Hinton laboratory in Massachusetts, leading to Dookhan’s prison sentence late last year.
These cases may be extreme, but Petersen and colleagues suggest that it is difficult to maintain chains of responsibility and good conduct in large teams. When things go wrong, for example requiring retraction of a publication, it might be all but impossible to trace the blame. Big teams increase the potential for conflicts of interest, say with researchers peer-reviewing a collaborator’s manuscript, at the same time as making them harder to spot. “In this respect,” the authors say, “we have been witnessing the emergence of a conflict between the scientist and the scientific commons.”
Some of these concerns relate to a sense of values. “Many young scientists have likely been ‘lured’ into postdoctoral traps within large projects”, Petersen and colleagues write. “Are the next crop of scientists trained to be leaders or to just fit into a large production line? And once they enter the tenure track, do the lessons they observed reflect positive scientific values? Or do they reflect a system engaged in productivity at the expense of quality… and pathologically competitive attitudes that run counter to socially beneficial progress?”
Such attitudes may be forced onto young researchers by the prevailing culture. At the IYCr ceremony, a panel of young crystallographers debated the challenges they and their peers face, and in a signed declaration from that event [coming to this site soon…] they say that “Young researchers face problems with long working hours, high pressure and expectations to obtain results and to publish papers quickly and in top journals, job insecurity, and large teaching commitments. These pressures are intensifying, and… they hinder the freedom to explore original and innovative directions or to think about long-term research goals.” They can also motivate misconduct: Dookhan admitted, for example, that she faked results out of a desire to be seen as “particularly hard working and productive”. Large teams and increasing competition might be an inevitable trend in science, but their consequences for mentorship and ethics need to be faced.
Wednesday, April 30, 2014
Tuesday, April 29, 2014
Last of the independents?
Here’s another take on the Lovelock fest at the Science Museum, written for the Prospect blog.
_________________________________________________________
If there’s one thing mavericks have in common, it’s that they contrive or refuse ever to admit that they’re wrong about anything. By this measure, the title of the new exhibition at the Science Museum in London – Lovelock Unlocked: Scientist, Inventor, Maverick – does James Lovelock, the father of the Gaia hypothesis, a disservice. When I spoke to him on the eve of the opening on 9th April, he admitted almost merrily that his earlier, dire warnings about the impending collapse of the population, and perhaps of civilization, because of global warming were over the top. Things look grim, he says, but not that grim. This is because we now understand that there are natural processes and systems, such as the immense capacity of the oceans to absorb heat, that might buffer us against the worst-case scenarios of the effects of increased amounts of greenhouse gases such as carbon dioxide in the atmosphere. The fact is, Lovelock explained, it is really no big deal for him to admit to a mistake, for who is going to reprimand him, an independent scientist beholden to no one?
But perhaps Lovelock is also so ready to admit to the occasional error – here more of judgement and foresight than of science – because time has shown him to be right about a good deal else. And he’s no stranger to accusations that he is wrong, which is where that “maverick” label comes from: by some standards, all this means is that some famous people have disagreed with you. In the early days of the Gaia hypothesis that Lovelock cooked up with microbiologist Lynn Margulis in the 1970s, evolutionary biologists in particular were queuing up to disagree with him, often in such vituperative terms that the arguments were evidently not about science alone. John Maynard Smith, an architect of neo-Darwinism, denounced Gaia as an “evil religion”, thinking that Lovelock’s talk of “goals” and “purposes” – which, to Lovelock the engineer, seemed unexceptional – transgressed the central (and perfectly true, as far as all evidence indicates) tenet of evolution that it has no direction or aim. Lovelock delights now in Maynard Smith’s admission that he’d not actually read Lovelock’s books or papers but just relied on second-hand descriptions. But Maynard Smith also told Lovelock in an affable letter in 1993 that neo-Darwinists responded as they did not just because they saw Gaia as “a loose and unjustifiable extension of evolutionary thinking” but because they too “have felt themselves to be a persecuted group.” That is understandable (particularly across the Atlantic), but it also explains the extreme conservatism that still characterizes some neo-Darwinists.
So is Lovelock right or wrong? An exhibition like this invites us to see him as the outsider whose theories have now been vindicated and accepted by the scientific establishment. But the real story is much more interesting than that tired trope. While the objections of the biologists were not without force (even if some were ultimately semantic), the Gaia hypothesis is not really a theory that can be proved or disproved. It is a way of thinking about the issues – in this case, the issues of how our planetary environment came into being and maintains itself. Lovelock was not the first to suggest that these processes might involve interactions between different parts of what we now call the “earth system” – the circulation of the oceans and global air temperatures, say – but no one previously had started to put the whole picture together in an explanation of how the earth “self-regulates” its climate. Inevitably, some parts of that picture were seen clearly, others less so. And in any event, what the Gaia hypothesis consists of has evolved and mutated too much over the years for it to be regarded, like special relativity, as an idea that sprang fully formed from its creator’s mind, ripe for experimental testing.
Yet what really marked out Lovelock’s idea as original was the role he asserted for life on earth – the biosphere – as a literally vital component of how our planet achieves homeostasis (relative stability of climate) in the face of changing circumstances. This was, of course, precisely why he was deemed to be trespassing on the territory of biologists, with neither permission nor the proper training (although Lovelock did begin his career at the Medical Research Council). He argued that biological processes such as plant growth (which withdraws CO2 from the atmosphere) and bacterial dissolution of rocks have a key role in the way chemical elements are cycled between the seas, air, soil and stone, and ice – and, in consequence, in how climate is determined and changed. There is no real debate now that Lovelock was right about this, although he feels that acceptance by the earth-sciences community has come only grudgingly and on condition that the disturbingly personified Gaia hypothesis be recast as “earth systems science”.
As an example of how living organisms affect the “inorganic” features of the climate system, in the late 1980s Lovelock teamed up with atmospheric scientists Robert Charlson, Meinrat Andreae and Stephen Warren to develop a hypothesis involving the ‘sulphur cycle’ – reactions and processes involving sulphur compounds. They pointed out that marine plankton give off a sulphur-containing gas called dimethyl sulphide (DMS) as part of their metabolism, and that this is transformed in the atmosphere to sulphate, which clusters into a sort of dust of tiny salt-like particles that can seed the formation of cloud droplets. Because clouds reflect sunlight, they can alter the climate. In this so-called CLAW hypothesis (the acronym comes from the authors’ initials), the plankton act as a negative-feedback thermostat that keeps the climate steady. If the seas warm, the plankton produce more DMS, there’s more sulphate to feed cloud formation, and so less sunlight gets through and the oceans cool down. It now seems that the idea doesn’t work – all of these things happen to some degree, but the feedbacks are more complex and subtle. Yet even so, this idea illustrates the virtue of the Gaia hypothesis in stimulating an interesting proposal, based on well-established principles, that was deemed worth testing carefully (the apparent nail in the coffin came only in 2011).
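The logic of that feedback loop is easy enough to caricature in a few lines of code. The toy model below is purely my own illustration – it is not the CLAW model, and every number in it is invented – but it shows how a constant warming push can be held in check once the emission of a cooling agent rises with temperature:

```python
# A toy negative-feedback loop, not the CLAW model itself: warmer seas -> more DMS ->
# more cloud-seeding sulphate -> more reflected sunlight -> cooler seas.
# All parameter values are invented for illustration only.
def step_temperature(T, forcing, k_dms=0.8, k_cool=0.5, T_ref=15.0):
    """One crude yearly step of sea-surface temperature T (degrees C)."""
    dms = max(0.0, k_dms * (T - T_ref))   # plankton emit more DMS in warmer water
    cloud_cooling = k_cool * dms          # more DMS -> more cloud -> less sunlight absorbed
    return T + forcing - cloud_cooling

T = 15.0
for year in range(30):
    T = step_temperature(T, forcing=0.4)  # a constant warming push every 'year'
print(f"temperature settles near {T:.1f} C instead of climbing without limit")
```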
Ironically – although here I risk drawing more ire from biologists – this aspect makes Gaia not unlike the notion of Darwinian evolution. That’s to say, it describes something that evidently happens in the world, and indeed you really can’t think properly about the process concerned (climate, evolution) except in this light. But that doesn’t mean that the idea explains all that happens, or that what it predicts will always be what you find. Indeed, the essence of the Gaia hypothesis – that all these various influences on climate combine to keep it steady and self-regulating in the way our bodies stabilize their temperature – still lacks any proof, and some earth-systems scientists say that the hypothesis just doesn’t fit with the evidence.
But maybe it’s right to judge Gaia more by her fertility and productiveness than by some putative test of right or wrong. As Vivienne Westwood, a Lovelock supporter, said at the gala organized to launch the Science Museum’s project, it isn’t a question of whether someone should be regarded as a scientist or an artist, but of whether or not they are imaginative. Scientists might sniff that one can imagine endless false theories, but real imagination in science is about offering a new way to think about a problem.
However, the most interesting parts of this exhibition, as of Lovelock’s new book A Rough Ride to the Future (Allen Lane, 2014), don’t have anything to do with Gaia or climate change. They are about Lovelock the inventor. This is what makes Lovelock’s career not merely productive and interesting but remarkable. He has invented over a hundred useful devices, including the electron capture detector that enabled him to detect small traces of chlorofluorocarbon (CFC) gases in the atmosphere in the late 1960s, leading to a realization that these industrial chemicals (used as refrigerants) were gradually accumulating and could, once they reached the frigid stratosphere over the poles, undergo chemical reactions that destroy ozone, the planet’s protective screen against harmful ultraviolet radiation from the sun. Lovelock also claims to have invented the first microwave oven, which he used to defrost hamsters frozen for experiments during his early stint at the Medical Research Council. Several of these contraptions are displayed in the exhibition, and they have a wonderful Heath Robinson quality that belies their precision and artistry – it was the unprecedented sensitivity of the ECD that revealed the tiny but potentially damaging traces of CFCs. Lovelock fashioned all of these instruments by hand, many in the private laboratory that he set up in Launceston on the Cornish border, and proceeds from their sales or patents allowed him to become an independent scientist. This practical side of Lovelock’s imagination is not by any means a sideline: it was because he understood the scientific principles that his instruments worked so well, and the positive feedback between making and thinking is very apparent in his work. It’s not necessary for a good scientist to be a good inventor or technician, nor do inventors need to have a strong grasp of scientific theory – witness the sometimes shaky ideas of Thomas Edison and Nikola Tesla. But those who possess both can do and see things that others cannot. That is as true of Lovelock’s scientific heroes Michael Faraday and Alan Turing as it is of the man himself.
At one point, after visiting Lovelock’s home to look at the paper archives, the Science Museum curators were so taken with the creative chaos of his laboratory that they harboured a desire to acquire the whole thing for the project too. But there was a problem with that. “Unfortunately”, project leader Alexandra Johnson told me, “because Jim works by himself and doesn’t have to abide by health and safety regulations, there were too many hazards. There was radiation, mercury, asbestos, Semtex, just about anything you could possibly imagine. There was a particularly dubious cupboard which he used for storing chemicals in, and the lab had been flooded. We opened this cupboard and very quickly shut it again, because we were not quite sure what was going on in there.”
So instead they acquired some selected items, such as the lathe Lovelock used to make his instruments, and the “Mars jar” – a repurposed Kilner jar used to simulate the Martian atmosphere and test the detectors he developed for NASA’s Viking lander missions in the 1970s.
As far as Gaia is concerned, Johnson says that an attraction for the Science Museum is that “it’s still work in progress. There’s still a lot of conversation and dispute and debate happening around it. It is rare that we are able to show this way that scientific ideas get argued and fought over.”
The "Mars jar" in Lovelock Unlocked.
Monday, April 28, 2014
Small is... sort of cute
This little story went on the Guardian site on Friday. The technology isn’t new, but it was a very cute way to introduce the commercialization of it.
__________________________________________________________________
It looks much like any other cover of the children’s magazine National Geographic Kids. Cuddly animals: check. Free Sea Turtle poster: check. Story about rescued hippos: check. Only the lack of colour and the slight graininess make you think it might be something other than the real thing. But the real reason for these imperfections is that this magazine cover is so small that a single human red blood cell would cover most of it. It measures just 11 by 14 thousandths of a millimetre, and is totally invisible to the naked eye.
This is officially the Smallest Magazine Cover in the World, having been certified as such today by Guinness World Records at the US National Science and Engineering Festival in Washington DC. The image is carved out of a lump of plastic using, as a chisel, a tiny silicon needle 100,000 times sharper than the sharpest pencil tip. The contrast reflects the topography of the surface: the higher it is, the lighter it appears.
The technique was developed over the past five years by physicist Armin Knoll and his colleagues at IBM’s research laboratory in Rüschlikon, a suburb of Zurich, Switzerland. The needle, attached to a bendy silicon strip that scans across the sample surface, is electrically heated so that when it is brought close to the specially developed plastic, the material just evaporates. In this way the researchers can remove blobs of material just five nanometres (millionths of a millimetre) across, paring away the surface pixel by pixel like a milling machine.
By reducing the heat, the tip can be used to take a snapshot of the carved structure it has produced. Surrounded by plastic in a valley, the tip radiates away more heat than if it hovers above a peak, and this heat flow therefore traces out the surface contours.
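To get a feel for what carving “pixel by pixel” means in practice, here is a rough sketch of my own – nothing to do with IBM’s or SwissLitho’s actual control software – of how a greyscale image could be turned into a raster of target depths for a heated tip. The pixel pitch and depths are made-up numbers:

```python
# Illustrative sketch only: map a greyscale image to carve depths and raster over it.
import numpy as np

PIXEL_PITCH_NM = 10          # spacing between write positions (invented)
MAX_DEPTH_NM = 8             # deepest feature to carve (invented)

def image_to_depth_map(grey):
    """Map greyscale values (0 = black, 255 = white) to carve depths in nm.
    Lighter pixels sit higher in the finished relief, so they are carved less."""
    grey = np.asarray(grey, dtype=float)
    return (1.0 - grey / 255.0) * MAX_DEPTH_NM

def raster_positions(depth_map):
    """Yield (x_nm, y_nm, depth_nm) for each pixel, scanned row by row."""
    rows, cols = depth_map.shape
    for r in range(rows):
        for c in range(cols):
            yield c * PIXEL_PITCH_NM, r * PIXEL_PITCH_NM, depth_map[r, c]

# A tiny 3 x 4 'image'; the real magazine cover was roughly 11,000 x 14,000 nm.
tiny_cover = [[255, 200, 120, 0],
              [255, 180,  60, 0],
              [255, 160,  30, 0]]
for x, y, d in raster_positions(image_to_depth_map(tiny_cover)):
    print(f"move tip to ({x:4d} nm, {y:4d} nm), heat, remove {d:4.1f} nm")
```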
The IBM team first reported the method in 2010, and I saw it in action in Rüschlikon two years ago. It was a strange experience. Because I was once an editor at the science journal Nature, Knoll and his colleagues decided to write out the journal’s logo for me. On the display screen the letters took shape one by one, each perfectly formed in Times New Roman as if by a rather slow printer. It was hard to believe that each of them was about the height of a single bacterium.
IBM has licensed this technology to a start-up company in Zurich called SwissLitho, founded in 2012 by former IBM scientists Philip Paul and Felix Holzner. The company has developed it into a commercial machine that they call the NanoFrazor, costing around half a million euros. McGill University in Montreal, Canada, has bought the first of them. “It’s a cool tool”, says McGill physicist Peter Grütter. He says that, quite apart from the ability to make tiny structures for electronics, part of the attraction is that, unlike other nanopatterning methods, it’s very easy to find where you are on a surface and so to take images of large areas or to go back and overlay a second pattern.
The National Geographic Kids cover was made by Knoll’s team after a readers’ poll selected the favourite image. Although Holzner expects the instrument to be mainly used as a research tool for universities like McGill, he suspects that novelty applications like this might prove popular too. It could be used to add security tags to artworks, passports and personalized Swiss watches that would be virtually impossible to forge. Some companies have already used other nanopatterning methods to write the entire Bible on a crucifix for especially devout customers, and even to engrave tiny patterns on the surface of chocolate that scatter light to create different colours. It seems that the Swiss reputation for both precision engineering and fancy confectionery is as secure as ever.
Sunday, April 27, 2014
My unexpected internal monologue this week...
[The scene: the foyer of a university theatre. Conference delegates are standing around chatting.]
Look, Michael Frayn is standing over by the coffee on his own! If I don’t go and speak to him, I’ll kick myself afterwards. I mean, I had to stop reading The Tin Men and Towards the End of the Morning in public because I kept embarrassing myself by laughing out loud… And then Copenhagen… Come on, I have to. Don’t know what I’ll say, but anything…
OK, I don’t think that was too great a faux pas to ask what he is working on now. I mean, he said “Nothing!”, but not angrily, and now he’s gone and asked what I am working on! Michael Frayn is interested in that! And I didn’t think he’d even know who I was at all! So yes Michael, I see it as a kind of cultural history and – uh OK, now I get it, this is his very gracious way of deflecting the question, because he doesn’t really want to say what he’s working on. Ah well, keep it going… You see, it’s a book all about the stories we tell when –
That bloke out of the window there looks quite like my neighbour Geoff.
Focus, you fool. You can’t start looking over the shoulder of Michael Frayn, as if you’re hoping to catch the eye of someone else at the meeting who might be more interesting to talk to. This is Michael Frayn! Very funny books! Copenhagen! So, this cultural history that goes back to Plato and –
No, he really looks a lot like my neighbour.
Don’t keep looking, you idiot. Look, you have come to Lincoln for the day. Lincoln is a little town, OK so a city really, with cathedral and all, and the cathedral is fabulous, and the dock front around the campus is nice, but it’s not exactly a place people come to, is it? You have to change at Peterborough, for God’s sake. So you glimpsed a bloke with receding hair and a beard, like 80 percent of all male university lecturers. It’s not actually going to be Geoff, is it? I know he turned out to be in Spain the other week at the same time as you, but is he really going to be strolling through the Lincoln campus just as you glance out of the window? Do you think he is stalking you or something?
So where were we? Oh, Michael is talking to someone else now. Well, I would only have blurted out some silly question about Heisenberg.
[Yes, it was my neighbour Geoff.]
Saturday, April 26, 2014
Criticality and phase transitions in biology
My piece just published in New Scientist on phase transitions in biology has had one of the most difficult gestations I’ve ever encountered. No one’s fault, it is just that it’s a very tough job finding the right way to tell a story like this. For one thing, what I came to realise during the editorial process is that, if you talk about criticality, people outside condensed-matter physics are likely to imagine you’re talking about self-organized criticality, and that they don’t generally know that critical points have a long, long history going way back beyond this. Neither, it seems, is the connection between the notion of criticality, with its scale-free phenomena, and phase transitions well known. The real point of the ideas I discuss in this piece is not that there’s something wonderful about being poised at a critical point, on the edge of order and chaos etc., but that it can be useful for a biological system to situate itself near to some phase transition and to draw on the fluctuations and sensitivity to external conditions that this engenders. It doesn’t have to be a critical transition – I am coupling together here the current discussions of near-critical biology with work on first-order phase transitions in protein hydration, where again the value seems to be that one can draw on large fluctuations to attain a big response to a small stimulus. This latter material didn’t make the final cut in New Scientist, and I can see that it made for an even more complicated story. But I do think that there is important common ground between the two ideas. What’s more, no one previously has made the link to Eigen’s ideas about natural selection coming from a phase transition – a notion that he has set out in full in his immense, dense but fascinating recent book, which I reviewed here.
Anyway, this version is based on my original draft, but with some of the later material mixed in. Hopefully it will give some indication of the bigger picture.
__________________________________________________________________________
It’s not the midges that were the problem, says Andrea Cavagna, but the kids. You’d think his efforts to record the movements of midge swarms in the public parks of Rome near sunset would be fraught with risks of being eaten alive by the little beasts – but these were a non-biting variety. Keeping away the children who gathered to watch what these folks were up to with their video cameras, generators and thickets of cabling was another matter. That, and the problem of finding a parking space in central Rome.
It’s not easy, he realised, for a physicist to turn field biologist.
The reason why Cavagna, based at Sapienza University in Rome, and his colleagues went midge-hunting sounds strange, perhaps even bizarre. The researchers wanted to know if midges behave like magnets. More specifically, if they act like magnets close to the point where heat flips them between a magnetic and non-magnetic state: a so-called critical phase transition.
Cavagna is one of a small and diverse group of scientists who have begun to suspect that critical phase transitions play vital roles in a wide variety of biological systems. Not only might they underpin the swarming of midges and the flocking of birds, but they might enable neurons in the brain to encode a picture of our environment, some protein molecules to fold up and bind their target molecules, and cell membranes to attract molecules that trigger cell-to-cell messaging. They might even explain how evolution itself works.
It’s good to be critical
Staying alive might seem to be a question of keeping calm and carrying on in the face of whatever comes along. But it’s often important to be able to respond and adapt to challenges rather than stoically riding them out: if you’re a small creature about to be eaten by a big one, you’d better get out of there. The trick is to keep your options open, maintaining easy access to a wide range of actions. It’s a delicate balance: you need stability, but also responsiveness.
In 2010 two physicists suggested how this might be possible. Thierry Mora (now at the École Normale Supérieure in Paris) and Bill Bialek at Princeton University argued that many biological systems, from flocking birds to neural networks, might be “poised close to a critical point”. The idea drew on a well-established notion from statistical physics: the critical phase transition, where a system of many interacting components switches suddenly from one global state of organization to another, typically from an orderly to a disorderly state. The classic example is the magnetic transition of iron: as the material is cooled, the magnetic orientations of its atoms switch from a random, disorderly arrangement to being all lined up. The switch happens abruptly at the so-called critical point – for iron, at a temperature of 1,043 kelvin.
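For anyone who wants to see that classic example in action, here is a minimal simulation sketch of my own: a small two-dimensional Ising magnet updated with the standard Metropolis rule. The lattice size and number of sweeps are arbitrary choices, and temperatures are in units where the critical point sits at about 2.27:

```python
# Minimal 2D Ising model with Metropolis updates (illustrative only).
# Well below the critical temperature the spins stay largely aligned; well above it
# they disorder; near it the system hovers between the two, with patches of all sizes.
import numpy as np

rng = np.random.default_rng(0)
N = 32                      # the lattice is N x N spins, each +1 or -1
T_CRIT = 2.269              # critical temperature of the 2D Ising model (J = k_B = 1)

def metropolis_sweep(spins, T):
    """One Monte Carlo sweep: attempt N*N random single-spin flips."""
    for _ in range(spins.size):
        i, j = rng.integers(0, N, size=2)
        nb = (spins[(i + 1) % N, j] + spins[(i - 1) % N, j] +
              spins[i, (j + 1) % N] + spins[i, (j - 1) % N])   # periodic boundaries
        dE = 2 * spins[i, j] * nb                              # energy cost of a flip
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1

for T in (1.5, T_CRIT, 3.5):               # below, near and above the critical point
    spins = np.ones((N, N), dtype=int)     # start fully aligned, then let it relax
    for _ in range(200):
        metropolis_sweep(spins, T)
    print(f"T = {T:.3f}   |magnetization per spin| = {abs(spins.mean()):.2f}")
```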
What have magnets got to do with biology? The point is that the critical transition of a magnet isn’t anything to do with magnetism per se. It is an outcome of the fact that each atom is interacting (here via magnetic forces) with its neighbours, and that they all have to come to some ‘collective decision’ about how to organize themselves. Because of that collective aspect, phase transitions happen all at once when a threshold value of some control parameter such as temperature is surpassed. They occur in all manner of physical systems, from superconductors and the Big Bang to polymer mixtures. So why not in biology?
The proposal of Mora and Bialek didn’t spring from nowhere. It echoes suggestions made in the 1990s that many natural systems, including some in biology (such as mass extinctions), display ‘self-organized criticality’ (SOC), meaning that they undergo disruptions and fluctuations at all possible scales of size. The archetypal example of SOC is a pile of sand, which can have avalanches of all sizes as new grains are added at the top of the slope. This wide range of fluctuation scales is just what is found at an ordinary critical point – for example, a magnet at its critical point is a patchwork of domains of all different sizes with different magnetic orientations.
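The sandpile picture can be caricatured just as briefly. The sketch below is an illustrative implementation of the standard Bak–Tang–Wiesenfeld rules, not anyone’s research code; the grid size and number of grains are arbitrary:

```python
# Bak-Tang-Wiesenfeld sandpile (illustrative): drop grains one at a time; any site
# holding four or more grains topples, sending one grain to each neighbour; grains
# pushed over the edge are lost. The avalanche size is the number of topplings
# triggered by a single dropped grain.
import numpy as np

rng = np.random.default_rng(1)
L = 25
pile = np.zeros((L, L), dtype=int)

def drop_grain(pile):
    """Add one grain at a random site, relax the pile, return the avalanche size."""
    i, j = rng.integers(0, L, size=2)
    pile[i, j] += 1
    topplings = 0
    unstable = [(i, j)]
    while unstable:
        x, y = unstable.pop()
        while pile[x, y] >= 4:
            pile[x, y] -= 4
            topplings += 1
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if 0 <= nx < L and 0 <= ny < L:
                    pile[nx, ny] += 1
                    unstable.append((nx, ny))
    return topplings

sizes = [s for s in (drop_grain(pile) for _ in range(20000)) if s > 0]
# Small avalanches are common, huge ones rare, with no typical size in between --
# roughly a power law, the signature of self-organized criticality.
for s in (1, 10, 100):
    frac = sum(x >= s for x in sizes) / len(sizes)
    print(f"fraction of avalanches with at least {s} topplings: {frac:.4f}")
```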
But, says Cavagna, it was never really clear that SOC had a deep connection to the older notion of critical phase transitions. The key feature of SOC is that it is indeed ‘self-organized’, which means that it will return to the critical state after a disturbance like an avalanche. So there’s no actual phase transition at all. “It’s just a point of great instability”, Cavagna says – and not one that is reached, like a true phase transition, by tuning a parameter like temperature. What’s more, says Bialek, “there was a huge amount of ideology about why criticality was a good thing in biology” – but no good argument for why. These two researchers and others are now trying to clarify what advantages criticality might confer on a wide range of biological systems, regardless of whether it is achieved by self-organization, natural selection or something else.
A critical magnet is poised on a knife-edge, where the smallest nudge can tip it into becoming wholly magnetic or non-magnetic. This knife-edge character of traditional (rather than self-organized) critical points means that it is all but impossible for a system to stay there. But the proposal of Mora and Bialek is that biological systems might benefit from operating close to critical points. This could provide access to a wide range of fluctuations involving different configurations of its components. The striking thing about near-criticality is that the rarity of specific, seemingly unlikely configurations is exactly compensated for by the fact that there are many more variants of such states than there are of common ones. “There’s a small number of very common configurations, a large number very rare configurations, and everything in between”, says Mora. “Being close to a critical point means that you're as likely to find yourself in any of these configurations.” As a result, he says, “being critical may confer the necessary flexibility to deal with complex and unpredictable environments.”
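To put that trade-off slightly more formally (this is my paraphrase of the standard statistical-physics argument, not a quotation from Mora and Bialek’s paper): if each configuration $s$ of the system occurs with probability $P(s)$, define an effective energy $E(s) = -\ln P(s)$, and let the number of configurations with energy near $E$ grow as $e^{S(E)}$. Then the chance of finding the system at energy $E$ is

$$ P(E) \;\propto\; e^{\,S(E) - E}, $$

and sitting near a critical point corresponds to $S(E) \approx E$ over a wide range of $E$: the improbability of any one rare configuration, $e^{-E}$, is cancelled by the number of such configurations, $e^{S(E)}$, which is exactly the compensation described above.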
Another key feature of a critical system is that it is extremely responsive to disturbances in the environment, which can send rippling effects throughout the whole system. “At the critical point, everything is about to go crazy”, says physicist Jim Sethna of Cornell University. “So you get massively more sensitive behaviour.” That, Sethna says, can help a biological system to adapt very rapidly to change. The sensitivity stems from the long-ranged correlations in the behaviour of the system’s components that develop near criticality: a tweak here has an influence right over there, so that each component can ‘feel’ what all the others are doing.
Crucially, this flexibility and adaptiveness is achieved not by some incredibly complex and fragile set of interactions between the components, but by taking advantage of the universal and robust characteristics of all systems made up of many interacting components. If a system evolves to be close to critical, says Sethna, it then has something like a set of general-purpose knobs that can allow it to adapt to environmental changes without having to reconfigure its genome.
Recent work by physicists Amos Maritan and Jayanth Banavar and their coworkers gives a clearer picture of why criticality in particular is useful. They have calculated how a system of agents that can gather information about their environment, and whose fitness depends on their ability to locate the source of environmental stimuli, evolves over time. They found that such a collection of evolving cognitive agents settles naturally into a critical state. “Being poised at criticality provides the system with optimal flexibility and evolutionary advantage to cope with and adapt to a highly variable and complex environment”, says Maritan.
In effect, this critical state allows the system to ‘sense’ what is going on around it: to encode a kind of ‘internal map’ of its environment and circumstances, rather like a river network encoding a map of the surrounding topography. “A key ingredient to the success of a living system is its ability to capture relevant information from the richly varying external world, synthesizing its most prominent features into manageable maps”, says Maritan. If this is indeed a feature of a near-critical state, the activity of neurons would be expected to operate in such a state just as Mora and Bialek proposed, because what our brains ‘show us’ will then be a good approximation to what is really ‘out there’.
There’s now mounting evidence that brains really are organized this way. One signature of criticality would be long-ranged correlations between the ‘spiking’ activity of neurons – something that Bialek and his coworkers have found in their models of neural networks. These correlations mean that the state of each neuron is to some degree encoded in the state of the rest of the network, providing a mechanism for error correction and recovery of lost information.
And it’s not just all theory. Dante Chialvo of the National Council for Scientific and Technological Studies in Buenos Aires, Argentina, and colleagues have shown that dynamics characteristic of a critical state in the activity of the human brain can account for some of the key features seen in MRI brain imaging, such as the coherent operation of many neurons clustered together in space. And Nir Friedman of the University of Illinois at Urbana-Champaign and his coworkers have found that avalanches in the firing of neurons show the same kind of size–probability relationship as those in self-organized critical sand-piles. It’s not hard to imagine that this apparently general operating principle of neural networks might bring some structure to the mass of data soon to emerge from the large-scale projects recently launched in the US and Europe to map out the connectivity of the human brain.
Superfluid starlings
Responsiveness has an obvious utility to a herd or flock of animals looking out for predators: if a few individuals spot one, the rest of them can gain that information almost at once. And they do – just think of schools of fish darting around in unison to avoid sharks.
It was this sort of flocking behaviour that partly stimulated Mora and Bialek’s proposal in the first place. Theoretical modelling of flocking over the past decade or so has shown that coordinated motion requires each animal simply to respond to its nearest neighbours’ movements by trying to align itself. This is similar to the way magnetic atoms get aligned, and in fact some flocking models are directly analogous to models of magnetism. Mora, Bialek, Cavagna and their collaborators have recently shown that the graceful, orderly motion of flocks, familiar from watching starlings at twilight, is most easily maintained if the flock is close to a critical point. Further from this point, a flock might stay coordinated but loses the ability to respond quickly and coherently to outside disturbances such as predators.
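A bare-bones version of that kind of alignment model is easy to write down. The sketch below is a Vicsek-style model of my own for illustration – not the specific model used by these groups – in which every ‘bird’ adopts the average heading of its neighbours, plus some noise; the noise plays the role that temperature plays for a magnet. All parameter values are arbitrary:

```python
# Vicsek-style alignment model (illustrative only): each bird takes the average
# heading of its neighbours within some radius, adds noise, and moves at fixed speed.
import numpy as np

rng = np.random.default_rng(2)
N, BOX, RADIUS, SPEED, NOISE = 300, 10.0, 1.0, 0.05, 0.3   # invented parameters

pos = rng.uniform(0, BOX, size=(N, 2))
theta = rng.uniform(-np.pi, np.pi, size=N)                  # heading of each bird

def step(pos, theta):
    new_theta = np.empty_like(theta)
    for i in range(N):
        d = pos - pos[i]
        d -= BOX * np.round(d / BOX)                         # periodic boundaries
        nbrs = (d ** 2).sum(axis=1) < RADIUS ** 2            # includes bird i itself
        # Average heading of neighbours, via the mean of their unit vectors
        new_theta[i] = np.arctan2(np.sin(theta[nbrs]).mean(),
                                  np.cos(theta[nbrs]).mean())
    theta = new_theta + NOISE * rng.uniform(-np.pi, np.pi, size=N)
    pos = (pos + SPEED * np.column_stack([np.cos(theta), np.sin(theta)])) % BOX
    return pos, theta

for _ in range(200):
    pos, theta = step(pos, theta)

# Order parameter: 1 means perfectly aligned flight, 0 means random headings
order = np.hypot(np.cos(theta).mean(), np.sin(theta).mean())
print(f"polarization after 200 steps: {order:.2f}")
```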
In other words, says Cavagna, flocking isn’t just about orderly motion. Too much of it and you end up regimented like a crystal, slow to respond to anything. The responsiveness comes instead from the correlations between individuals – how one affects another.
Fine in theory. But do real flocks work this way? In a happy confluence of ideas and observations, Cavagna and his coworkers in Rome began their studies of flocking in 2010 just as Mora and Bialek were presenting their ideas on biological criticality. The Italian team found that flocks of starlings have scale-free correlations in the velocity fluctuations of individual birds. In other words, if one bird in the flock changes course, others will tend to do so too almost instantaneously, no matter how far apart they are.
Cavagna and colleagues placed video cameras on top of the National Museum of Rome in the city centre, which overlooks a major roosting site for starlings in winter. They filmed the birds during their flocking displays at dusk, and then used computer-vision methods to turn the footage into records of the three-dimensional movements of individual birds in the flock, which typically contained between a hundred and several thousand birds. They analysed this data to figure out how each bird deviated from the average velocity of the entire flock, and to measure the correlations: how closely these deviations for any pair of birds shadow each other as the distance between the pair increases.
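The analysis itself is simple to sketch. The code below uses randomly generated stand-in data rather than the Rome group’s actual tracks, but it shows the gist: subtract the flock’s mean velocity from each bird’s velocity, then bin the pairwise alignment of those fluctuations by separation. (With real flock data the correlations decay slowly with distance; with this random stand-in they sit near zero.)

```python
# Illustrative correlation analysis on made-up 'flock' data.
import numpy as np

rng = np.random.default_rng(3)
N = 500
positions = rng.uniform(0, 50.0, size=(N, 3))        # metres; stand-in for tracked birds
velocities = rng.normal(0, 1.0, size=(N, 3)) + np.array([10.0, 0.0, 0.0])

# Velocity fluctuations: each bird's deviation from the flock-average velocity
fluct = velocities - velocities.mean(axis=0)

# Pairwise alignment (cosine similarity) of fluctuations, binned by pair separation
dist = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
norms = np.linalg.norm(fluct, axis=1)
corr = (fluct @ fluct.T) / (norms[:, None] * norms[None, :])

bins = np.linspace(0, 50, 11)
iu = np.triu_indices(N, k=1)                          # each pair counted once
which = np.digitize(dist[iu], bins)
for b in range(1, len(bins)):
    mask = which == b
    if mask.any():
        print(f"separation {bins[b-1]:5.1f}-{bins[b]:5.1f} m: "
              f"mean correlation {corr[iu][mask].mean():+.3f}")
```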
“We found that correlation was very strong”, Cavagna says. In other words, the birds seem to be tuned into one another’s movements even over scales beyond which they can see each other. The influence of one bird is transmitted to others far away through neighbour-to-neighbour interactions, in just the same way as the magnetic poles of atoms of iron in a magnet can ‘speak’ indirectly over long distances close to the critical point.
What’s more, these observations showed that realignment of the birds’ orientation as the flock changes direction spreads much faster than the standard theories of collective movement permit. This behaviour can be explained by adding an extra ingredient to the theory: a ‘symmetry rule’ which reflects the fact that all directions of flight are equivalent. With this included, it turns out that the movement of the flock becomes mathematically equivalent to that of a superfluid such as liquid helium, which can flow essentially without losing any energy through viscous drag. In other words, a flock of birds can be considered a kind of living superfluid.
Midges don’t exhibit the orderly swarming motions of birds and fish. Might they, nevertheless, display the long-ranged correlations expected on the disordered side of a critical phase transition? “Some biologists insisted there is no collective behaviour in midges”, Cavagna says, and he expected his observations to confirm that view. But after painstakingly filming the midges swarming around park landmarks, reflected in the setting sun, he and his coworkers couldn’t avoid the conclusion that there were very strong correlations here too.
“It’s physically exhausting work”, Cavagna says: lugging all the equipment into a park, filming for several hours, then immediately going back to the lab well after dusk to download the data. “Still, at least it was summer, and the Roman parks are lovely.” Filming birds is harder, he says, since they only flock in the cold winter.
But why would evolution tune midges to behave that way, given that predation isn’t an issue for them? Cavagna thinks that this might be looking at the question the wrong way. Perhaps they can’t help being near-critical. The researchers found that the reach of the correlations was always about the same size as the swarm: the bigger the swarm, the longer the correlations. So maybe the swarm size isn’t an adaptation, but is a side-effect of some other factor that determines how the midges interact. This factor – the range over which neighbouring midges interact, say – sets the correlation distance for midge motions, so that if the swarm gets bigger than that size, it will automatically shed midges.
Quick drying
The idea that biology makes use of phase transitions and their associated correlations and fluctuations could go far deeper than these large-scale networks and communities, and might be applied even at the level of individual cells and molecules. Protein molecules, for example, often carry out their functions as enzymes by switching from one shape to another. That needs to happen easily when the right signal is given, for example when another molecule binds to the protein to activate it. These conformational changes are, like phase transitions, cooperative, meaning that they involve interactions between all the component parts. Tweak this bit of a protein, and the whole thing tips into a new shape.
Cooperative transitions have also long been thought to govern the way protein chains fold up into their functional shapes in the first place. But recently David Chandler at the University of California at Berkeley and his coworkers have argued that both this process and the way several protein molecules stick together into many-component assemblies could be controlled by a transition that occurs not in the protein itself but in the water that surrounds it. They believe there may be an abrupt ‘drying transition’ in which all the water suddenly exits from the space between two water-repelling parts of proteins. Chandler argues that these drying transitions, which have been seen in computer simulations of some proteins, draw on the strong fluctuations that exist in the water, whereby the water molecules organize themselves into ever-changing regions of high or low density – not unlike a midge swarm, in fact. These fluctuations make it easier for the gap between the protein segments to tip over from a ‘wet’ to a ‘dry’ state, just as they make it easier for a critical magnet to tip over into a magnetic or non-magnetic state. Not all, or even most, proteins seem to fold or aggregate via these drying transitions. But Chandler and colleagues argue that most of them may be fine-tuned by evolution to be close to such a transition, some lying on one side of that boundary and some on the other.
Drying transitions have also been found in computer simulations of the docking of small molecules into the ‘binding cavities’ of the enzymes they activate. Some proteins in thermophilic organisms, which thrive in hot environments, have cavities lined with water-repelling chemical groups that seem poised right on the brink of expelling the water and becoming dry at the organism’s normal working temperature. The docking of the ‘plug’ into its ‘socket’ would be made easier by this ease of emptying. Meanwhile, some protein channels that sit in cell membranes and regulate the flow of other molecules or ions in and out are also poised to undergo drying transitions within their conduit pores, so that they can be easily switched from an ‘open’ state (where the water-filled pore lets dissolved substances pass) to a ‘closed’ state (where the pore is dry and denies passage).
Another benefit of being close to a phase transition has been suggested by Sethna and his colleagues. Some biological membranes are patchworks in which different types of lipid molecule are segregated into liquid-like ‘rafts’, phase-separated like immiscible droplets of oil and water. Because these patches have a wide range of fluctuating sizes, rather like the domains of a near-critical magnet, Sethna’s team argued that they are close to a critical phase transition at which the molecules become fully miscible.
They say that the value here is not in the phase transition itself, but in the domain size fluctuations that accompany it. Such fluctuations in immiscible fluids were shown in the 1980s to give rise to a force analogous to the so-called Casimir force that pulls together two closely spaced metal plates in a vacuum. The normal Casimir force is caused by electromagnetic fluctuations in the vacuum, themselves a consequence of quantum physics: because the size of these fluctuations is restricted between the plates, this produces a pressure that draws them together. Likewise, constraints on the ‘near-critical’ fluctuations of lipid patches between protein molecules embedded in the membrane give rise to a ‘critical Casimir’ attraction that might help molecules to bind together and trigger chemical reactions involved in cell signalling. In effect, says Sethna, it means that proteins at the membrane surface can talk to each other via the lipid rafts. “Here again criticality allows the system to access structures over a wide range of scales”, says Mora.
The physics of evolution
Phase transitions and criticality might turn out to be important in the operation of gene networks, which currently seem absurdly baroque and yet somehow generate stable and robust organisms. Bialek and coworkers recently reported an indication of criticality in the gene regulatory network that determines the spatial patterning of the fruit fly embryo – the so-called gap gene network. They found long-ranged correlations in the fluctuations of gene expression levels at well-separated parts of the embryo. It’s possible that these critical-like fluctuations might help to improve the signal-to-noise ratio of the information transmission in the regulatory network.
Mora and Bialek have suggested that phase transitions in the ‘information space’ that relates a protein’s structure to its shape and function through the collective interactions of its chemical building blocks might account for the appearance of distinct ‘families’ of protein structures. This would imply that the evolution of protein sequences (and hence gene sequences) is significantly constrained by the limited number of ‘stable states’ in sequence space – in other words, that nature’s profusion is regulated by an order even deeper than natural selection.
In fact, not only does evolution seem likely to make use of phase transitions – it might actually be one. Chemist Manfred Eigen, who won the 1967 Nobel Prize for his work on fast chemical reactions, has argued that natural selection appears in a system of self-replicating, information-bearing entities as an abrupt phase transition at certain threshold values of the rates of replication and mutation. In other words, it is not just ‘something that happens’ in reproducing systems, but is a physical law that arises from the way information itself is organized. In Eigen’s theory, neutral selection – in which mutations get fixed in a population even though they have no adaptive benefit – injects fluctuations analogous to those at a critical point. These are essential to prevent natural selection from getting ‘stuck’ in minor valleys of the evolutionary landscape – or as a physicist might say, to prevent the system settling into a metastable phase, which is provisionally stable but not the optimal arrangement of the components. That would fit with the recent suggestion of evolutionary biologist John Tyler Bonner at Princeton University that the random fluctuations of neutral evolution could account for the immense variety of forms found in organisms such as diatoms.
Criticality and the critics
“I knew from the beginning that I wanted to do something in between physics and biology”, says Bialek. The question is, he says, “can you talk about these things that biologists usually study in the way that physicists do?” He suspected “that there’s some collection of phenomenon that people didn’t realise were related to each other, or some part of the biological world that nobody has looked at from a physicists’ point of view” – in other words, the big question was “whether aspects of particular [biological] models can be derived from some more general principle.” If Bialek and Mora are right, criticality could emerge as one such general principle.
But these ideas have yet to be embraced by most biologists, whose agenda is often now dominated by fine details rather than a search for over-arching principles. Getting these ideas a hearing in biology is likely to be a struggle. “There’s a big difference in culture”, says Sethna. “Biologists tend to be skeptical of anything that involves a lot of math.” In an effort to bridge this ‘two cultures’ divide, in 2010 Bialek spearheaded an interdisciplinary centre called the Initiative for the Theoretical Sciences at the City University of New York, where he is now director. Here physicists can discuss these ideas with neuroscientists, ecologists and other biologists – Cavagna was recruited as a visiting professor last year, and has been collaborating with Bialek and Mora to refine the understanding of critical flocking. But it will take time and patience, both to figure out how widely phase transitions and criticality really are used in biology, and to persuade life scientists that, as Sethna puts it, cells, and perhaps proteins, animals and entire ecosystems, “do a lot of interesting physics.”
Anyway, this version is based on my original draft, but with some of the later material mixed in. Hopefully it will give some indication of the bigger picture.
__________________________________________________________________________
It’s not the midges that were the problem, says Andrea Cavagna, but the kids. You’d think his efforts to record the movements of midge swarms in the public parks of Rome near sunset would be fraught with risks of being eaten alive by the little beasts – but these were a non-biting variety. Keeping away the children who gathered to watch what these folks were up to with their video cameras, generators and thickets of cabling was another matter. That, and the problem of finding a parking space in central Rome.
It’s not easy, he realised, for a physicist to turn field biologist.
The reason why Cavagna, based at Sapienza University in Rome, and his colleagues went midge-hunting sounds strange, perhaps even bizarre. The researchers wanted to know if midges behave like magnets. More specifically, if they act like magnets close to the point where heat flips them between a magnetic and non-magnetic state: a so-called critical phase transition.
Cavagna is one of a small and diverse group of scientists who have begun to suspect that critical phase transitions play vital roles in a wide variety of biological systems. Not only might they underpin the swarming of midges and the flocking of birds, but they might enable neurons in the brain to encode a picture of our environment, some protein molecules to fold up and bind their target molecules, and cell membranes to attract molecules that trigger cell-to-cell messaging. They might even explain how evolution itself works.
It’s good to be critical
Staying alive might seem to be a question of keeping calm and carrying on in the face of whatever comes along. But it’s often important to be able to respond and adapt to challenges rather than stoically riding them out: if you’re a small creature about to be eaten by a big one, you’d better get out of there. The trick is to keep your options open, maintaining easy access to a wide range of actions. It’s a delicate balance: you need stability, but also responsiveness.
In 2010 two physicists suggested how this might be possible. Thierry Mora (now at the École Normale Supérieure in Paris) and Bill Bialek at Princeton University argued that many biological systems, from flocking birds to neural networks, might be “poised close to a critical point”. The idea drew on a well-established notion from statistical physics: the critical phase transition, where a system of many interacting components switches suddenly from one global state of organization to another, typically from an orderly to a disorderly state. The classic example is the magnetic transition of iron: as the material is cooled, the magnetic orientations of its atoms switch from random and disorderly to all lined up. The switch happens abruptly at the so-called critical point – for iron, at a temperature of 1,043 Kelvin.
What have magnets got to do with biology? The point is that the critical transition of a magnet isn’t anything to do with magnetism per se. It is an outcome of the fact that each atom is interacting (here via magnetic forces) with its neighbours, and that they all have to come to some ‘collective decision’ about how to organize themselves. Because of that collective aspect, phase transitions happen all at once when a threshold value of some control parameter such as temperature is surpassed. They occur in all manner of physical systems, from superconductors and the Big Bang to polymer mixtures. So why not in biology?
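For readers who like to tinker, here is a minimal sketch of the standard textbook magnet – the two-dimensional Ising model – in which each ‘atom’ talks only to its nearest neighbours, yet the whole lattice orders abruptly near a critical temperature. This is my own toy illustration of the kind of model physicists have in mind, not code from any of the researchers mentioned here.

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_sweep(spins, T):
    """One sweep of single-spin-flip Metropolis updates at temperature T."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        # energy cost of flipping spin (i, j), with periodic boundaries
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
              spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2 * spins[i, j] * nb
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1

L = 32
for T in (1.5, 2.27, 3.5):               # below, near and above the critical point (about 2.27)
    spins = np.ones((L, L), dtype=int)   # start fully magnetized
    for _ in range(300):                 # let the lattice settle at this temperature
        metropolis_sweep(spins, T)
    print(f"T = {T:4.2f}   |magnetization| = {abs(spins.mean()):.2f}")
```

Run it and the magnetization stays close to 1 at low temperature, collapses towards zero at high temperature, and fluctuates strongly in between – the knife-edge behaviour described below.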
The proposal of Mora and Bialek didn’t spring from nowhere. It echoes suggestions made in the 1990s that many natural systems, including some in biology (such as mass extinctions), display ‘self-organized criticality’ (SOC), meaning that they undergo disruptions and fluctuations at all possible scales of size. The archetypal example of SOC was a pile of sand, which can have avalanches of all sizes as new grains are added at the top of the slope. This wide range of fluctuation scales is just what is found at an ordinary critical point – for example, a magnet at its critical point is a patchwork of domains of all different sizes with different magnetic orientations.
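The sandpile picture can likewise be captured in a few lines. Below is a rough sketch (again my own illustration, not the original researchers’ code) of the Bak–Tang–Wiesenfeld sandpile: drop grains one at a time, let any site holding four or more grains topple onto its neighbours, and count how big each avalanche gets.

```python
import numpy as np

rng = np.random.default_rng(1)
L = 30
grid = np.zeros((L, L), dtype=int)
avalanche_sizes = []

for _ in range(20000):
    i, j = rng.integers(L, size=2)
    grid[i, j] += 1                          # drop one grain at a random site
    size = 0
    unstable = [(i, j)] if grid[i, j] >= 4 else []
    while unstable:
        x, y = unstable.pop()
        if grid[x, y] < 4:
            continue
        grid[x, y] -= 4                      # topple: shed four grains onto the neighbours
        size += 1
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < L and 0 <= ny < L:  # grains pushed off the edge are lost
                grid[nx, ny] += 1
                if grid[nx, ny] >= 4:
                    unstable.append((nx, ny))
    if size:
        avalanche_sizes.append(size)

sizes = np.array(avalanche_sizes)
print("largest avalanche:", sizes.max())
print("share of avalanches larger than 100 topplings:", round((sizes > 100).mean(), 4))
```

Once the pile has built up, the avalanches span everything from single topplings to system-wide cascades – the ‘fluctuations at all scales’ that define SOC.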
But, says Cavagna, it was never really clear that SOC had a deep connection to the older notion of critical phase transitions. The key feature of SOC is that it is indeed ‘self-organized’, which means that it will return to the critical state after a disturbance like an avalanche. So there’s no actual phase transition at all. “It’s just a point of great instability”, Cavagna says – and not one that is reached, like a true phase transition, by tuning a parameter like temperature. What’s more, says Bialek, “there was a huge amount of ideology about why criticality was a good thing in biology” – but no good argument for why. These two researchers and others are now trying to clarify what advantages criticality might confer on a wide range of biological systems, regardless of whether it is achieved by self-organization, natural selection or something else.
A critical magnet is poised on a knife-edge, where the smallest nudge can tip it into becoming wholly magnetic or non-magnetic. This knife-edge character of traditional (rather than self-organized) critical points means that it is all but impossible for a system to stay there. But the proposal of Mora and Bialek is that biological systems might benefit from operating close to critical points. This could provide access to a wide range of fluctuations involving different configurations of its components. The striking thing about near-criticality is that the rarity of specific, seemingly unlikely configurations is exactly compensated for by the fact that there are many more variants of such states than there are of common ones. “There’s a small number of very common configurations, a large number of very rare configurations, and everything in between”, says Mora. “Being close to a critical point means that you’re as likely to find yourself in any of these configurations.” As a result, he says, “being critical may confer the necessary flexibility to deal with complex and unpredictable environments.”
Another key feature of a critical system is that it is extremely responsive to disturbances in the environment, which can send rippling effects throughout the whole system. “At the critical point, everything is about to go crazy”, says physicist Jim Sethna of Cornell University. “So you get massively more sensitive behaviour.” That, Sethna says, can help a biological system to adapt very rapidly to change. The sensitivity stems from the long-ranged correlations in the behaviour of the system’s components that develop near criticality: a tweak here has an influence right over there, so that each component can ‘feel’ what all the others are doing.
Crucially, this flexibility and adaptiveness is achieved not by some incredibly complex and fragile set of interactions between the components, but by taking advantage of the universal and robust characteristics of all systems made up of many interacting components. If a system evolves to be close to critical, says Sethna, it then has something like a set of general-purpose knobs that can allow it to adapt to environmental changes without having to reconfigure its genome.
Recent work by physicists Amos Maritan and Jayanth Banavar and their coworkers gives a clearer picture of why criticality in particular is useful. They have calculated how a system of agents that can gather information about their environment, and whose fitness depends on their ability to locate the source of environmental stimuli, evolves over time. They found that such a collection of evolving cognitive agents settles naturally into a critical state. “Being poised at criticality provides the system with optimal flexibility and evolutionary advantage to cope with and adapt to a highly variable and complex environment”, says Maritan.
In effect, this critical state allows the system to ‘sense’ what is going on around it: to encode a kind of ‘internal map’ of its environment and circumstances, rather like a river network encoding a map of the surrounding topography. “A key ingredient to the success of a living system is its ability to capture relevant information from the richly varying external world, synthesizing its most prominent features into manageable maps”, says Maritan. If this is indeed a feature of a near-critical state, networks of neurons would be expected to operate in such a state, just as Mora and Bialek proposed, because what our brains ‘show us’ will then be a good approximation to what is really ‘out there’.
There’s now mounting evidence that brains really are organized this way. One signature of criticality would be long-ranged correlations between the ‘spiking’ activity of neurons – something that Bialek and his coworkers have found in their models of neural networks. These correlations mean that the state of each neuron is to some degree encoded in the state of the rest of the network, providing a mechanism for error correction and recovery of lost information.
And it’s not just all theory. Dante Chialvo of the National Council for Scientific and Technological Studies in Buenos Aires, Argentina, and colleagues have shown that dynamics characteristic of a critical state in the activity of the human brain can account for some of the key features seen in MRI brain imaging, such as the coherent operation of many neurons clustered together in space. And Nir Friedman of the University of Illinois at Urbana-Champaign and his coworkers have found that avalanches in the firing of neurons show the same kind of size–probability relationship as those in self-organized critical sand-piles. It’s not hard to imagine that this apparently general operating principle of neural networks might bring some structure to the mass of data soon to emerge from the large-scale projects recently launched in the US and Europe to map out the connectivity of the human brain.
Superfluid starlings
Responsiveness has an obvious utility to a herd or flock of animals looking out for predators: if a few individuals spot one, the rest of them can gain that information almost at once. And they do – just think of schools of fish darting around in unison to avoid sharks.
It was this sort of flocking behaviour that partly stimulated Mora and Bialek’s proposal in the first place. Theoretical modelling of flocking over the past decade or so has shown that coordinated motion requires each animal simply to respond to its nearest neighbours’ movements by trying to align itself. This is similar to the way magnetic atoms get aligned, and in fact some flocking models are directly analogous to models of magnetism. Mora, Bialek, Cavagna and their collaborators have recently shown that the graceful, orderly motion of flocks, familiar from watching starlings at twilight, is most easily maintained if the flock is close to a critical point. Further from this point, a flock might stay coordinated but loses the ability to respond quickly and coherently to outside disturbances such as predators.
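To make the magnet analogy concrete, here is a bare-bones sketch of a Vicsek-style flocking model – a generic textbook model rather than the specific one used by Mora, Bialek and Cavagna – in which each ‘bird’ keeps a constant speed and steers towards the average heading of its neighbours, with noise playing the role that temperature plays in a magnet.

```python
import numpy as np

rng = np.random.default_rng(2)
N, box, radius, speed, steps = 300, 10.0, 1.0, 0.05, 400

def alignment_after(noise):
    """Run the flock and return how well aligned the headings end up (1 = perfect order)."""
    pos = rng.uniform(0, box, size=(N, 2))
    theta = rng.uniform(-np.pi, np.pi, size=N)
    for _ in range(steps):
        # find neighbours within the interaction radius (periodic box)
        d = pos[:, None, :] - pos[None, :, :]
        d -= box * np.round(d / box)
        neigh = (d ** 2).sum(axis=-1) < radius ** 2
        # steer towards the mean heading of the neighbours, then add angular noise
        mean_sin = (neigh * np.sin(theta)[None, :]).sum(axis=1)
        mean_cos = (neigh * np.cos(theta)[None, :]).sum(axis=1)
        theta = np.arctan2(mean_sin, mean_cos) + noise * rng.uniform(-0.5, 0.5, N)
        pos = (pos + speed * np.column_stack([np.cos(theta), np.sin(theta)])) % box
    return np.hypot(np.cos(theta).mean(), np.sin(theta).mean())

for eta in (0.5, 2.5, 5.0):                # low, intermediate and high noise
    print(f"noise = {eta:3.1f}   alignment = {alignment_after(eta):.2f}")
```

At low noise the flock settles into near-perfect alignment; at high noise it is a disordered swarm; somewhere in between lies the ordering transition that, the argument goes, real flocks sit close to.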
In other words, says Cavagna, flocking isn’t just about orderly motion. Too much of it and you end up regimented like a crystal, slow to respond to anything. The responsiveness comes instead from the correlations between individuals – how one affects another.
Fine in theory. But do real flocks work this way? In a happy confluence of ideas and observations, Cavagna and his coworkers in Rome began their studies of flocking in 2010 just as Mora and Bialek were presenting their ideas on biological criticality. The Italian team found that flocks of starlings have scale-free correlations in the velocity fluctuations of individual birds. In other words, if one bird in the flock changes course, others will tend to do so too almost instantaneously, no matter how far apart they are.
Cavagna and colleagues placed video cameras on top of the National Museum of Rome in the city centre, which overlooks a major roosting site for starlings in winter. They filmed the birds during their flocking displays at dusk, and then used computer-vision methods to turn the footage into records of the three-dimensional movements of individual birds in the flock, which typically contained between a hundred and several thousand birds. They analysed this data to figure out how each bird deviated from the average velocity of the entire flock, and to measure the correlations: how closely these deviations for any pair of birds shadow each other as the distance between the pair increases.
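The correlation measurement itself is conceptually simple. As a rough sketch of the general idea (not the Rome group’s actual analysis pipeline), one subtracts the flock’s average velocity from each bird’s velocity and then asks how strongly those leftover fluctuations point the same way for pairs of birds at a given separation:

```python
import numpy as np

def correlation_vs_distance(positions, velocities, bins=20):
    """positions, velocities: arrays of shape (n_birds, 3) for a single video frame."""
    fluct = velocities - velocities.mean(axis=0)      # each bird's deviation from the flock average
    i, j = np.triu_indices(len(positions), k=1)       # every distinct pair of birds
    dist = np.linalg.norm(positions[i] - positions[j], axis=1)
    corr = (fluct[i] * fluct[j]).sum(axis=1)          # dot product of the two fluctuations
    corr /= (fluct ** 2).sum(axis=1).mean()           # normalize by the mean squared fluctuation
    edges = np.linspace(0, dist.max(), bins + 1)
    idx = np.digitize(dist, edges) - 1
    profile = [corr[idx == b].mean() if np.any(idx == b) else np.nan for b in range(bins)]
    return np.array(profile), edges

# Placeholder random data just to show the call; the real input is the 3D tracking data.
rng = np.random.default_rng(3)
profile, edges = correlation_vs_distance(rng.normal(size=(500, 3)) * 10, rng.normal(size=(500, 3)))
print(profile[:5])
```

‘Scale-free’ correlations show up as a profile that decays only over distances comparable to the size of the flock itself, however big the flock is.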
“We found that correlation was very strong”, Cavagna says. In other words, the birds seem to be tuned into one another’s movements even over scales beyond which they can see each other. The influence of one bird is transmitted to others far away through neighbour-to-neighbour interactions, in just the same way as the magnetic poles of atoms of iron in a magnet can ‘speak’ indirectly over long distances close to the critical point.
What’s more, these observations showed that realignment of the birds’ orientation as the flock changes direction spreads much faster than the standard theories of collective movement permit. This behaviour can be explained by adding an extra ingredient to the theory: a ‘symmetry rule’ which reflects the fact that all directions of flight are equivalent. With this included, it turns out that the movement of the flock becomes mathematically equivalent to that of a superfluid such as liquid helium, which can flow essentially without losing any energy through viscous drag. In other words, a flock of birds can be considered a kind of living superfluid.
Midges don’t exhibit the orderly swarming motions of birds and fish. Might they, nevertheless, display the long-ranged correlations expected on the disordered side of a critical phase transition? “Some biologists insisted there is no collective behaviour in midges”, Cavagna says, and he expected his observations to confirm that view. But after painstakingly filming the midges swarming around park landmarks, reflected in the setting sun, he and his coworkers couldn’t avoid the conclusion that there were very strong correlations here too.
“It’s physically exhausting work”, Cavagna says: lugging all the equipment into a park, filming for several hours, then immediately going back to the lab well after dusk to download the data. “Still, at least it was summer, and the Roman parks are lovely.” Filming birds is harder, he says, since they only flock in the cold winter.
But why would evolution tune midges to behave that way, given that predation isn’t an issue for them? Cavagna thinks that this might be looking at the question the wrong way. Perhaps they can’t help being near-critical. The researchers found that the reach of the correlations was always about the same size as the swarm: the bigger the swarm, the longer the correlations. So maybe the swarm size isn’t an adaptation, but is a side-effect of some other factor that determines how the midges interact. This factor – the range of neighbouring midge interactions, say – sets the correlation distance for midge motions, so that if the swarm gets bigger than that size, it will automatically shed midges.
Quick drying
The idea that biology makes use of phase transitions and their associated correlations and fluctuations could go far deeper than these large-scale networks and communities, and might be applied even at the level of individual cells and molecules. Protein molecules, for example, often carry out their functions as enzymes by switching from one shape to another. That needs to happen easily when the right signal is given, for example when another molecule binds to the protein to activate it. These conformational changes are, like phase transitions, cooperative, meaning that they involve interactions between all the component parts. Tweak this bit of a protein, and the whole thing tips into a new shape.
Cooperative transitions have also long been thought to govern the way protein chains fold up into their functional shapes in the first place. But recently David Chandler at the University of California at Berkeley and his coworkers have argued that both this process and the way several protein molecules stick together into many-component assemblies could be controlled by a transition that occurs not in the protein itself but in the water that surrounds it. They believe there may be an abrupt ‘drying transition’ in which all the water suddenly exits from the space between two water-repelling parts of proteins. Chandler argues that these drying transitions, which have been seen in computer simulations of some proteins, draw on the strong fluctuations that exist in the water, whereby the water molecules organize themselves into ever-changing regions of high or low density – not unlike a midge swarm, in fact. These fluctuations make it easier for the gap between the protein segments to tip over from a ‘wet’ to a ‘dry’ state, just as they make it easier for a critical magnet to tip over into a magnetic or non-magnetic state. Not all proteins – perhaps not even most – seem to fold or aggregate via these drying transitions. But Chandler and colleagues argue that most of them may be fine-tuned by evolution to be close to such a transition, some lying on one side of that boundary and some on the other.
Drying transitions have also been found in computer simulations of the docking of small molecules into the ‘binding cavities’ of the enzymes they activate. Some proteins in thermophilic organisms, which thrive in hot environments, have cavities lined with water-repelling chemical groups that seem poised right on the brink of expelling the water and becoming dry at the organism’s normal working temperature. The docking of the ‘plug’ into its ‘socket’ would be made easier by this ease of emptying. Meanwhile, some protein channels that sit in cell membranes and regulate the flow of other molecules or ions in and out are also poised to undergo drying transitions within their conduit pores, so that they can be easily switched from an ‘open’ state (where the water-filled pore lets dissolved substances pass) to a ‘closed’ state (where the pore is dry and denies passage).
Another benefit of being close to a phase transition has been suggested by Sethna and his colleagues. Some biological membranes are patchworks in which different types of lipid molecule are segregated into liquid-like ‘rafts’, phase-separated like immiscible droplets of oil and water. Because these patches have a wide range of fluctuating sizes, rather like the domains of a near-critical magnet, Sethna’s team argued that they are close to a critical phase transition at which the molecules become fully miscible.
They say that the value here is not in the phase transition itself, but in the domain size fluctuations that accompany it. Such fluctuations in immiscible fluids were shown in the 1980s to give rise to a force analogous to the so-called Casimir force that pulls together two closely spaced metal plates in a vacuum. The normal Casimir force is caused by electromagnetic fluctuations in the vacuum, themselves a consequence of quantum physics: because the size of these fluctuations is restricted between the plates, this produces a pressure that draws them together. Likewise, constraints on the ‘near-critical’ fluctuations of lipid patches between protein molecules embedded in the membrane give rise to a ‘critical Casimir’ attraction that might help molecules to bind together and trigger chemical reactions involved in cell signalling. In effect, says Sethna, it means that proteins at the membrane surface can talk to each other via the lipid rafts. “Here again criticality allows the system to access structures over a wide range of scales”, says Mora.
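For a sense of scale, the conventional Casimir force between ideal metal plates follows the textbook formula P = π²ħc/(240 d⁴), where d is the separation. This is a numerical aside on the vacuum version of the effect only, not on the membrane calculation itself:

```python
import math

hbar = 1.054571817e-34    # reduced Planck constant, J s
c = 2.99792458e8          # speed of light, m/s

def casimir_pressure(d):
    """Attractive pressure (Pa) between ideal parallel plates separated by d metres."""
    return math.pi ** 2 * hbar * c / (240 * d ** 4)

for d_nm in (10, 100, 1000):
    print(f"{d_nm:5d} nm separation: {casimir_pressure(d_nm * 1e-9):10.3g} Pa")
```

The pull is roughly an atmosphere at a 10-nanometre separation and utterly negligible at everyday distances, which is why such fluctuation forces matter only for things – plates, proteins, lipid patches – that are very close together.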
The physics of evolution
Phase transitions and criticality might turn out to be important in the operation of gene networks, which currently seem absurdly baroque and yet somehow generate stable and robust organisms. Bialek and coworkers recently reported an indication of criticality in the gene regulatory network that determines the spatial patterning of the fruit fly embryo – the so-called gap gene network. They found long-ranged correlations in the fluctuations of gene expression levels at well-separated parts of the embryo. It’s possible that these critical-like fluctuations might help to improve the signal-to-noise ratio of the information transmission in the regulatory network.
Mora and Bialek have suggested that phase transitions in the ‘information space’ that relates a protein’s sequence to its shape and function through the collective interactions of its chemical building blocks might account for the appearance of distinct ‘families’ of protein structures. This would imply that the evolution of protein sequences (and hence gene sequences) is significantly constrained by the limited number of ‘stable states’ in sequence space – in other words, that nature’s profusion is regulated by an order even deeper than natural selection.
In fact, not only does evolution seem likely to make use of phase transitions – it might actually be one. Chemist Manfred Eigen, who won the 1967 Nobel Prize for his work on fast chemical reactions, has argued that natural selection appears in a system of self-replicating, information-bearing entities as an abrupt phase transition at certain threshold values of the rates of replication and mutation. In other words, it is not just ‘something that happens’ in reproducing systems, but is a physical law that arises from the way information itself is organized. In Eigen’s theory, neutral selection – in which mutations get fixed in a population even though they have no adaptive benefit – injects fluctuations analogous to those at a critical point. These are essential to prevent natural selection from getting ‘stuck’ in minor valleys of the evolutionary landscape – or as a physicist might say, to prevent the system settling into a metastable phase, which is provisionally stable but not the optimal arrangement of the components. That would fit with the recent suggestion of evolutionary biologist John Tyler Bonner at Princeton University that the random fluctuations of neutral evolution could account for the immense variety of forms found in organisms such as diatoms.
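Eigen’s transition can be glimpsed in back-of-envelope form. In the simplest textbook version of his quasispecies model (my own simplification, not Eigen’s full theory), a fast-replicating ‘master’ sequence of length L with fitness advantage sigma persists only while the per-base mutation rate stays below roughly ln(sigma)/L – and its equilibrium share collapses abruptly at that threshold:

```python
import math

def master_fraction(mu, L=100, sigma=10.0):
    """Equilibrium fraction of perfect 'master' copies, ignoring back-mutation."""
    Q = (1 - mu) ** L                    # probability of copying the whole sequence without error
    return max(0.0, (sigma * Q - 1) / (sigma - 1))

print(f"approximate error threshold: {math.log(10.0) / 100:.3f} mutations per base")
for mu in (0.005, 0.015, 0.022, 0.024, 0.030):
    print(f"mu = {mu:.3f}   master fraction = {master_fraction(mu):.3f}")
```

Below the threshold the master sequence dominates the population; just above it, the population dissolves into an unselected cloud of mutants – selection switching off rather like a phase changing.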
Criticality and the critics
“I knew from the beginning that I wanted to do something in between physics and biology”, says Bialek. The question is, he says, “can you talk about these things that biologists usually study in the way that physicists do?” He suspected “that there’s some collection of phenomena that people didn’t realise were related to each other, or some part of the biological world that nobody has looked at from a physicist’s point of view” – in other words, the big question was “whether aspects of particular [biological] models can be derived from some more general principle.” If Bialek and Mora are right, criticality could emerge as one such general principle.
But these ideas have yet to be embraced by most biologists, whose agenda is often now dominated by fine details rather than a search for over-arching principles. Getting these ideas a hearing in biology is likely to be a struggle. “There’s a big difference in culture”, says Sethna. “Biologists tend to be skeptical of anything that involves a lot of math.” In an effort to bridge this ‘two cultures’ divide, in 2010 Bialek spearheaded an interdisciplinary centre called the Initiative for the Theoretical Sciences at the City University of New York, where he is now director. Here physicists can discuss these ideas with neuroscientists, ecologists and other biologists – Cavagna was recruited as a visiting professor last year, and has been collaborating with Bialek and Mora to refine the understanding of critical flocking. But it will take time and patience, both to figure out how widely phase transitions and criticality really are used in biology, and to persuade life scientists that, as Sethna puts it, cells, and perhaps proteins, animals and entire ecosystems, “do a lot of interesting physics.”
Friday, April 25, 2014
Theatre of the Invisible
I gave this talk yesterday at the meeting Performing Science: Dialogues Across Cultures at the University of Lincoln. It seemed brief enough to put up here.
_____________________________________________________
The actor David Garrick had a set-piece during his performances of Hamlet, the role for which he was most famous, that electrified London theatre audiences in the eighteenth century. It came when the ghost enters at the start of the play. According to the St. James’s Chronicle in 1772, “As no Writer in any Age penned a Ghost like Shakespeare, so, in our Time, no Actor ever saw a Ghost like Garrick.” The German scientist Georg Christoph Lichtenberg wrote that “His whole demeanour is so expressive of terror that it made my flesh creep even before he began to speak.”
Garrick is shown in the midst of this tour-de-force in a contemporaneous print (Figure 1). Doesn’t it seem here as if his hair is actually rising from his scalp? And in fact, it really is. But not even Garrick could raise his hair at will. He achieved the spine-tingling effect (which goes by the splendid name of horripilation) with the aid of a London wig-maker named Perkins, who created a mechanical wig powered by hydraulics.
Figure 1 David Garrick as Hamlet, on seeing his father’s ghost in Act I. Mezzotint after a painting by Benjamin Wilson, 1756.
This wasn’t just a cheap trick. Garrick’s approach to what was then seen as naturalistic performance was informed by a Cartesian view of human physiology, in which the body was regarded as a kind of hydraulic mechanism driven by fluids called animal spirits that were pumped around the organs and limbs. Within this view, an artificial hydraulic wig was little different from the way real horripilation was thought to work by a rush of fluids to the head. Like all emotion, it was simply a matter of biomechanics.
But there is another defence of Garrick’s potentially absurd ‘fright wig’: he needed all the help he could get, because he’d set himself the task of conjuring the illusion of the ghost by gesture alone. Whereas previously the dead king was generally played by an actor, Garrick insisted that he should be invisible: a disembodied voice whose presence was seen only by the actors. But theatrical invisibility is a difficult trick – as film makers later discovered, it needs visible signifiers to sustain the illusion.
Garrick’s choice represented a decision not just about staging but about what the ghost in Hamlet – on which the plot of course turns – truly means. It is a statement about how the entire play should be interpreted. Because we’re then forced to ask: is this a real spirit, or just a figment of Hamlet’s tortured mind?
Partly this is a question about the significance of ghosts in Shakespeare’s time. But I want to locate this issue of the visibility of his ghosts within a wider debate about appearance, illusion and spectacle in theatre. Because it is my contention that, from the eighteenth to the early twentieth centuries in particular, science – and particularly optical science – became strongly linked to theatre, stage magic and the advent of cinema, in ways that were as much thematic as they were instrumental.
Ghosts were a common, even clichéd sight on the Elizabethan stage. They served as narrators, popping up to fill in a bit of back-story. As such, they were no cause for alarm in either implication or appearance, being represented by a sort of Jack-in-the-box puppet, or else by an actor with whitened face, dressed in clothes made of furry leather. They were a device borrowed from the plays of Seneca, which supplied a model for the revival of tragedy during the Renaissance. The Senecan ghost typically appeared in the prologue, calling for an act of revenge that motivated the play’s tragic plot.
But the ghost in Hamlet is no glove puppet. He’s made to sound hardly less terrible to the audience than he is to Hamlet and his friends: the sight “harrows me with fear and wonder”, gasps Horatio. That’s what Shakespeare did to the theatrical ghost: he made it real, humanized, haunting and disquieting. His spirits are really spooky, and in some ways they represent a supernatural stage presence that has never been equalled.
The Senecan ghost is merely a “bit of dramatic machinery”. But ghosts in Shakespeare, and in some of the Jacobean plays that came after, leave the audience guessing. Indeed, they leave the characters guessing: what sort of apparition is this? This is a question about what ghosts meant in the popular superstition of the time. The answer wasn’t simple, but we can at least say that it was determined largely by your religion. Catholics believed that the souls of the dead reside for a time in Purgatory before being admitted (if they warrant it) to heaven. This gave souls a period in which to haunt the living. But Protestants rejected the idea of Purgatory – which makes it puzzling how a dead soul can feature in what is undoubtedly a Protestant play. Might, then, the ghost be a demon masquerading as the king, to provoke Hamlet into acts of slaughter and, indirectly, Ophelia into sinful suicide?
This was the choice, it seems: ghosts were either dead souls, or they were demons – or maybe angels. All were real entities; as the Shakespeare scholar Robert Hunter West has said, when these plays were first performed “Englishmen were seriously aware in a way that we are not of an invisible world about them.” Around this time there was a vigorous debate about the meaning and status of ghosts, and several learned books were published that attempted to provide them with a taxonomy.
One of the most influential was by the theologian Noel Taillepied, called A Treatise of Ghosts. Taillepied claimed that the souls of the departed may be returned to earth by God to deliver a message. Shakespearian ghosts indeed do always have motives and messages to impart, and sometimes only the intended recipients can see them, or at least hear them. The notion of a ghost who, like Banquo in Macbeth, haunts the guilty party alone was well established in folk tradition. If we are inclined to attribute this now to the fevered imaginings of a guilty conscience, we shouldn’t imagine that Shakespeare was in contrast blindly literal – the powers of invocation and agency attributed to the imagination in the late Renaissance leave no clear distinction between a ghost being a projection of the mind and an objective phenomenon.
Ghosts didn’t, as one might expect, go out of fashion with the alleged rationalism of the Enlightenment. Certainly in popular superstition they remained as present as ever, as the famous Cock Lane Ghost of London in the mid-eighteenth century attested. That case ended in a prosecution for fraud, after investigation by a committee that included Samuel Johnson. But Johnson himself remained a firm believer in ghosts, even if not in this particular one.
What changes in our perceptions of the spirit world is not the question of whether it exists but of what it means. In the nineteenth century, the rise of spiritualism saw ghosts become sources not so much of terror as of consolation: mediums offered the opportunity to speak with the souls of the departed loved ones. And what is most striking in this period, certainly for the purposes of this meeting, is how ideas about invisible beings and unseen spirit worlds co-evolve with the development of science and technology, and also with the traditions of the theatre.
For one thing, spiritualist séances were undoubtedly pieces of theatre in themselves, designed to astonish and confound their audiences and prepared with a great deal of stagecraft (Figure 2). Here is an account by William Crookes, one of the many scientists who tried to subject spiritualism to scientific investigation, of a séance conducted in 1871 by the famous medium Daniel Dunglas Home:
"At first we had rough manifestations, chairs knocked about, the table floated 6 inches from the ground and then dashed down, loud and unpleasant noises bawling in our ears and altogether phenomena of a low class. After a time it was suggested that we should sing, and as the only thing known to all the company, we struck up ‘For he’s a jolly good fellow’. The chairs, tables and things on it kept up a sort of anvil accompaniment to this. After that D. D. Home gave us a solo – rather a sacred piece – and almost before a dozen words were uttered Mr Herne was carried right up, floated across the table and dropped with a crash of pictures and ornaments at the other end of the room. My brother Walter, who was holding one hand, stuck to him as long as he could, but he says Herne was dragged out of his hand as he went across the table."
The group was subsequently treated to accordions playing themselves, floating lights, books dashed about and disembodied hands stroking their faces. The effect must surely have been overwhelming – both exciting and frightening, and doubtless calculated to inhibit objective assessment.
Figure 2 Victorian séances involved many strange goings-on that relied on carefully prepared and executed illusionistic trickery.
And as Crookes’ case shows, many scientists were taken in by all this – not simply because they were credulous, but because they surely wanted to believe. And also because some of them felt that they had more reason than ever to do so. The invention of the telegraph in the 1830s and 40s showed that it was possible to send messages instantly over immense distances, even spanning the Atlantic once the cables had been laid in the 1860s. With the appearance of the telephone a decade later, it became possible to hear voices directly over such a distance. And in the 1890s, the development of radio broadcasting by Marconi and Oliver Lodge meant that these signals didn’t even need a wire to convey them – they could be sent through the invisible ether. Many scientists figured that, if it was possible to hear the voice of someone who wasn’t physically present, it was not so hard to imagine that one might also hear the voices of those who were not even alive. Spiritualism was even sometimes called celestial telegraphy, and wireless broadcasting led people to suspect that the ether was a vast, invisible sea filled with all manner of voices, coming from who knew where. Rudyard Kipling made this analogy in his 1902 short story “Wireless”, in which some early radio hams pick up random messages from ships offshore while in the same building a man feverish from consumption acts as a human receiver for snatches of poetry by Keats that he picks up from some unknown and perhaps long dead source.
These speculations got another boost from the discovery of X-rays in 1895 (Figure 3) – an invisible form of radiation like light, but of a shorter wavelength. Perhaps thoughts might be transferred from person to person, or from the dead to the living, by similar invisible rays sent through the ether?
Figure 3 The X-ray image taken by Wilhelm Röntgen of his wife’s hand, c.1895.
And as this image shows, the technology of photography, devised in the 1830s, could make these invisible rays visible – this is the rather spooky image taken by the discoverer of X-rays, Wilhelm Röntgen, of his wife’s hand, and when she saw it she is said to have exclaimed “I have seen my death!” From its earliest days, photography seemed to be as much about revealing the invisible as documenting the visible. Because the surface of glass plates used to hold the emulsion could preserve faint images of an earlier exposure, some early photographers found that ghostly figures sometimes appeared in their images when the plates were reused. It was soon decided that these were spirits, and ghost photography became a lucrative business in the late nineteenth century. One of the first entrepreneurs of this business was an American named William Mumler, who set up a ‘spirit photography’ business in Boston and New York (Figure 4).
Figure 4 Abraham Lincoln’s shade consoling his widow, in a “spirit photograph” taken by William Mumler. The Lincolns were enthusiasts of Spiritualism, and were said to have conducted séances in the White House.
Even when scientists explained how such double exposures were easy to fake, it did little to diminish the popularity of the genre, for in its mysterious ability to capture the instant and to solidify intangible light, photography seemed virtually a supernatural medium itself. Didn’t it, after all, convey a weird kind of immortality – and paradoxically, by doing so, remind the sitter that death awaits us all?
It’s quite natural that one of the first uses of photography would be to make invisible beings visible. For optical technology has always been closely allied with magic, and also with the theatre. It was long thought capable of revealing what went otherwise unseen, particularly spirits, souls and demons. The camera obscura, the forerunner of the photographic camera, in which natural scenes are projected through a small opening into a darkened space (Figure 5), had been known since at least the eleventh century, and was popularized in the sixteenth-century manual of natural magic by the Italian Giambattista della Porta (who was also a popular dramatist). By the early seventeenth century mountebanks were using such devices to astonish audiences.
Figure 5 The camera obscura, as depicted in Athanasius Kircher’s Great Art of Light and Shadow (1646).
Looking-glasses that produce figures “at a distance in the air” also featured in the magic lantern, an early form of projector that became a stalwart device of optical natural magic. It was described by the Jesuit inventor and mystical philosopher Athanasius Kircher in 1646: light is passed through an image painted onto glass and then through a lens before falling onto a screen (Figure 6). By the time Kircher was writing, magic lanterns were becoming commercialized. The Danish mathematician Thomas Walgensten traveled across Europe selling these lanterns and using them purportedly to summon ghosts.
Figure 6 The magic lantern, as shown by Kircher.
The magical stage spectacles of the late eighteenth century straddled this ambiguous boundary. The German illusionist Johann Georg Schröpfer held séances in his Leipzig coffee shop in which he used the magic lantern, projected onto smoke, to summon ghosts. Schröpfer’s performances were perhaps the first ‘entertainment séances’, and his techniques were copied by the German Paul Philidor, whose popular public displays in the early 1790s were unashamedly eye-catching and became known as “phantasmagoria” (Figure 7). Subsequently, Étienne Gaspard Robertson used magic-lantern back-projection in his “Fantascope” shows, in which, by mounting the device on wheels, he could make the projection grow rapidly larger or smaller so that ghouls and demons might seem to rush upon the terrified audience.
Figure 7 An advertising bill for the Phantasmagoria show of Paul Philidor in 1801.
Robertson explicitly sought to scare his public with visions of ghosts and devils (Figure 8): he was in effect producing the first horror films. He was in fact a professor of physics with a special interest in optics, who realised the commercial potential of optical trickery when he attended one of Philidor’s extravaganzas. And although he made no pretence of possessing magical abilities, he exploited his specialist knowledge while artfully keeping his audiences guessing about what they were seeing.
Figure 8 The light show of Étienne Gaspard Robertson amazes and terrifies an audience in the early nineteenth century.
The most famous illusionistic ghost of the stage also comes from this collusion of science demonstration and pure theatre. In the mid-nineteenth century, the Royal Polytechnic Institution in London put on magic and séance shows to demonstrate how paranormal activities could be faked. One of the lecturers was the chemist and science popularizer John Henry Pepper, who later set up his own “Theatre of Popular Science and Entertainment” at the Egyptian Hall in London. Pepper collaborated with the engineer Henry Dircks in the late 1850s to create a technique for projecting the reflection of a hidden actor onto a huge, slanted sheet of glass: a semi-transparent apparition perfect for depicting ghosts (Figure 9). Plays featuring ‘Pepper’s ghost’, including Hamlet, Macbeth and A Christmas Carol, became sensations throughout Europe and the US.
Figure 9 Pepper’s ghost.
The Egyptian Hall was the centre of theatrical magic and scientific illusion in the nineteenth century. Perhaps the most famous residency was that of John Nevil Maskelyne, a watchmaker who began the foremost dynasty of British stage magicians (and who was, incidentally, the inventor of the pay toilet) (Figure 10). In 1905 Maskelyne and a group of other British magicians founded the Magic Circle, dedicated to the art of stage magic and illusion. Like many of these stage magicians, Maskelyne was also a debunker of spiritualists and mystics claiming special powers.
Figure 10 A playbill for the illusion and magic show of John Nevil Maskelyne in the late nineteenth century.
This role of illusionism is clear from Albert Allis Hopkins’ now classic 1898 manual of magic, in which the American amateur magician Henry Ridgely Evans proclaimed that “Science has laughed away sorcery, witchcraft, and necromancy.” Hopkins shows how stage magicians of the Victorian era made avid use of the newest scientific discoveries. He said that X-rays, discovered only two years before the book was published, “are now competing with the most noted mediums in the domain of the marvellous.” Hopkins describes a trick in which a man dining alone is suddenly cast into darkness, whereupon he vanishes and the audience sees, seated across the table, a glowing skeleton, lit up by a hidden X-ray generator (Figure 11).
Figure 11 A glowing, macabre dinner guest is conjured up using X-rays (from the generator on the right) to stimulate luminescence from a skeleton painted in a phosphorescent material, as depicted in Albert Hopkins’ 1898 book of stage magic.
The elaborate illusionism of the theatrical light-show found a new home in the early days of cinematography. In the late 1880s Thomas Edison began to create a kind of electrical magic lantern called the Kinetoscope, which displayed a series of still images in rapid succession – viewed through a peephole – to create the illusion of movement. In 1894 he opened a Kinetoscope parlour in New York, where for a few cents one could watch the first motion pictures, each lasting a minute or so. Meanwhile, the Lumière brothers turned the magic lantern into a portable, manually operated movie projector called the Cinématographe that threw the image onto a screen. A Parisian audience watched the first public screening in 1895.
In the audience for that premiere was the Frenchman Georges Méliès, who had developed his own form of illusionistic magic at the Paris theatre he owned. He promptly bought a movie camera and started making films himself. Many of these used his existing stage tricks, supplemented by the new illusionistic possibilities that cinematography offered. He made 78 films in 1896 alone, and over 500 during the next two decades. Several of them were ghost films, sometimes aimed more at slapstick than chills (Figure 12).
Figure 12 A scene from Georges Méliès’ comedy The Apparition, or Mr Jones’ Experience with a Ghost (1903).
Given this genealogy of cinema, it is no surprise that marvels soon took over. Films of ghostly and supernatural phenomena weren’t simply an early genre of cinema – they were its natural subject, for the motion picture should properly be seen not so much as “celluloid theatre” but as celluloid magic. Jacques Derrida seemed to discern this when in 1982 he called cinema “the art of ghosts, a battle of phantoms.”
What ought we to conclude from all of this?
First, that the first marriage of science and theatre happened in the arena of the magical and the illusory, and in particular in the disputed area where science and folk belief have vied for authority over the invisible.
Second, that science and technology have long had a performative aspect that was particularly prominent in the late eighteenth and the nineteenth centuries, and which involved a delicate interplay between explanation, mystification and spectacle, of the kind that I sense still persists in the Royal Institution Christmas Lectures.
Third, cinema should perhaps be a stronger part of this discourse, in the sense that its relationship to theatre, particularly in terms of its genesis, becomes much clearer once we acknowledge the close associations with optical technologies and illusionism.
And finally, I think, we should be reminded here of the role of imagination, which, both in science and in theatre, is needed to span the gulf of what isn’t known or cannot be expressed. Imagination is rarely spoken of today in science, but in a famous 1870 essay “Scientific Use of the Imagination”, John Tyndall argued that via the imagination “we can lighten the darkness which surrounds the world of our senses.” It is in its capacity to permit and depict imaginative leaps that theatre can help to illuminate and perhaps even extend some of the meanings of science.
The actor David Garrick had a set-piece during his performances of Hamlet, the role for which he was most famous, that electrified London theatre audiences in the eighteenth century. It came when the ghost enters at the start of the play. According to the St. James Chronicle in 1772, “As no Writer in any Age penned a Ghost like Shakespeare, so, in our Time, no Actor ever saw a Ghost like Garrick.” The German scientist Georg Christoph Lichtenberg wrote that “His whole demeanour is so expressive of terror that it made my flesh creep even before he began to speak.”
Garrick is shown in the midst of this tour-de-force in a contemporaneous print (Figure 1). Doesn’t it seem here as if his hair is actually rising from his scalp? And in fact, it really is. But not even Garrick could raise his hair at will. He achieved the spine-tingling effect (which goes by the splendid name of horripilation) with the aid of a London wig-maker named Perkins, who created a mechanical wig powered by hydraulics.
Figure 1 David Garrick as Hamlet, on seeing his father’s ghost in Act I. Mezzotint after a painting by Benjamin Wilson, 1756.
This wasn’t just a cheap trick. Garrick’s approach to what was then seen as naturalistic performance was informed by a Cartesian view of human physiology, in which the body was regarded as a kind of hydraulic mechanism driven by fluids called animal spirits that were pumped around the organs and limbs. Within this view, an artificial hydraulic wig was little different from the way real horripilation was thought to work by a rush of fluids to the head. Like all emotion, it was simply a matter of biomechanics.
But there is another defence of Garrick’s potentially absurd ‘fright wig’: he needed all the help he could get, because he’d set himself the task of conjuring the illusion of the ghost by gesture alone. Whereas previously the dead king was generally played by an actor, Garrick insisted that he should be invisible: a disembodied voice whose presence was seen only by the actors. But theatrical invisibility is a difficult trick – as film makers later discovered, it needs visible signifiers to sustain the illusion.
Garrick’s choice represented a decision not just about staging but about what the ghost in Hamlet – on which the plot of course turns – truly means. It is a statement about how the entire play should be interpreted. Because we’re then forced to ask: is this a real spirit, or just a figment of Hamlet’s tortured mind?
Partly this is a question about the significance of ghosts in Shakespeare’s time. But I want to locate this issue of the visibility of his ghosts within a wider debate about appearance, illusion and spectacle in theatre. Because it is my contention that, from the eighteenth to the early twentieth centuries in particular, science – and particularly optical science – became strongly linked to theatre, stage magic and the advent of cinema, in ways that were as much thematic as they were instrumental.
Ghosts were a common, even clichéd sight on the Elizabethan stage. They served as narrators, popping up to fill in a bit of back-story. As such, they were no cause for alarm in either implication or appearance, being represented by a sort of Jack-in-the-box puppet, or else by an actor with whitened face, dressed in clothes made of furry leather. They were a device borrowed from the plays of Seneca, which supplied a model for the revival of tragedy during the Renaissance. The Senecan ghost typically appeared in the prologue, calling for an act of revenge that motivated the play’s tragic plot.
But the ghost in Hamlet is no glove puppet. He’s made to sound hardly less terrible to the audience than he is to Hamlet and his friends: the sight “harrows me with fear and wonder”, gasps Horatio. That’s what Shakespeare did to the theatrical ghost: he made it real, humanized, haunting and disquieting. His spirits are really spooky, and in some ways they represent a supernatural stage presence that has never been equalled.
The Senecan ghost is merely a “bit of dramatic machinery”. But ghosts in Shakespeare, and in some of the Jacobean plays that came after, leave the audience guessing. Indeed, they leave the characters guessing: what sort of apparition is this? This is a question about what ghosts meant in the popular superstition of the time. The answer wasn’t simple, but we can at least say that it was determined largely by your religion. Catholics believed that the souls of the dead reside for a time in Purgatory before being admitted (if they warrant it) to heaven. This gave souls a period in which to haunt the living. But Protestants rejected the idea of Purgatory – which makes it puzzling how a dead soul can feature in what is undoubtedly a Protestant play. Might, then, the ghost be a demon masquerading as the king, to provoke Hamlet into acts of slaughter and, indirectly, Ophelia into sinful suicide?
This was the choice, it seems: ghosts were either dead souls, or they were demons – or maybe angels. All were real entities; as the Shakespeare scholar Robert Hunter West has said, when these plays were first performed “Englishmen were seriously aware in a way that we are not of an invisible world about them.” Around this time there was a vigorous debate about the meaning and status of ghosts, and several learned books were published that attempted to provide them with a taxonomy.
One of the most influential was by the theologian Noel Taillepied, called A Treatise of Ghosts. Taillepied claimed that the souls of the departed may be returned to earth by God to deliver a message. Shakespearian ghosts indeed do always have motives and messages to impart, and sometimes only the intended recipients can see them, or at least hear them. The notion of a ghost who, like Banquo in Macbeth, haunts the guilty party alone was well established in folk tradition. If we are inclined to attribute this now to the fevered imaginings of a guilty conscience, we shouldn’t imagine that Shakespeare was in contrast blindly literal – the powers of invocation and agency attributed to the imagination in the late Renaissance leave no clear distinction between a ghost being a projection of the mind and an objective phenomenon.
Ghosts didn’t, as one might expect, go out of fashion with the alleged rationalism of the Enlightenment. Certainly in popular superstition they remained as present as ever, as the famous Cock Lane Ghost of London in the mid-eighteenth century attested. That case ended in a prosecution for fraud, after investigation by a committee that included Samuel Johnson. But Johnson himself remained a firm believer in ghosts, even if not in this particular one.
What changes in our perceptions of the spirit world is not the question of whether it exists but of what it means. In the nineteenth century, the rise of spiritualism saw ghosts become sources not so much of terror as of consolation: mediums offered the opportunity to speak with the souls of the departed loved ones. And what is most striking in this period, certainly for the purposes of this meeting, is how ideas about invisible beings and unseen spirit worlds co-evolve with the development of science and technology, and also with the traditions of the theatre.
For one thing, spiritualist séances were undoubtedly pieces of theatre in themselves, designed to astonish and confound their audiences and prepared with a great deal of stagecraft (Figure 2). Here is an account by William Crookes, one of the many scientists who tried to subject spiritualism to scientific investigation, of a séance conducted in 1871 by the famous medium Daniel Dunglas Home:
"At first we had rough manifestations, chairs knocked about, the table floated 6 inches from the ground and then dashed down, loud and unpleasant noises bawling in our ears and altogether phenomena of a low class. After a time it was suggested that we should sing, and as the only thing known to all the company, we struck up ‘For he’s a jolly good fellow’. The chairs, tables and things on it kept up a sort of anvil accompaniment to this. After that D. D. Home gave us a solo – rather a sacred piece – and almost before a dozen words were uttered Mr Herne was carried right up, floated across the table and dropped with a crash of pictures and ornaments at the other end of the room. My brother Walter, who was holding one hand, stuck to him as long as he could, but he says Herne was dragged out of his hand as he went across the table."
The group was subsequently treated to accordions playing themselves, floating lights, books dashed about and disembodied hands stroking their faces. The effect must surely have been overwhelming – both exciting and frightening, and doubtless calculated to inhibit objective assessment.
Figure 2 Victorian séances involved many strange goings-on that relied on carefully prepared and executed illusionistic trickery.
And as Crookes’ case shows, many scientists were taken in by all this – not simply because they were credulous, but because they surely wanted to believe. And also because some of them felt that they had more reason than ever to do so. The invention of the telegraph in the 1830s and 40s showed that it was possible to send messages instantly over immense distances, even spanning the Atlantic once the cables had been laid in the 1860s. With the appearance of the telephone a decade later, it became possible to hear voices directly over such a distance. And in the 1890s, the development of radio transmission by Marconi and Oliver Lodge meant that these signals didn’t even need a wire to convey them – they could be sent through the invisible ether. Many scientists figured that, if it was possible to hear the voice of someone who wasn’t physically present, it was not so hard to imagine that one might also hear the voices of those who were not even alive. Spiritualism was even sometimes called celestial telegraphy, and wireless transmission led people to suspect that the ether was a vast, invisible sea filled with all manner of voices, coming from who knew where. Rudyard Kipling made this analogy in his 1902 short story “Wireless”, in which some early radio hams pick up random messages from ships offshore while in the same building a man feverish from consumption acts as a human receiver for snatches of poetry by Keats that he picks up from some unknown and perhaps long-dead source.
These speculations got another boost from the discovery of X-rays in 1895 (Figure 3) – an invisible form of radiation like light, but of a shorter wavelength. Perhaps thoughts might be transferred from person to person, or from the dead to the living, by similar invisible rays sent through the ether?
Figure 3 The X-ray image taken by Wilhelm Röntgen of his wife’s hand, c.1895.
And as this image shows, the technology of photography, devised in the 1830s, could make these invisible rays visible – this is the rather spooky image taken by the discoverer of X-rays, Wilhelm Röntgen, of his wife’s hand, and when she saw it she is said to have exclaimed “I have seen my death!” From its earliest days, photography seemed to be as much about revealing the invisible as documenting the visible. Because the surface of glass plates used to hold the emulsion could preserve faint images of an earlier exposure, some early photographers found that ghostly figures sometimes appeared in their images when the plates were reused. It was soon decided that these were spirits, and ghost photography became a lucrative business in the late nineteenth century. One of the first entrepreneurs of this business was an American named William Mumler, who set up a ‘spirit photography’ studio in Boston and New York (Figure 4).
Figure 4 Abraham Lincoln’s shade consoling his widow, in a “spirit photograph” taken by William Mumler. The Lincolns were enthusiasts of Spiritualism, and were said to have conducted séances in the White House.
Even when scientists explained how such double exposures were easy to fake, it did little to diminish the popularity of the genre, for in its mysterious ability to capture the instant and to solidify intangible light, photography seemed virtually a supernatural medium itself. Didn’t it, after all, convey a weird kind of immortality – and paradoxically, by doing so, remind the sitter that death awaits us all?
It’s quite natural that one of the first uses of photography would be to make invisible beings visible. For optical technology has always been closely allied with magic, and also with the theatre. It was long thought capable of revealing what went otherwise unseen, particularly spirits, souls and demons. The camera obscura, the forerunner of the photographic camera, in which natural scenes are projected through a small opening into a darkened space (Figure 5), had been known since at least the eleventh century, and was popularized in the sixteenth-century manual of natural magic by the Italian Giambattista della Porta (who was also a popular dramatist). By the early seventeenth century mountebanks were using such devices to astonish audiences.
Figure 5 The camera obscura, as depicted in Athanasius Kircher’s Great Art of Light and Shadow (1646).
Looking-glasses that produce figures “at a distance in the air” also featured in the magic lantern, an early form of projector that became a stalwart device of optical natural magic. It was described by the Jesuit inventor and mystical philosopher Athanasius Kircher in 1646: light is passed through an image painted onto glass and then through a lens before falling onto a screen (Figure 6). By the time Kircher was writing, magic lanterns were becoming commercialized. The Danish mathematician Thomas Walgensten traveled across Europe selling these lanterns and using them purportedly to summon ghosts.
Figure 6 The magic lantern, as shown by Kircher.
The magical stage spectacles of the late eighteenth century straddled this ambiguous boundary. The German illusionist Johann Georg Schröpfer held séances in his Leipzig coffee shop in which he used the magic lantern, projected onto smoke, to summon ghosts. Schröpfer’s performances were perhaps the first ‘entertainment séances’, and his techniques were copied by the German Paul Philidor, whose popular public displays in the early 1790s were unashamedly eye-catching and became known as “phantasmagoria” (Figure 7). Subsequently, Étienne Gaspard Robertson used magic-lantern back-projection in his “Fantascope” shows, in which, by mounting the device on wheels, he could make the projection grow rapidly larger or smaller so that ghouls and demons might seem to rush upon the terrified audience.
Figure 7 An advertising bill for the Phantasmagoria show of Paul Philidor in 1801.
Robertson explicitly sought to scare his public with visions of ghosts and devils (Figure 8): he was in effect producing the first horror films. He was in fact a professor of physics with a special interest in optics, who realised the commercial potential of optical trickery when he attended one of Philidor’s extravaganzas. And although he made no pretence of possessing magical abilities, he exploited his specialist knowledge while artfully keeping his audiences guessing about what they were seeing.
Figure 8 The light show of Étienne Gaspard Robertson amazes and terrifies an audience in the early nineteenth century.
The most famous illusionistic ghost of the stage also comes from this collusion of science demonstration and pure theatre. In the mid-nineteenth century, the Royal Polytechnic Institution in London put on magic and séance shows to demonstrate how paranormal activities could be faked. One of the lecturers was the chemist and science popularizer John Henry Pepper, who later set up his own “Theatre of Popular Science and Entertainment” at the Egyptian Hall in London. Pepper collaborated with the engineer Henry Dircks in the late 1850s to create a technique for projecting the reflection of a hidden actor onto a huge, slanted sheet of glass: a semi-transparent apparition perfect for depicting ghosts (Figure 9). Plays featuring ‘Pepper’s ghost’, including Hamlet, Macbeth and A Christmas Carol, became sensations throughout Europe and the US.
Figure 9 Pepper’s ghost.
The Egyptian Hall was the centre of theatrical magic and scientific illusion in the nineteenth century. Perhaps the most famous residency was that of John Nevil Maskelyne, a watchmaker who began the foremost dynasty of British stage magicians (and who was, incidentally, the inventor of the pay toilet) (Figure 10). In 1905 Maskelyne and a group of other British magicians founded the Magic Circle, dedicated to the art of stage magic and illusion. Like many of these stage magicians, Maskelyne was also a debunker of spiritualists and mystics claiming special powers.
Figure 10 A playbill for the illusion and magic show of John Nevil Maskelyne in the late nineteenth century.
This role of illusionism is clear from Albert Allis Hopkins’ now classic 1898 manual of magic, in which the American amateur magician Henry Ridgely Evans proclaimed that “Science has laughed away sorcery, witchcraft, and necromancy.” Hopkins shows how stage magicians of the Victorian era made avid use of the newest scientific discoveries. He said that X-rays, discovered only two years before the book was published, “are now competing with the most noted mediums in the domain of the marvellous.” Hopkins describes a trick in which a man dining alone is suddenly cast into darkness, whereupon he vanishes and the audience sees, seated across the table, a glowing skeleton, lit up by a hidden X-ray generator (Figure 11).
Figure 11 A glowing, macabre dinner guest is conjured up using X-rays (from the generator on the right) to stimulate luminescence from a skeleton painted in a phosphorescent material, as depicted in Albert Hopkins’ 1898 book of stage magic.
The elaborate illusionism of the theatrical light-show found a new home in the early days of cinematography. In the late 1880s Thomas Edison began to create a kind of electrical magic lantern called the Kinetoscope, which displayed a series of still images in rapid succession to create the illusion of movement. In 1894 he opened a Kinetoscope parlour in New York, where for a few cents one could watch the first motion pictures, each lasting a minute or so. Meanwhile, the Lumière brothers turned the magic lantern into a portable, manually operated movie projector called the Cinématographe that threw the image onto a screen. A Parisian audience watched the first public screening in 1895.
In the audience for that premiere was the Frenchman Georges Méliès, who had developed his own form of illusionistic magic at the Paris theatre he owned. He promptly bought a movie camera and started making films himself. Many of these used his existing stage tricks, supplemented by the new illusionistic possibilities that cinematography offered. He made 78 films in 1896 alone, and over 500 during the next two decades. Several of them were ghost films, sometimes aimed more at slapstick than chills (Figure 12).
Figure 12 A scene from Georges Méliès’ comedy The Apparition, or Mr Jones’ Experience with a Ghost (1903).
Given this genealogy of cinema, it is no surprise that marvels soon took over. Films of ghostly and supernatural phenomena weren’t simply an early genre of cinema – they were its natural subject, for the motion picture should properly be seen not so much as “celluloid theatre” but as celluloid magic. Jacques Derrida seemed to discern this when in 1982 he called cinema “the art of ghosts, a battle of phantoms.”
What ought we to conclude from all of this?
First, that the earliest marriage of science and theatre happened in the arena of the magical and the illusory, and in particular in the disputed area where science and folk belief have vied for authority over the invisible.
Second, that science and technology have long had a performative aspect that was particularly prominent in the late eighteenth and the nineteenth centuries, and which involved a delicate interplay between explanation, mystification and spectacle, of the kind that I sense still persists in the Royal Institution Christmas Lectures.
Third, cinema should perhaps be a stronger part of this discourse, in the sense that its relationship to theatre, particularly in terms of its genesis, becomes much clearer once we acknowledge the close associations with optical technologies and illusionism.
And finally, I think, we should be reminded here of the role of imagination, which, both in science and in theatre, is needed to span the gulf of what isn’t known or cannot be expressed. Imagination is rarely spoken of today in science, but in a famous 1870 essay “Scientific Use of the Imagination”, John Tyndall argued that via the imagination “we can lighten the darkness which surrounds the world of our senses.” It is in its capacity to permit and depict imaginative leaps that theatre can help to illuminate and perhaps even extend some of the meanings of science.
Wednesday, April 23, 2014
Is music just about sex?
This piece (after editing) has just gone live on BBC Future. I don’t want to knock this PRSB paper, which reports intriguing findings. But please, journalists, a bit of proportion, even (especially?) with this steamy subject matter. For one thing, what exactly is it you will be imagining if I were to say to you “I’m going to play you a piece of music, and I want you to imagine having sex with the composer…”?
____________________________________________________________________
Humans have made music for more than 40,000 years – the age of the earliest known instruments, flutes made from hollow animal bones. But no one knows why. Of all the theories that have been proposed, one of the most enduring and alluring comes from Charles Darwin, who suggested that it’s all about sex. “Musical notes and rhythm”, he wrote in The Descent of Man (1871), “were first acquired by the male and female progenitors of mankind for the sake of charming the opposite sex.”
Darwin’s idea was motivated partly by analogy with bird song, which does indeed often function to attract mates. But not only is there still debate about whether bird song qualifies as “music” in the same sense as human song; there has also been little reason to suppose that humans, too, use music primarily for courtship.
Now psychologist Benjamin Charlton of the University of Sussex in Brighton, England, offers some evidence to support this sexual-selection hypothesis. He has found that women’s sexual preferences for composers change during their menstrual cycle, and that they prefer composers of more complex music – who might be construed as more capable mates – at the most fertile point of the cycle [1].
OK, don’t all shout at once – yes, there is a lot to argue over here. But let’s start at the beginning: what’s so special about music?
In answering that question, two things stand out. First, there are no cultures known that lack music – even if they lack a written language. It is as close to a universal human trait as you could hope for. Second, music – unlike, say, cooking, farming, talking, raising a family – doesn’t obviously have any benefit. Of course, it does have a benefit: we love it, it makes us joyful or transports us into tears, rapture and dance. But there’s no obvious, tangible result of music that we can definitely link to any evolutionary advantage.
It’s no wonder, then, that the question of the origins of music has excited such passionate debate. There is evidently something here that is crucial to human existence – we seemingly can’t do without music – but it’s awfully hard to say why, not least because music began way before recorded history. There is no shortage of ideas [2]. Some think that music began as a way of fostering social cohesion, a ‘tribal’ role that still persists today. Others say that it began in the sing-song of mother-to-infant communication, an exaggeration of tones called “motherese” that people all over the world practice. Others think that music and language were once merged into a composite form of communication dubbed “musilanguage”, from which music split as a vehicle of the emotions while language became all about semantic meaning.
But Darwin’s notion of music as an agent of sexual selection remains a favourite, not least because it has his name attached. Darwin regarded sexual selection as an adjunct of natural selection: it was “survival of the sexiest”, regardless of whether the sexual attributes had any other survival benefits. In this view, skill at singing and making music functioned like the peacock’s tail: useless, even an impediment, but attention-catching.
But it’s conceivable that such sexual displays do offer honest clues about the bearer’s “good genes”. The male peacock might be saying “I’m so ripped that I can survive even when encumbered with this absurd thing.” Likewise, a musician able to make complex and beautiful music might be displaying his or her (but usually his) superior skills of cognition, dexterity, stamina and all-round fabulousness. Falling for a musician then makes good evolutionary sense.
The link between sex and music might seem indisputable. Rock musicians have gaggles of sexually available fans at the height of their fertility, and no one made the guitar more explicitly phallic than Jimi Hendrix. (This is no modern phenomenon – Franz Liszt’s recitals set women swooning too.) There’s some anecdotal reason to think that music production declines after sex – Miles Davis attested that musicians are often celibate before big concerts, to retain their ‘edge’. And in case you’re thinking that being a musician didn’t do much for the survival prospects of Hendrix, Jim Morrison or Kurt Cobain, bear in mind that – as Darwin himself pointed out – some male birds drop dead from exhaustion when singing in the breeding season. It’s worth the risk for the sake of becoming a sexual beacon (and after all, Hendrix did father three children).
A sexual-selection origin of music might also help to explain the apparent impulse towards diversity, creativity and novelty, for many male songbirds also develop large repertoires and variety in an effort to produce the most alluring mating signal. And doesn’t the excess of the peacock’s tail – the result of a well-attested runaway tendency in selection of sexual characteristics – seem to speak to the towering stacks of amplifiers and speakers, the pyrotechnics, the outrageous costumes? In short, mightn’t it explain the phenomenon that is Kiss?
But this is part of the problem with Darwin’s idea: it is just too alluring, inviting “evidence by anecdote”. These aren’t much more than Just So stories, and culturally specific ones at that. Songs in pre-literate cultures are by no means the tribal equivalents of ‘Let’s Spend the Night Together’: those of the Australian Aborigines, for example, express the singer’s feelings as a member of the community. Most Western music in the Middle Ages was practised by (supposedly) celibate monks. And in some African societies, musicians are regarded as lazy and unreliable, and so poor marriage material. (Hmm… Pete Doherty, anyone?)
Besides, hard scientific evidence for sexual selection in music has been scant and equivocal. For example, one study in 2000 reported that, in classical concerts, there were significantly more women in the seats nearer the (predominantly male) orchestras than in the back rows – a genteel form, it was suggested, of the female hysteria that greeted the Beatles in concert [3].
If women do pick sexual partners on the basis of creative or artistic traits, one would expect changes in their preferences during peak fertility (irrespective of whether baby-making is actually on the agenda). A study in 2006 did find that men apparently showing higher “creative intelligence” were favoured at this time [4]. Charlton reasoned that the complexity of a male composer’s music might be considered an indicator of his creativity and capacity for learning complex behaviour, and so this too might affect female sexual choice. He has previously found that ovulation doesn’t seem to affect women’s preferences for complexity of music per se [5]. But what about the composers themselves?
Charlton recruited a group of 1,465 adult women participants for his web-based survey, and divided them into those at low and high risk of conception at the time of testing, based on what they reported about their reproductive cycle. He played them several short melodies, composed for the experiment, of varying degrees of complexity. First he asked some of the participants to rate the melodic complexity, to ensure that they could do this reliably. Then he asked a different group which of the supposed (male) composers of a pair of melodies of different complexity they would prefer as a short- or long-term sexual partner. A significant number showed a greater preference for the “more complex” composer – but only in the high-conception-risk group, and only as a short-term partner (implying sex right now, when the chance of conception is high).
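For readers who like to see what such a between-groups comparison looks like in practice, here is a minimal, purely illustrative sketch in Python. The responses are simulated, and the group sizes and preference rates are my own assumptions for demonstration – this is not Charlton’s data or analysis code – but it shows the basic logic: test whether the proportion choosing the “more complex” composer differs between the low- and high-conception-risk groups.

# Illustrative sketch only: simulated responses, not Charlton's dataset or analysis code.
# The group sizes and preference rates below are assumptions made for demonstration.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)

n_low, n_high = 700, 765        # hypothetical split of the ~1,465 participants
p_low, p_high = 0.50, 0.58      # assumed rates of choosing the "more complex" composer

chose_low = rng.binomial(n_low, p_low)
chose_high = rng.binomial(n_high, p_high)

# 2 x 2 contingency table: rows = conception-risk group, columns = chose complex / chose simpler
table = [[chose_low, n_low - chose_low],
         [chose_high, n_high - chose_high]]

chi2, p_value, dof, _ = chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, p = {p_value:.4f}")
# A small p-value would suggest that preference for the "more complex" composer
# differs between the low- and high-conception-risk groups.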
Now, numbers are numbers: it seems that something connected to the reproductive cycle was indeed changing preferences in that situation. But what? The findings, Charlton says, “support the contention that women use (or ancestrally used) the ability of male composers to create complex music as criteria for male choice.” That would in turn suggest that musical complexity itself arose from an “arms race” in which male musicians increasingly strove to prove their prowess and woo a mate. Charlton suggests that future work might examine whether the sexual preferences also work for a reversal of sexes, with women making the music. It would be interesting to find out, although there is no reason to suppose that sexual selection is gender-symmetrical, and in fact in general it is not. The fact that most music is produced by men [6] is actually what you’d expect for a sexually selected trait [7] (although that’s not to say that it’s an explanation).
Yet while Charlton’s findings are intriguing, there are many reasons not to jump to conclusions. For example, the most complex music, according to some measures [8], is Indonesian gamelan, which is among the most social, devotional and non-sexualized of all world music.
There is little reason to think that music has displayed a steady trend towards greater complexity. And it is very hard to untangle a listener’s preferences for a composer from their preferences for the actual music. The latter in general shows a peak of ‘preferred complexity’, beyond which preference declines (as the Beatles’ music got steadily more complex, their sales declined [9]). And this is even before we get into the murky issue of how cultural overlays will colour the assumptions that women might make about fictitious composers, based on a tiny snippet of ‘their’ tunes. More work required, then – or in other words, if music be the food of love, play on!
References
1. B. D. Charlton, Proceedings of the Royal Society B, advance online publication (2014). [Here] [Be patient - this might take a while to become live on the PRSB site...]
2. N. L. Wallin, B. Merker & S. Brown (eds), The Origins of Music. MIT Press, Cambridge, 2000.
3. V. A. Sluming & J. T. Manning, Evolution and Human Behavior 21, 1-9 (2000). [Here]
4. M. Haselton & G. Miller, Human Nature 17, 50-73 (2006). [Here]
5. B. D. Charlton, P. Filippi & W. T. Fitch, PLoS ONE 7, e35626 (2012). [Here]
6. G. F. Miller, in ref. 2, pp. 329-360. [preprint available here]
7. D. M. Buss & D. P. Schmitt, Psychological Review 100, 204-232 (1993). [Here]
8. H. D. Jennings, P. Ch. Ivanov, A. M. Martins, P. C. da Silva & G. M. Viswanathan, http://arxiv.org/abs/cond-mat/0312380 (2003).
9. T. Eerola & A. C. North, in Proceedings of the 6th International Conference of Music Perception and Cognition, eds C. Woods, G. Luck, R. Brochard, F. Seddon & J. Sloboda. Keele University, 2000.
Friday, April 18, 2014
Whatever happened to beautiful instruments?
Have scientific instruments lost their soul? In preparing a schools talk for next week on beautiful experiments, I have been perusing the images online at the very fabulous Museo Galileo in Florence, where I once spent a very happy afternoon. Here are just a few of the very lovely instruments and apparatus that scientists used to use, which are far more beautiful than they really had any call to be. These days scientists have to make do with stuff like this:
which no doubt does the job, but does it inspire you? Below is what I’d like to see return – not the devices themselves, but the spirit in which they were made. Why shouldn’t labs be beautiful?
Thursday, April 17, 2014
Hey hey mama
It gladdens my heart to see Jimmy Page with his double-neck guitar on the pages of a science magazine, even in Italian. So it is with the March-April issue of Sapere, where the second of my “music instinct” columns has now appeared. Here it is.
____________________________________________________________
Attempts to explain how music moves us generally have only one big idea on which to draw. But it’s a good one, and is surely a big part of the answer. When in 1956 the musicologist and composer Leonard Meyer published his book Emotion and Meaning in Music, he was one of the first people to move beyond the cool, formal analysis of musical structure and try to get at why music can make us dance, jump for joy, or break down in tears.
Meyer suggested that it’s all to do with setting up expectations and then violating or postponing their resolution. We think the music is going to do one thing, but it does another – or perhaps it does what we expect, but not quite when we expect it. The unexpected creates a feeling of tension, which might be experienced as excitement. And if that tension is then released, say by the final closing chord of a piece, we feel all the more satisfaction from the delayed gratification. Even the simple rallentando slowing at the end of a Chopin prelude will work that magic.
I’ll give several examples in the forthcoming columns of how this violation of expectation can be played with to raise the emotional temperature, sometimes with exquisite results. Here I want to look at rhythm. This is one of the easiest ways to set up an expectation, because we expect rhythm almost by definition to be repetitive and predictable.
So when it isn’t, we get a thrilling shock. The classic example is Stravinsky’s Rite of Spring, in particular the “Dance of the Adolescents” section. A repeated chord beats away in an insistent pulse – but with an emphasis that shifts with every bar, first on the second beat of the bar, then the first, first again, then second… We never guess when it is coming, so each time it delivers an electrifying jolt.
These unexpected emphases enliven all sorts of music – in jazz, they appear as syncopation, where the beat seems to jump in early and make the rhythm swing. But there are other ways of playing with rhythmic expectation too. Take Led Zeppelin’s song “Black Dog”, where the instrumental riff sounds easy until you try to play it. What’s going on – have they added an extra beat or something? But no, John Bonham’s drums are still ticking away four beats to the bar. The surprising complexity comes from the fact that the guitar riff doesn’t actually fit into this four-beat bar – it has an extra half note. So as it is repeated, it begins and ends in a different place in each bar. The result of these imperfectly overlapping rhythmic structures is disorientating in a passage you’d expect to be simple. In that way it forces us to pay attention, and gives the song a kind of coiled tension and urgency. Stravinsky, I like to think, would have approved.
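If you want to see the arithmetic behind that drift, here is a toy sketch – not a transcription of the riff, and the overrun of half a beat is purely an assumed figure for illustration. The point is general: any riff whose length is not a whole multiple of the bar will start in a different place on each repetition before eventually coming back into line.

# Toy illustration only: the real "Black Dog" riff is more intricate, and the
# 4.5-beat length is an assumption for demonstration, not a transcription.
BEATS_PER_BAR = 4
RIFF_LENGTH_BEATS = 4.5   # hypothetical riff length: one bar plus half a beat

start = 0.0
for repetition in range(1, 10):
    offset = start % BEATS_PER_BAR            # where in the bar this repetition begins
    print(f"repetition {repetition}: riff starts on beat {offset + 1:g}")
    start += RIFF_LENGTH_BEATS
# The starting beat cycles through 1, 1.5, 2, 2.5, ... and only returns to beat 1
# after eight repetitions, which is why the riff seems to slide against the
# steady four-to-the-bar drum pattern.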