Saturday, May 26, 2012
Some books to browse
The Browser has run an interview in which I recommend five books connected to Curiosity. (I cheat – one is a multi-volume series.) And I just discovered that my short talk on patterns at the NY ‘Wonder Cabinet’ Survival of the Beautiful is now online, along with a little post-talk interview.
Friday, May 25, 2012
Buckled up
I have written a story for Physical Review Focus, of which the pre-edited version is below. There’s more on this topic in my book Shapes, and out of sheer Friday-night generosity I reproduce some of it below too.
__________________________________________________________________
Some of nature’s most delicate forms and patterns, such as the fluted head of a daffodil and the convoluted labyrinths of fingerprints, are created by buckling and wrinkling. These deformations are a response to internal stress, for example as a sheet of soft tissue grows while being constrained at its boundaries. Old paint wrinkles as the paint film swells in some places but stays pinned to the surface below in others.
Because buckling and wrinkling patterns can be highly regular and predictable, they could provide a way of creating complex structures by spontaneous self-organization. In a paper in Physical Review Letters [1], Nicholas Fang at the Massachusetts Institute of Technology in Cambridge and coworkers describe a way of controlling the buckling shapes in small tubes of soft material, and show that they can explain theoretically how the pattern evolves as the tube dimensions are altered.
“These patterns are lovely to look at”, says Michael Marder, a specialist on nonlinear dynamics at the University of Texas at Austin, “and if the ability to control patterns is not yet at the level of control that is likely to interest engineers, it’s a promising step forward.”
“Mechanical buckling has long been suggested as a means of pattern formation in biological tissues”, says mathematician Alan Newell of the University of Arizona, who has previously advanced this as an explanation for the spiral phyllotaxis patterns of leaves and florets. “What’s good about this work is that they do a precise experiment and their results tend to agree with simple theories.”
To ‘grow’ a deformable material so as to induce buckling, Fang’s team use a polymer gel that swells when it absorbs water. They explored tubular geometries not only because these are conceptually simple but because they are relevant to some natural buckling structures, such as the bronchial passage, which may become swollen and wrinkled in asthmatics.
The researchers used a microfabrication technique to make short tubes with diameters (D) of several millimetres, and with various wall thicknesses (t) and lengths (h). The tubes were fixed at one end to a solid substrate, creating the constraint that drives buckling. To induce swelling beginning at the free end of the tube, the researchers inverted the tubes in oil and let the ends poke into a layer of water below.
Swelling deformed the tubes into truncated cone shapes, which might then buckle into a many-pointed star-shaped cross-section. In general, the shorter the tubes (the smaller the ratio h/D), the more wrinkles there were around the tube circumference. Surprisingly, the wall thickness had relatively little influence: tubes with the same h/D tended to have a similar buckled shape regardless of the wall thickness.
To understand that, Fang and colleagues used a simple model to calculate the shape that minimizes the total elastic energy of a tube. Buckling costs elastic energy around the circumference, but it can also reduce the elastic energy due to outward bending of the tube as it swells. For a given set of parameters, the two contributions balance to minimize the total energy for a particular number of buckles – which turns out to depend only on h/D and not wall thickness. The experimental results mapped well onto these theoretical predictions of the most stable mode of deformation.
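For readers who like to tinker, here is a minimal numerical sketch in Python of that energy balance. The two energy terms and the prefactors k_circ and k_bend are my own illustrative guesses, not the functional forms used by Lee et al.; the point is simply that minimizing the sum of a circumferential cost that grows with the number of buckles and a bending relief that shrinks with it selects an integer mode that depends only on h/D.

```python
def total_elastic_energy(n, h_over_D, k_circ=1.0, k_bend=1.0):
    # Toy energy for a swollen tube buckled into n lobes. The
    # circumferential term grows with n; the outward-bending energy is
    # relieved by more lobes. These forms are illustrative guesses,
    # not the expressions in the paper.
    circumferential_cost = k_circ * n**2 * h_over_D
    bending_relief = k_bend / (n**2 * h_over_D)
    return circumferential_cost + bending_relief

def preferred_mode(h_over_D, n_max=20):
    # The integer number of buckles (at least 2) minimizing the toy energy.
    return min(range(2, n_max + 1),
               key=lambda n: total_elastic_energy(n, h_over_D))

for ratio in (0.02, 0.05, 0.2, 1.0):
    print(f"h/D = {ratio}: preferred number of buckles = {preferred_mode(ratio)}")
```

Note that no wall thickness appears anywhere: in this toy balance, as in the full analysis, shorter tubes (smaller h/D) buckle into more lobes, and the selected mode is indifferent to t.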
Fang, who usually works on photonic structures for controlling the flow of light, hopes that these regular buckles and wrinkles might offer ways of channeling and directing light and sound waves by scattering. “The basic idea is that the wrinkled structure could absorb or scatter the light field or acoustic waves in a directional way”, he says. “We’re currently testing such structures for applications in ultrasound-mediated drug delivery.”
The results may have implications for natural systems too. Fang says it’s no coincidence that the buckled gel rings resemble slices of bell pepper, for example. “Bell peppers can be considered as a tubular structure that grow under constraints from the ends”, he says. “Often we find a slice of slender peppers display a triangle shape and that of short and squat peppers appear in square or even star-like. Our model suggests that these patterns are determined by the ratio of length to diameter.” The team also thinks that the results might elucidate the buckling patterns of corals and brain tissue.
Xi Chen of Columbia University in New York, who has studied the buckling pattern on the surfaces of fruits and vegetables, is not yet convinced by this leap. “It’s not yet clear where the rather strict constraint on swelling – the key for obtaining the shapes described in their paper – comes from in nature. It’s interesting work but there’s still a large gap before it could be applied directly to natural systems.”
Newell raises a more general note of caution: similarities of form and pattern between an experiment and a theory are just suggestive and not conclusive proof that one explains the other. “To say that the pattern you observe is indeed the one produced by the mechanism you suggest requires one to test the dependence of the pattern on the parameters relevant to the model”, he says. “In this case, the authors test the h/D ratio dependence but it would also have been good to see the dependence of the outcomes on various of the elastic parameters.”
Reference
1. Howon Lee, Jiaping Zhang, Hanqing Jiang, and Nicholas X. Fang, "Prescribed Pattern Transformation in Swelling Gel Tubes by Elastic Instability", Phys. Rev. Lett. 108, 214304 (2012).
From Shapes:
Buckling might conceivably also explain the surface patterning of some fruits and vegetables, such as pumpkins, gourds, melons and tomatoes. These have soft, pulpy flesh confined by a tougher, stiffer skin. Some fruits have smooth surfaces that simply inflate like balloons as they grow, but others are marked by ribs, ridges or bulges that divide them into segments (Fig. 1a). According to Xi Chen of Columbia University in New York, working in collaboration with Zexian Cao in Beijing and others, these shapes could be the result of buckling.
Fig. 1 Real (a) and modelled (b) buckled fruit shapes
This is a familiar process in laminates that consist of a skin and core with different stiffness: think, for example, of the wrinkling of a paint film stuck to wood that swells and shrinks. Under carefully controlled conditions, this process can generate patterns of striking regularity (Figure 2).
Fig. 2 Wrinkles in a thin metal film attached to a rubbery polymer
Chen and colleagues performed calculations to predict what will happen if the buckling occurs not on a flat surface but on spherical or ovoid ones (spheroids). They found well-defined, symmetrical patterns of creases in a thin, stiff skin covering the object’s surface, which depend on three key factors: the ratio of the skin thickness to the width of the spheroid, the difference in stiffness of the core and skin, and the shape of the spheroid – whether, say, it is elongated (like a melon or cucumber) or flattened (like a pumpkin).
The calculations indicate that, for values of these quantities comparable to those that apply to fruits, the patterns are generally either ribbed – with grooves running from top to bottom – or reticulated (divided into regular arrays of dimples), or, in rare cases, banded around the circumference (Figure 3). Ribs that separate segmented bulges are particularly common in fruit, being seen in pumpkins, some melons, and varieties of tomato such as the striped cavern or beefsteak. The calculations show that spheroids shaped like such fruits may have precisely the same number of ribs as the fruits themselves (Figure 1).
Fig. 3 Buckling shapes on spheroids as a function of geometry
For example, the 10-rib pattern of Korean melons remains the preferred state for a range of spheroids with shapes like those seen naturally. That’s why the shape of a fruit may remain quite stable during its growth (as its precise spheroidal profile changes), whereas differences of, say, skin thickness would generate different features in different fruits with comparable spheroidal forms.
Chen suggests that the same principles might explain the segmented shapes of seed pods, the undulations in nuts such as almonds, wrinkles in butterfly eggs, and even the wrinkle patterns in the skin and trunk of elephants. So far, the idea remains preliminary, however. For one thing, the mechanical behaviour of fruit tissues hasn’t been measured precisely enough to make close comparisons with the calculations. And the theory makes some unrealistic assumptions about the elasticity of fruit skin. So it’s a suggestive argument, but far from proven. Besides, Chen and his colleagues admit that some of the shaping might be influenced by subtle biological factors such as different growth rates in different parts of the plant, or direction-dependent stiffness of the tissues. They argue, however, that the crude mechanical buckling patterns could supply the basic shapes that plants then modify. As such, these patterns would owe nothing to evolutionary fine-tuning, but would be as inevitable as the ripples on a desert floor.
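Purely as an illustration, the qualitative trends just described can be caricatured in a few lines of Python. The thresholds below are invented for the sketch and do not reproduce Chen and Cao’s computed phase diagram; they merely encode the three controlling quantities named above.

```python
def buckling_pattern(skin_thickness, spheroid_width,
                     skin_stiffness, core_stiffness, aspect_ratio):
    """Caricature of buckling-pattern selection on a stiff-skinned spheroid.

    The inputs mirror the three controlling factors described in the
    text: relative skin thickness, skin/core stiffness contrast, and
    spheroid shape. All thresholds here are invented for illustration.
    """
    thickness_ratio = skin_thickness / spheroid_width
    stiffness_contrast = skin_stiffness / core_stiffness

    if thickness_ratio > 0.05 or stiffness_contrast < 10:
        return "smooth (skin too thick or too compliant to buckle)"
    if aspect_ratio >= 1.3:                  # elongated: melon, cucumber
        return "ribbed (grooves from pole to pole)"
    if aspect_ratio <= 0.8:                  # flattened: pumpkin-like
        return "ribbed, occasionally banded"
    return "reticulated (regular array of dimples)"

# A pumpkin-like case: thin, stiff skin over soft flesh, flattened shape
print(buckling_pattern(skin_thickness=0.2, spheroid_width=25.0,
                       skin_stiffness=100.0, core_stiffness=1.0,
                       aspect_ratio=0.7))
```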
I daresay Figure 2 may have already put you in mind of another familiar pattern too. Don’t those undulating ridges and grooves bring to mind the traceries at the tips of your fingers (Figure 4)? Yes indeed; and Alan Newell of the University of Arizona has proposed that these too might be the product of buckling as the soft tissue is compressed during our early stages of growth.
Fig. 4 A human fingerprint
About ten weeks into the development of a human foetus, a layer of skin called the basal layer starts to grow more quickly than the two layers – the outer epidermis and the inner dermis – between which it is sandwiched. This confinement gives it no option but to buckle and form ridges. Newell and his colleague Michael Kücken have calculated what stress patterns will result in a surface shaped like a fingertip, and how the basal layer may wrinkle up to offer maximum relief from this stress.
The buckling is triggered and guided by tiny bumps called volar pads that start to grow on the foetal fingertips after seven weeks or so. The shape and positions of volar pads seem to be determined in large part by genetics – they are similar in identical twins, for instance. But the buckling that they produce contains an element of chance, since it depends (among other things) on slight non-uniformities in the basal layer. The American anatomist Harold Cummins, who studied volar pads in the early twentieth century, commented presciently on how they influence the wrinkling patterns in ways that cannot be fully foreseen, and which echo universal patterns elsewhere: “The skin possesses the capacity to form ridges, but the alignments of these ridges are as responsive to stresses in growth as are the alignments of sand to sweeping by wind or wave.” Newell and Kücken found that the shape of the volar pads governs the print patterns: if they are highly rounded, the buckling generates concentric whorls, whereas if the pads are flatter, arch-shaped ridges are formed (Figure 5). Both of these are seen in real fingerprints.
Fig. 5 Whorls (top) and arches (bottom) in a model of fingerprint patterns.
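Kücken and Newell’s calculation itself is rather involved, but the flavour of this kind of stripe-forming buckling instability can be captured by a standard toy model, the Swift–Hohenberg equation. The little spectral solver below is my own sketch of that generic mechanism, not their actual model; all the parameters (r, grid size, time step) are illustrative. Starting from small random non-uniformities – the element of chance mentioned above – it settles into labyrinthine ridges reminiscent of fingerprints.

```python
import numpy as np

# Minimal spectral solver for the Swift-Hohenberg equation,
#   du/dt = r*u - (1 + laplacian)^2 u - u^3,
# a standard toy model for stripe-forming instabilities such as
# buckling ridges. Illustrative sketch only, not the Kucken-Newell model.

N, L = 128, 64.0                 # grid points and domain size
r = 0.2                          # control parameter, past pattern onset
dt, steps = 0.5, 2000

k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
kx, ky = np.meshgrid(k, k)
k2 = kx**2 + ky**2
lin = r - (1 - k2)**2            # linear operator in Fourier space

rng = np.random.default_rng(0)
u = 0.01 * rng.standard_normal((N, N))   # small random 'non-uniformities'

for _ in range(steps):
    # Semi-implicit step: stiff linear part implicitly, cubic term explicitly
    u_hat = np.fft.fft2(u) + dt * np.fft.fft2(-u**3)
    u = np.real(np.fft.ifft2(u_hat / (1 - dt * lin)))

print("pattern amplitude (r.m.s.):", u.std())   # nonzero: ridges have formed
```

In a more faithful model, the rounded or flattened volar pads would enter through the fingertip’s curvature and boundary conditions, biasing the ridges into whorls or arches.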
Wednesday, May 23, 2012
It's not only opposites that attract
My latest news article for Nature is here – it was altered very little in the editing, so I shan’t include the original. Was this problem really not put to bed in the nineteenth century? Apparently not. But an awful lot gets forgotten in the annals of science…
Slippery slopes
I often used to get asked if the image on the cover of the UK version of Critical Mass, shown here, is real (it is). It was a great choice by Heinemann, a perfect visual metaphor for the kind of social group behaviour that the book discusses. Now the science I presented there has been extended to include the very phenomenon depicted. In a paper in Physical Review E, Thomas Holleczek and Gerhard Tröster of ETH in Zurich present an agent-based model of ‘skiing traffic’ – a modification of the pedestrian and traffic models that motivate the early part of my book, which incorporates the physics of skiing into the rules of motion of the agents. The researchers aim, among other things, to develop a model that can predict congestion on ski slopes, so that appropriate safety measures such as slope widening can be undertaken. It’s a complex problem, because the trajectories of skiers depend on a host of factors: the slope and friction of the snow, for example, may determine how many turns they make. To my eye, the work is still at a preliminary stage: the matching of model predictions with observed data for skier density, speed and number of turns still leaves something to be desired, at least in the cases studied. But it is already becoming clear which factors most strongly govern these things, and the discrepancies will feed back into better model parameterization. There’s more on the work here.
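For the curious, here is a toy sketch in Python of what an agent-based skier model can look like. The rules and every parameter below are mine, invented for illustration, and are far simpler than Holleczek and Tröster’s physics: each agent accelerates down the fall line under gravity, loses speed to friction, carves periodic turns, and veers away from skiers just downhill of it.

```python
import math
import random

G = 9.81      # gravitational acceleration, m/s^2
MU = 0.05     # snow friction coefficient (illustrative)
DT = 0.1      # time step, s

class Skier:
    def __init__(self, x):
        self.x, self.y = x, 0.0        # y increases down the slope
        self.v = 5.0                   # speed, m/s
        self.heading = 0.0             # angle away from the fall line, rad
        self.turn = random.choice((-1, 1))

    def step(self, slope, others):
        # Gravity drives motion along the fall line; friction opposes it
        a = G * math.sin(slope) * math.cos(self.heading) - MU * G * math.cos(slope)
        self.v = max(0.5, self.v + a * DT)
        # Carving rule: sweep across the fall line, reversing at the turn apex
        self.heading += 0.3 * self.turn * DT
        if abs(self.heading) > 0.6:
            self.turn = -self.turn
        # Crude avoidance: veer away from anyone a few metres directly below
        for o in others:
            if o is not self and abs(o.x - self.x) < 2.0 and 0.0 < o.y - self.y < 5.0:
                self.heading += 0.2 * DT * (1 if self.x > o.x else -1)
        self.x += self.v * math.sin(self.heading) * DT
        self.y += self.v * math.cos(self.heading) * DT

random.seed(1)
skiers = [Skier(x=random.uniform(0.0, 30.0)) for _ in range(10)]
for _ in range(600):                   # one minute of simulated descent
    for s in skiers:
        s.step(slope=math.radians(15), others=skiers)
print("mean speed (m/s):", sum(s.v for s in skiers) / len(skiers))
```

Even a caricature like this shows why slope gradient and friction set the number of turns, and how congestion emerges where avoidance manoeuvres pile up.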
Monday, May 21, 2012
Last word
One of the sections of New Statesman I most enjoy is This England, which supplies little snippets of the poignantly weird, stupid and ridiculous kinds of behaviour that this scepter’d isle seems to produce in such abundance. I thought I had found a candidate in this week’s New Scientist, until I saw that it comes from New South Wales. It is one of the questions for The Last Word, and it made me laugh out loud:
“Could anyone explain why I can hold an electric fence between finger and thumb and feel only a tiny pulse in the finger, yet my wife can touch the same spot and feel a much larger pulse through her whole arm? If I touch my wife with one finger on my other hand as I hold the fence, I feel a solid shock through both arms and across my chest and my wife feels a massive shock leaving her shaking and weak. Footwear type does not seem to play a role.”
That last line is a stroke of sheer genius. Well, can anyone indeed explain it? One is tempted to imagine it has something to do with living in Australia, but somehow I don’t have too much difficulty seeing this sort of thing going on in our own beloved South Wales either.
Thursday, May 17, 2012
Galileo versus Bacon?
Andrew Robinson gives me a kind review in this week’s New Scientist (not available free online, but Andrew has put it on his website here). But he’s not convinced by aspects of my thesis, specifically with the following quote:
“It is [Francis] Bacon’s picture (derived from the natural magic tradition) and not Galileo’s (which drew as much on scholastic deduction from theorem and axiom as it did on observation), that conditioned the emergence of experimental, empirical science.”
Against this, Andrew contrasts Einstein saying that Galileo was “the father of modern physics – indeed, of modern science altogether” because he was the first to insist that “pure logical thinking cannot yield us any knowledge of the empirical world.”
The first thing to notice is that these two statements of what Galileo actually did are entirely compatible, when read carefully. In that sense, Einstein’s comment does not at all disavow mine.
But more revealing is the fact that Andrew has chosen to bring Einstein’s authority to bear. Now, it happens that I am writing about Einstein at the moment, which is reaffirming my deep respect for his wisdom as well as his scientific acumen. But one thing Einstein is not is a historian of science. And that is important not just because it means Einstein made no deep, careful analysis of the evolution of science but because his position is precisely the one that scientists for the past hundred years or more have loved to assert, making Galileo their idol and the model of the “modern scientist”. Historians of science today adopt a much more nuanced position. Moreover, while it is true that in Einstein’s time there were still some science historians who pushed this Whiggish line that I criticize in my book, scholarship has moved on. In other words, while Einstein is so often considered the ultimate arbiter of all things scientific (a role that Stephen Hawking is unfortunately now often awarded instead), this is one situation in which his opinion is decidedly amateur. (I am an amateur here too, of course, but my view is one that many professionals have already laid out. I don't make any great claim to originality here.)
All the same, there is certainly a debate to be had about the relative influences of the Baconian versus the Galilean (or for that matter, Aristotelian, Cartesian, Newtonian and Boylean) approaches to science. I’d hope my book can help a little to stimulate that discussion, and I’m glad Andrew brings it to the fore.
Tuesday, May 15, 2012
Curiouser...
So: reviews. I read somewhere recently that some writers still feel it the proper thing, if not never to read them, then at least not to respond to them. But where’s the fun in that? However, I’m fortunate in having, so far, little to respond to: the Literary Review (not online) and the Daily Telegraph both liked Curiosity. I think it is fair to say that the Scotsman on Sunday and the Sunday Telegraph liked some things about it too, but had reservations about others. I can’t quibble with Doug Johnstone in the former (not least because he was so kind about The Music Instinct), since his reservations are a matter of taste: he wanted less history and more science (which is one reason I’m pleased when I’m not described as a science writer), and found some bits boring. (If you’re looking for science, the late Renaissance court scene isn’t likely to be your thing.) Noel Malcolm in the Sunday Telegraph offers, as one would expect of him, a very well informed opinion. And he makes a good point: that a key transition was that of objects from being representations of something else to being material things deserving study for their own sake. I have not particularly tackled that issue, and I’m not aware that anyone else has, at least to the point of generating a solid story about it.
Malcolm points out that science historians have been, for some time, saying much of what I’m saying. This is true, and I’m not sure how I could have made it plainer in the book (it is first said in the Preface…) without sounding repetitious. My only real gripe, though, is with his suggestion that Frances Yates is my “leading authority”. The kindest word I can find for this notion is “bizarre”. It is so off the mark that I have to suspect there is some other agenda here (if I have “leading authorities”, they are the likes of Lorraine Daston, Katharine Park, Steven Shapin, Simon Schaffer, Mary Baine Campbell, Catherine Wilson, Neil Kenny, William Eamon, William Newman, Lisa Jardine…). I know that Yates is now out of favour (although “batty” is putting it a little strongly) – but in any event, she is used here in much the same way as other older historians of science such as Rosemary Syfret, Alistair Crombie, Lynn Thorndike and Margery Purver. Yes, it’s a very odd remark indeed, and I suppose a reminder that engaging an expert reviewer sometimes has its pitfalls: one can get pulled way off course by the unseen currents of academic battles, antipathies and allegiances.
Sunday, May 13, 2012
Science and wonder
This piece appeared in the 30 April issue of the New Statesman, a “science special”, for which I was asked to write about whether “science has lost its sense of wonder.”
___________________________________________________
The day I realised the potential of the internet was infused with wonder. Not wonder at the network itself, however handy it would become for shovelling bits, but at what it revealed, televised live by NASA, as I crowded round a screen with the other staff of Nature magazine on 16 July 1994. That was the day the first piece of Comet Shoemaker-Levy 9 smashed into Jupiter, turning our cynicism about previous astronomical fireworks promised but not delivered into the carping of ungrateful children. There on our cosmic doorstep bloomed a fiery apocalypse that left an Earth-sized hole in the giant planet’s baroquely swirling atmosphere. This was old-style wonder: awe tinged with horror at forces beyond our comprehension.
Aristotle and Plato didn’t agree on much, but they were united in identifying wonder as the origin of their profession: as Aristotle put it, “It was owing to their wonder that men began to philosophize”. This idea appeals to scientists, who frequently enlist wonder as a goad to inquiry. “I think everyone in every culture has felt a sense of awe and wonder looking at the sky”, wrote Carl Sagan, locating in this response the stirrings of a Copernican desire to know who and where we are.
But that’s not the only direction in which wonder may take us. To Thomas Carlyle, wonder sits at the beginning not of science but of religion. That is the central tension in forging an alliance of wonder and science: will it make us curious, or induce us to prostrate ourselves in pitiful ignorance?
We had better get to grips with this question before too hastily appropriating wonder to sell science. That’s surely what is going on when pictures from the Hubble Space Telescope are (unconsciously?) cropped and coloured to recall the sublime iconography of Romantic landscape painting, or the Human Genome Project is wrapped in Biblical rhetoric, or the Large Hadron Collider’s proton-smashing is depicted as “replaying the moment of creation”. The point is not that such things are deceitful or improper, but that if we want to take that path, we should first consider the complex evolution of science’s relation to wonder.
For Sagan, wonder is evidently not just an invitation to be curious but a delight: it is wonderful. Maybe the ancients felt this too; the Latin equivalents admiratio and mirabilia seem to have their roots in an Indo-European word for ‘smile’. But this was not the wonder enthusiastically commended by medieval theologians, which was more apt to induce fear, reverence and bewilderment. Wonder was a reminder of God’s infinite, unknowable power – and as such, it was the pious response to nature, as opposed to the sinful prying of ‘curiosity’, damned by Saint Augustine as a ‘lust of the eyes’.
In that case, wonder was a signal to cease questioning and fall to your knees. Historians Lorraine Daston and Katharine Park argue that wonder and curiosity followed mirror-image trajectories between the Middle Ages and the Enlightenment, from good to bad and vice versa, conjoining symbiotically only in the sixteenth and seventeenth centuries – not incidentally, the period in which modern science was born.
It’s no surprise, then, to find the early prophets of science uncertain how to manage this difficult emotion of wonder. Francis Bacon admitted it only as a litmus test of ignorance: wonder signified “broken knowledge”. The implicit aim of Bacon’s scientific programme was to make wonders cease by explaining them, a quest that began with medieval rationalists such as Roger Bacon and Albertus Magnus. That which was understood was no longer wonderful.
Undisciplined wonder was thought to induce stupefaction. Descartes distinguished useful wonder (admiration) from useless (astonishment, literally a ‘turning to stone’ that “makes the whole body remain immobile like a statue”). Useful wonder focused the attention: it was, said Descartes, “a sudden surprise of the soul which makes it tend to consider alternatively those objects which seem to it rare and extraordinary”. If the ‘new philosophers’ of the seventeenth century admitted wonder at all, it was a source of admiration, not debilitating fear. The northern lights might seem “frightful” to the “vulgar Beholder”, said Edmond Halley, but to him they would be “a most agreeable and wish’d for Spectacle”.
Others shifted wonder to the far side of curiosity: something that emerges only after the dour slog of study. In this way, wonder could be dutifully channelled away from the phenomenon itself and turned into esteem for God’s works. “Wonder was the reward rather than the bait for curiosity”, say Daston and Park, “the fruit rather than the seed.” It is only after he has carefully studied the behaviour of ants to understand how elegantly they coordinate their affairs that Dutch naturalist Jan Swammerdam admits to his wonder at how God could have arranged things thus. “Nature is never so wondrous, nor so wondered at, as when she is known”, wrote Bernard Fontenelle, secretary of the French Academy of Sciences. This is a position that most modern scientists, even those of a robustly secular persuasion, are comfortable with: “The science only adds to the excitement and mystery and awe of a flower”, said physicist Richard Feynman.
This kind of wonder is not an essential part of scientific practice, but may constitute a form of post hoc genuflection. It is informed wonder that science generally aims to cultivate today. The medieval alternative – ignorant, gaping wonder – was and is denounced and ridiculed. That wonder, says social historian Mary Baine Campbell, “is a form of perception now mostly associated with innocence: with children, the uneducated (that is, the poor), women, lunatics, and non-Western cultures… and of course artists.” Since the Enlightenment, Daston and Park concur, uncritical wonder has become “a disreputable passion in workaday science, redolent of the popular, the amateurish, and the childish.” Understanding nature was a serious business, requiring discipline rather than pleasure, diligence rather than delight.
Descartes’ informed, sober wonder re-emerged as an aspect of Romanticism, whether in the Naturphilosophie of Schelling and Goethe or the passion of English Romantics like Coleridge, Shelley and Byron, who had a considerable interest in science. Now it was not God but nature herself who was the object of awe and veneration. While natural theologians such as William Paley discerned God’s handiwork in the minutiae of nature, the grander marvels of the Sublime – wonder’s “elite relative” as Campbell aptly calls it – exposed the puny status of humanity before the ungovernable forces of nature. The divine creator of the Sublime was no intricate craftsman who wrought exquisite marvels, but worked only on a monolithic scale, with massive and inviolable laws. He (if he existed at all) was an architect not of profusion but of a single, awesome order.
Equally vexed during science’s ascension was the question of what was an appropriate object for wonder. The cognates of the Latin mirabilia – marvels and miracles – reveal that wonder was generally reserved for the strange and rare: the glowing stone, the monstrous birth, the fabulous beast. No mere flower would elicit awe like Feynman’s – it would have to be misshapen, or to spring from a stone, or have extraordinary curative powers. This was a problem for early science, because it threatened to misdirect curiosity towards precisely those objects that are the least representative of the natural order. When the early Royal Society sought to amass specimens for its natural history collection, it was frustrated by the inclination of its well-meaning donors throughout the world to donate ‘wonderful’ oddities, thinking that only exotica were worthy gifts. If they sent an egg, it would be a ‘monstrous’ double-shelled one; if a chicken, it had four legs. What they were supposed to do with the four-foot cucumber of one benefactor was anyone’s guess.
This collision of the wondrous with the systematic was evident in botanist Nehemiah Grew’s noble efforts to catalogue the Society’s chaotic collection in the 1680s. What this “inventory of nature” needed, Grew grumbled, were “not only Things strange and rare, but the most known and common amongst us.” By fitting strange objects into his complex classification scheme, Grew was attempting to neutralize their wonder. Underlying that objective was a growing conviction that nature’s order (or was it God’s?) brooked no exceptions. In earlier times, wondrous things took their significance precisely from their departure from the quotidian: monstrous births were portents, as the term itself implied (monstrare: to show). Aristotle had no problem with such departures from regular laws – but precisely because they were exceptions, they were of little interest. Now, in contrast, these wonders became accommodated into the grand system of the world. Far from being aberrations that presaged calamity and change, comets obeyed the same gravitational laws as the planets.
There is perhaps a little irony in the fact that, while attempting to distance themselves from a love of wonders found in the tradition of collectors of curiosities, these early scientists discovered wonders lurking in the most prosaic and unlikely of places, once they were examined closely enough. Robert Hooke’s Micrographia (1665), a gorgeously illustrated book of microscopic observations, was a compendium of marvels equal to any fanciful medieval account of journeys in distant lands. Under the microscope, mould and moss became fantastic gardens, lice and fleas were intricate armoured brutes, and the multifaceted eyes of a fly reflected back ten thousand images of Hooke’s laboratory. Micrographia shows us a determined rationalist struggling to discipline his wonder into a dispassionate record.
Stern and disciplined reason triumphed: it came to seem that science would bleach the world of wonder. Thence the disillusion in Keats’ Lamia:
Do not all charms fly
At the mere touch of cold philosophy?
There was an awful rainbow once in heaven:
We know her woof, her texture; she is given
In the dull catalogue of common things.
But science today appreciates that the link between curiosity and wonder should not and probably cannot be severed, for true curiosity – as opposed, say, to obsessive pedantry, acquisitiveness or problem-solving – grinds to a halt when deprived of wonder’s fuel. You might say that we first emancipated curiosity at the expense of wonder, and then re-admitted wonder to take care of public relations. Yet in the fear of the subjective that characterizes scientific discourse, wonder is one of the casualties; excitement and fervour remain banished from the official records. This does not mean they aren’t present. Indeed, the passions involved in wonder and curiosity, as an aspect of the motivations for research, are a part of the broader moral economy of science that, as Lorraine Daston says, “cannot dictate the products of science in their details [but is] the framework that gives them coherence and value.”
Pretending that science is performed by people who have undergone a Baconian purification of the emotions only deepens the danger that it will seem alien and odd to outsiders, something carried out by people who do not think as they do. Daston believes that we have inherited a “view of intelligence as neatly detached from emotional, moral, and aesthetic impulses, and a related and coeval view of scientific objectivity that brand[s] such impulses as contaminants.” It’s easy to understand the historical motivations of this attitude: the need to distinguish science from credulous ‘enthusiasm’, to develop an authoritative voice, to strip away the pretensions of the mystical Renaissance magus acquiring knowledge by personal revelation. But we no longer need this dissimulation; worse, it becomes a defensive reflex that exposes scientists to the caricature of the emotionally constipated boffin, hiding within thickets of jargon.
They were never really like this, despite their best efforts. Reading Robert Boyle’s account of witnessing phosphorus for the first time, daubed on the finger of a German chemical showman to trace out “Domini” on his sister’s expensive carpet in Pall Mall, you can’t miss the wonder tinged with fear in his account of this “mixture of strangeness, beauty and frightfulness”.
That response to nature’s spectacle remains. It’s easy to mock Brian Cox’s spellbound admiration as he looks heavenward, but the spark in his eyes isn’t just there for the cameras. You only have to point binoculars at the crescent moon on a clear night, seeing as Galileo did the sunlit peaks and shadowed valleys where lunar day becomes night, to see why there is no need to manufacture a sense of wonder about such sights.
Through a frank acknowledgement of wonder – admitting it not just for marketing, but into the very inception of scientific inquiry – it might be possible to weave science back into ordinary experience, to unite the objective with the subjective. Sagan suggested that “By far the best way I know to engage the religious sensibility, the sense of awe, is to look up on a clear night.” Richard Holmes locates in wonder a bridge between the sentiments of the Romantic poets and that of their scientific contemporaries.
Science deserves this poetry, and needs it too. When his telescope showed the Milky Way to be not a cloudy vapour but “unfathomable… swarms of small stars placed exceedingly close together”, Galileo already did better than today’s astronomers in conveying his astonishment and wonder without compromising the clarity of his description. But look at what John Milton, who may have seen the same sight through Galileo’s own telescope when he visited the old man under house arrest in Arcetri, made of this vision in Paradise Lost:
A broad and ample road, whose dust is gold,
And pavement stars, as stars to thee appear
Seen in the galaxy, that milky way
Which nightly as a circling zone thou seest
Powdered with stars.
Not even Carl Sagan could compete with that.
___________________________________________________
The day I realised the potential of the internet was infused with wonder. Not wonder at the network itself, however handy it would become for shovelling bits, but at what it revealed, televised live by NASA, as I crowded round a screen with the other staff of Nature magazine on 16 July 1994. That was the day the first piece of Comet Shoemaker-Levy 9 smashed into Jupiter, turning our cynicism about previous astronomical fireworks promised but not delivered into the carping of ungrateful children. There on our cosmic doorstep bloomed a fiery apocalypse that left an Earth-sized hole in the giant planet’s baroquely swirling atmosphere. This was old-style wonder: awe tinged with horror at forces beyond our comprehension.
Aristotle and Plato didn’t agree on much, but they were united in identifying wonder as the origin of their profession: as Aristotle put it, “It was owing to their wonder that men began to philosophize”. This idea appeals to scientists, who frequently enlist wonder as a goad to inquiry. “I think everyone in every culture has felt a sense of awe and wonder looking at the sky”, wrote Carl Sagan, locating in this response the stirrings of a Copernican desire to know who and where we are.
But that’s not the only direction in which wonder may take us. To Thomas Carlyle, wonder sits at the beginning not of science but of religion. That’s is the central tension in forging an alliance of wonder and science: will it make us curious, or induce us to prostrate ourselves in pitiful ignorance?
We had better get to grips with this question before too hastily appropriating wonder to sell science. That’s surely what is going on when pictures from the Hubble Space Telescope are (unconsciously?) cropped and coloured to recall the sublime iconography of Romantic landscape painting, or the Human Genome Project is wrapped in Biblical rhetoric, or the Large Hadron Collider’s proton-smashing is depicted as “replaying the moment of creation”. The point is not that such things are deceitful or improper, but that if we want to take that path, we should first consider the complex evolution of science’s relation to wonder.
For Sagan, wonder is evidently not just an invitation to be curious but a delight: it is wonderful. Maybe the ancients felt this too; the Latin equivalents admiratio and mirabilia seem to have their roots in an Indo-European word for ‘smile’. But this was not the wonder enthusiastically commended by medieval theologians, which was more apt to induce fear, reverence and bewilderment. Wonder was a reminder of God’s infinite, unknowable power – and as such, it was the pious response to nature, as opposed to the sinful prying of ‘curiosity’, damned by Saint Augustine as a ‘lust of the eyes’.
In that case, wonder was a signal to cease questioning and fall to your knees. Historians Lorraine Daston and Katharine Park argue that wonder and curiosity followed mirror-image trajectories between the Middle Ages and the Enlightenment, from good to bad and vice versa, conjoining symbiotically only in the sixteenth and seventeenth centuries – not incidentally, the period in which modern science was born.
It’s no surprise, then, to find the early prophets of science uncertain how to manage this difficult emotion of wonder. Francis Bacon admitted it only as a litmus test of ignorance: wonder signified “broken knowledge”. The implicit aim of Bacon’s scientific programme was to make wonders cease by explaining them, a quest that began with medieval rationalists such as Roger Bacon and Albertus Magnus. That which was understood was no longer wonderful.
Undisciplined wonder was thought to induce stupefaction. Descartes distinguished useful wonder (admiration) from useless (astonishment, literally a ‘turning to stone’ that “makes the whole body remain immobile like a statue”). Useful wonder focused the attention: it was, said Descartes, “a sudden surprise of the soul which makes it tend to consider alternatively those objects which seem to it rare and extraordinary”. If the ‘new philosophers’ of the seventeenth century admitted wonder at all, it was a source of admiration, not debilitating fear. The northern lights might seem “frightful” to the “vulgar Beholder”, said Edmond Halley, but to him they would be “a most agreeable and wish’d for Spectacle”.
Others shifted wonder to the far side of curiosity: something that emerges only after the dour slog of study. In this way, wonder could be dutifully channelled away from the phenomenon itself and turned into esteem for God’s works. “Wonder was the reward rather than the bait for curiosity”, say Daston and Park, “the fruit rather than the seed.” It is only after he has carefully studied the behaviour of ants to understand how elegantly they coordinate their affairs that Dutch naturalist Jan Swammerdam admits to his wonder at how God could have arranged things thus. “Nature is never so wondrous, nor so wondered at, as when she is known”, wrote Bernard Fontenelle, secretary of the French Academy of Sciences. This is a position that most modern scientists, even those of a robustly secular persuasion, are comfortable with: “The science only adds to the excitement and mystery and awe of a flower”, said physicist Richard Feynman.
This kind of wonder is not an essential part of scientific practice, but may constitute a form of post hoc genuflection. It is informed wonder that science generally aims to cultivate today. The medieval alternative, regarded as ignorant, gaping wonder, was and is denounced and ridiculed. That wonder, says social historian Mary Baine Campbell, “is a form of perception now mostly associated with innocence: with children, the uneducated (that is, the poor), women, lunatics, and non-Western cultures… and of course artists.” Since the Enlightenment, Daston and Park concur, uncritical wonder has become “a disreputable passion in workaday science, redolent of the popular, the amateurish, and the childish.” Understanding nature was a serious business, requiring discipline rather than pleasure, diligence rather than delight.
Descartes’ informed, sober wonder re-emerged as an aspect of Romanticism, whether in the Naturphilosophie of Schilling and Goethe or the passion of English Romantics like Coleridge, Shelley and Byron, who had a considerable interest in science. Now it was not God but nature herself who was the object of awe and veneration. While natural theologians such as William Paley discerned God’s handiwork in the minutiae of nature, the grander marvels of the Sublime – wonder’s “elite relative” as Campbell aptly calls it – exposed the puny status of humanity before the ungovernable forces of nature. The divine creator of the Sublime was no intricate craftsman who wrought exquisite marvels, but worked only on a monolithic scale, with massive and inviolable laws. He (if he existed at all) was an architect not of profusion but of a single, awesome order.
Equally vexed during science’s ascension was the question of what was an appropriate object for wonder. The cognates of the Latin mirabilia – marvels and miracles – reveal that wonder was generally reserved for the strange and rare: the glowing stone, the monstrous birth, the fabulous beast. No mere flower would elicit awe like Feynman’s – it would have to be misshapen, or to spring from a stone, or have extraordinary curative powers. This was a problem for early science, because it threatened to misdirect curiosity towards precisely those objects that are the least representative of the natural order. When the early Royal Society sought to amass specimens for its natural history collection, it was frustrated by the inclination of its well-meaning donors throughout the world to donate ‘wonderful’ oddities, thinking that only exotica were worthy gifts. If they sent an egg, it would be a ‘monstrous’ double-shelled one; if a chicken, it had four legs. What they were supposed to do with the four-foot cucumber of one benefactor was anyone’s guess.
This collision of the wondrous with the systematic was evident in botanist Nehemiah Grew’s noble efforts to catalogue the Society’s chaotic collection in the 1680s. What this “inventory of nature” needed, Grew grumbled, were “not only Things strange and rare, but the most known and common amongst us.” By fitting strange objects into his complex classification scheme, Grew was attempting to neutralize their wonder. Underlying that objective was a growing conviction that nature’s order (or was it God’s?) brooked no exceptions. In earlier times, wondrous things took their significance precisely from their departure from the quotidian: monstrous births were portents, as the term itself implied (monstrare: to show). Aristotle had no problem with such departures from regular laws – but precisely because they were exceptions, they were of little interest. Now, in contrast, these wonders were accommodated within the grand system of the world. Far from being aberrations that presaged calamity and change, comets obeyed the same gravitational laws as the planets.
There is perhaps a little irony in the fact that, while attempting to distance themselves from a love of wonders found in the tradition of collectors of curiosities, these early scientists discovered wonders lurking in the most prosaic and unlikely of places, once they were examined closely enough. Robert Hooke’s Micrographia (1665), a gorgeously illustrated book of microscopic observations, was a compendium of marvels equal to any fanciful medieval account of journeys in distant lands. Under the microscope, mould and moss became fantastic gardens, lice and fleas were intricate armoured brutes, and the multifaceted eyes of a fly reflected back ten thousand images of Hooke’s laboratory. Micrographia shows us a determined rationalist struggling to discipline his wonder into a dispassionate record.
Stern and disciplined reason triumphed: it came to seem that science would bleach the world of wonder. Thence the disillusion in Keats’ Lamia:
Do not all charms fly
At the mere touch of cold philosophy?
There was an awful rainbow once in heaven:
We know her woof, her texture; she is given
In the dull catalogue of common things.
But science today appreciates that the link between curiosity and wonder should not and probably cannot be severed, for true curiosity – as opposed, say, to obsessive pedantry, acquisitiveness or problem-solving – grinds to a halt when deprived of wonder’s fuel. You might say that we first emancipated curiosity at the expense of wonder, and then re-admitted wonder to take care of public relations. Yet in the fear of the subjective that characterizes scientific discourse, wonder is one of the casualties; excitement and fervour remain banished from the official records. This does not mean they aren’t present. Indeed, the passions involved in wonder and curiosity, as an aspect of the motivations for research, are a part of the broader moral economy of science that, as Lorraine Daston says, “cannot dictate the products of science in their details [but is] the framework that gives them coherence and value.”
Pretending that science is performed by people who have undergone a Baconian purification of the emotions only deepens the danger that it will seem alien and odd to outsiders, something carried out by people who do not think as they do. Daston believes that we have inherited a “view of intelligence as neatly detached from emotional, moral, and aesthetic impulses, and a related and coeval view of scientific objectivity that brand[s] such impulses as contaminants.” It’s easy to understand the historical motivations of this attitude: the need to distinguish science from credulous ‘enthusiasm’, to develop an authoritative voice, to strip away the pretensions of the mystical Renaissance magus acquiring knowledge by personal revelation. But we no longer need this dissimulation; worse, it becomes a defensive reflex that exposes scientists to the caricature of the emotionally constipated boffin, hiding within thickets of jargon.
They were never really like this, despite their best efforts. Reading Robert Boyle’s account of witnessing phosphorus for the first time, daubed on the finger of a German chemical showman to trace out “Domini” on his sister’s expensive carpet in Pall Mall, you can’t miss the wonder tinged with fear in his account of this “mixture of strangeness, beauty and frightfulness”.
That response to nature’s spectacle remains. It’s easy to mock Brian Cox’s spellbound admiration as he looks heavenward, but the spark in his eyes isn’t just there for the cameras. You only have to point binoculars at the crescent moon on a clear night, picking out as Galileo did the sunlit peaks and shadowed valleys where lunar day becomes night, to see why there is no need to manufacture a sense of wonder about such sights.
Through a frank acknowledgement of wonder – admitting it not just for marketing, but into the very inception of scientific inquiry – it might be possible to weave science back into ordinary experience, to unite the objective with the subjective. Carl Sagan suggested that “By far the best way I know to engage the religious sensibility, the sense of awe, is to look up on a clear night.” Richard Holmes locates in wonder a bridge between the sentiments of the Romantic poets and those of their scientific contemporaries.
Science deserves this poetry, and needs it too. When his telescope showed the Milky Way to be not a cloudy vapour but “unfathomable… swarms of small stars placed exceedingly close together”, Galileo already did better than today’s astronomers in conveying his astonishment and wonder without compromising the clarity of his description. But look at what John Milton, who may have seen the same sight through Galileo’s own telescope when he visited the old man under house arrest in Arcetri, made of this vision in Paradise Lost:
A broad and ample road, whose dust is gold,
And pavement stars, as stars to thee appear
Seen in the galaxy, that milky way
Which nightly as a circling zone thou seest
Powdered with stars.
Not even Carl Sagan could compete with that.
Who knew?
I don’t really understand science reporting in the mainstream media. They tend to set a very high bar of originality and novelty, which is fair enough, but will then go and publish stuff that seems ancient news. I guess that occasionally there’s an argument that what seems extremely old hat to those who follow science will be new to a more general readership, which may explain Jeff Forshaw’s (perfectly good) piece on quantum computing in last week’s Observer. (There was an excuse for this, a recent Nature paper on a quantum simulator consisting of 300 beryllium atoms in an electromagnetic trap – but that nice work was deeply exotic, and so was skated over very briefly.) But the article in the New York Times on the uncertainties of cloud feedbacks on climate, and Richard Lindzen’s sceptical line on it, could have been written circa the turn of the millennium. Far be it from me to complain about a piece that does a good job of setting the record straight on Lindzen’s campaign of confusion, but it seems mighty odd to be talking about it now, and I couldn’t even see an attempt at a topical peg. I’m not complaining, I’m just very puzzled about how these decisions are made.
Thursday, May 10, 2012
The start of curiosity
Here’s essentially a brief overview of my new book Curiosity, published this month. The piece appears in the latest issue of New Humanist.
____________________________________________________________
The Abel Prize, the “mathematics Nobel” awarded by the Norwegian Academy of Sciences, always goes to some pretty head-scratching stuff. But the arcane number theory of this year’s winner, Endre Szemerédi, has turned out to have important applications in computer science: a validation, according to the Academy’s president Nils Stenseth, of purely “curiosity-driven” research.
It’s a common refrain in science: questions pursued purely from a desire to know about the world have unforeseen practical applications. This argument has been advanced to justify the $6 bn Large Hadron Collider at the European particle-physics centre CERN, which, according to CERN’s former Director General Robert Aymar, is “continuing a tradition of human curiosity that’s as old as mankind itself.” At a time when the UK physical sciences research council is starting to demand absurd “impact assessments” for grant applications, this defence of science motivated by nothing more than inquisitiveness is essential.
But Aymar’s image of a long-standing “tradition of curiosity”, although widely shared by scientists, is too simplistic. There’s evidently an evolutionary benefit in wanting to explore our environment – we’re not the only animals to do that. But curiosity is a much more subtle, many-faceted notion, and our relationship to it has fluctuated over the ages. We are unlikely to do justice to what curiosity in science could and should mean today unless we understand this history.
For one thing, the word itself has had many meanings – too many, in fact, to identify any core concept at all. A “curious” person could indeed be an inquisitive one, but could equally be one who simply took care (Latin cura) in what they did. Not just people but objects too might be described as “curious”, and this might mean that they were rare, exotic, elegant, collectable, valuable, small, hidden, useless, expensive – but conversely, in certain contexts, common, useful or cheap. From the late sixteenth century, European nobles and intellectuals indulged a cult of curiosities, amassing vast collections of weird and wonderful objects which they displayed in room-sized ‘cabinets’. A typical cabinet of curiosities, like that of Charles I’s gardener John Tradescant in Lambeth, might contain all manner of rare beasts, shells, furs, minerals, ethnographic objects and exquisite works of craftsmanship. This spirit of collecting, usually biased towards the strange and wonderful rather than the representative, infused early science – the Royal Society had its own collection – and it gave rise to the first public museums. But it also made some early scientists focus on peculiar rather than ordinary phenomena, which threatened to turn them into bauble collectors rather than investigators of nature.
This enthusiasm for curiosities was something new, and arose outside of the mainstream academic tradition. Until the late Renaissance, curiosity in the sense that is normally implied today – investigation driven purely by the wish to know – was condemned. In ancient Greece it was seen as an unwelcome distraction rather than an aid to knowledge. For Aristotle, curiosity (periergia) had little role to play in philosophy: it was a kind of aimless, witless tendency to pry into things that didn’t concern us. Plutarch considered curiosity the vice of those given to snooping into the affairs of others: the kind of busybody known in Greek as a polypragmon.
In early Christianity it was worse than that. Now curiosity was not merely frowned upon but deemed sinful. “We want no curious disputation after possessing Christ Jesus”, wrote the second-century Christian apologist Tertullian, “no inquisition after enjoying the gospel.” The Bible told us all we needed – and should expect – to know.
Scripture made it clear that there were some things we were not supposed to know. God was said to have created Adam last so that he would not see how the rest of the job was done. Desire for forbidden knowledge led to the Fall. The transgressive aspect of curiosity is an insistent theme in Christian theology, which time and again demanded that one respect the limits of inquiry and be wary of too much learning. ‘The secret things belong to the Lord our God’, proclaims Deuteronomy, while the apocryphal Ecclesiasticus warns that we should “be not curious in unnecessary matters, for more things are shewed unto thee than men understand.”
In the hands of Augustine, curiosity became a “disease”, one of the vices or lusts at the root of all sin. “It is in divine language called the lust of the eyes”, he wrote. “From the same motive, men proceed to investigate the workings of nature, which is beyond our ken – things which it does no good to know and which men only want to know for the sake of knowing.” He claimed that curiosity is apt to pervert, to foster an interest in “mangled corpses, magical effects and marvellous spectacles.”
There was, then, a lot of work to be done before the early modern scientists of the seventeenth century – men like Galileo, Johannes Kepler, Robert Boyle, Robert Hooke and Isaac Newton – could give free rein to their curiosity. Needless to say, despite popular accounts of the so-called Scientific Revolution which imply that these men began to ask questions merely because of their great genius, there were many factors that emancipated curiosity. Not least was the influence of the tradition of natural magic, which insisted that nature was controlled by occult forces (literally invisible, such as magnetism and gravity) that could furnish a rational explanation of even the most marvellous things. This tradition had a strong experimental bias, denied the cosy tautologies of academic Aristotelianism, and was determined to uncover the “secrets” of nature.
The discovery of the New World, and the age of exploration in general, also opened minds with its demonstration that there was far more in the world than was described in the books of revered ancient philosophers. Accounts of investigations with telescopes and microscopes by the likes of Galileo and Hooke make reference to the “new worlds” that these devices reveal at both cosmic and minute scales, often presenting these studies as voyages of discovery – and conquest – comparable to that of Columbus.
But this liberation of curiosity was more complicated than is sometimes implied. For one thing, it forced the issue of how to assess evidence and reports – whose word could be trusted? Scientists like Boyle began to develop what historian Steven Shapin has called a “literary technology” designed to convey authority with rhetorical tricks, such as the dispassionate, disembodied tone that now characterizes, some might say blights, the scientific literature. Curiosity became apt to be laughed at rather than condemned: during the Restoration and the early Enlightenment, writers such as Thomas Shadwell, Samuel Butler and Jonathan Swift wrote satires mocking the Royal Society’s apparent fascination with trivia, such as the details of a fly’s eye.
And the problem with curiosity is that it can be voracious: the questions never cease. Everything Hooke put into his microscope looked new and strange. Boyle lamented that curiosity provoked disquiet and anxiety because it goaded people on without any prospect of comprehending all of nature in one person’s lifetime. Like others, he drew up “to do” lists that are more or less random and incontinent, showing how hard it was to discipline curiosity into a coherent research programme.
Today we continue this slightly uneasy dance with curiosity. Not just curiosity but also its mercurial cousin wonder are enlisted in support of huge projects like the LHC and the Hubble Space Telescope. But, however well motivated such projects are, one has to ask how much space is left in these vast, costly international collaborations for the sort of spontaneous curiosity that allowed Hooke and Boyle to follow their noses: can we really have “curiosity by committee”? That’s why we shouldn’t let Big Science blind us to the virtues of Small Science, of the benchtop experiment, often with cheap, improvised equipment, that leaves space for trying out hunches and wild ideas, revelling in little surprises, and indulging in science as a craft. Such experiments may turn out to be fantastically useful, or spectacularly useless. They are each little acts of homage to curiosity, and in consequence, to our humanity.
Tuesday, May 08, 2012
Comment is free, for better or worse
I’ve been meaning for ages to say something about the brief experience of writing a column for the Guardian. I’m prompted to do it belatedly now in the light of the current discussion (here and here, for example) about online comments/poisonous tweets/trolling. Not that I felt I was at the sharp end of all that, and certainly not in comparison to some poor souls (although do I really mean Louise Mensch?). But what hasn’t received a great deal of comment in the latest debates is the general tone of online comments which forms the backdrop against which the more obvious acts of nastiness and lunacy get played out.
The column was prematurely terminated, as I always knew it might be, for cost-cutting reasons. And I had somewhat mixed feelings about that. It was undoubtedly disappointing, because I was getting into my stride and had topics that I’d hoped to be able to cover. But it was also something of a relief. The column went onto the Comment is Free site, which meant that it got a lot of web feedback. And this is always a somewhat odd beast, but I hadn’t experienced it to quite this degree before. I had been encouraged to engage with the responses, to the extent of making comments of my own. But I’d begun to find that rather draining.
This was not simply a matter of time. I was finding the tone of the discussion wearying, not least because I found myself responding in the same spirit. And that disturbed me.
I’d discovered the typical tenor of web feedback before, when a piece I wrote on economics for the FT was picked up and debated – or rather, dissected and derogated – on some economics blogs. On that occasion I’d been naively surprised at how aggressive some of the posts were. As it happened, I responded to one of these threads, which opened up a debate with the (extremely well informed) blogger Dave Altig that ended up being productive and constructive: I felt that we’d both listened and taken on board some aspects of the other point of view. This left me thinking that it can be valuable to engage with critics online – I’ve subsequently discovered that that is sometimes true and sometimes not.
All the same, that episode gave me a glimpse of the snarky, embittered tone that characterises quite a lot of online feedback. By no means all of the Guardian comments were of that nature. Some were very thoughtful and informed, particularly in my piece on science funding. But after having written several of these columns, some common themes among the critical comments began to emerge.
The most sobering was this. I have tended, again naively, to assume that when one writes something in public, people read it and then decide whether they agree or not. Some might decide you’ve written a pile of tosh, and might tell you so. That’s fine. But now I realise that this isn’t how it works. It seems that many readers – at least the ones who post comments, which is of course an extremely particular and self-selecting group – don’t read what you’ve said in the first place. I don’t mean that in the sense of that annoying rhetorical accusation that “you obviously didn’t even read what I said”. I mean that they read the words through such a cloud of preconceptions that the real meaning simply cannot register. Many readers, it seems, read just what they want/expect to read, which is often a ready-made version of an idea that they disagree with. The disagreement then comes not so much from a difference of opinion but from a lack of comprehension. And let me say that this comprehension doesn’t seem to have any correlation with education or professional status – I’m shocked at how poorly some scientists seem able to understand basic English. It almost makes me wonder how the scientific literature functions.
There are some other recurring strategies and tropes. Chief among them is a sense of immense resentment – who the hell are you to be writing this stuff? You call yourself a scientist/journalist/expert, but you don’t even know the most basic facts! It’s again very sobering to discover that there has presumably always been this burning rancour against people who write in the media that only now has been given a means of expression. And so the feedback becomes a litany of one-upmanship, like the chap who couldn’t possibly imagine that anyone writing a science column could have managed the awesome feat of actually reading Jonathan Swift. It was this vying for the intellectual high ground – or rather, a crude “I know more than you” – that I could see myself succumbing to, and I didn’t like it.
Then there are the comments that are clearly meant to be gems of caustic wit but which are merely incomprehensible onanism. “This article is pure rot. The knitting of shreddies, however small, by Grandmothers can only be seen as a force for good. Cold milk and plenty of sugar.” Yes, well thank you. This isn’t a big deal, but it is strangely irritating.
And there’s the question of anonymity. I can’t help feeling (and it’s been said countless times, I know) that the tone of the feedback is all of a piece with the fact that it comes from folk who conceal their names – who choose macho monikers such as “CrapRadar” rather than even just a given name. (I’m heartened to see that most of the [by definition] lovely folks who comment on my blog are comfortable with that much.) Why this utter concealment? So full marks to those appearing (I assume) as themselves and not as some cartoon character. I don’t mean to imply that anyone who doesn’t use their own name is some kind of craven bully, just that this aspect of web culture is not without its problems.
Oh, I know it could be so much worse. What we’ve heard recently about online misogyny includes some very grim stuff. In comparison, I’m not sure that anything on CiF qualifies as trolling exactly, and some of the comments are interesting and funny. But I do find it a little dispiriting to discover that so much of what passes as debate on these forums is really a jaded effort to be as cynical, dismissive and superior as one can be. And that presumably this attitude had always been out there, longing for the platform that it now has.
Lip-reading the emotions
And another BBC Future piece… I was interviewed on related issues recently (not terribly coherently, I fear) for the BBC’s See Hear programme for the deaf community. This in turn was a spinoff from my involvement in a really splendid documentary by Lindsey Dryden on the musical experiences of people with partial hearing, called Lost and Sound, which will hopefully get a TV airing some time soon.
____________________________________________________________
I have no direct experience with cochlear implants (CIs) – electronic devices that partly compensate for severe hearing impairment – but listening to a simulation of the sound produced is salutary. It is rather like hearing things underwater: fuzzy and with an odd timbre, yet still conveying words and some other identifiable sounds. It’s a testament to the adaptability of the human brain that auditory information can be recognizable even when the characteristics of the sound are so profoundly altered. Some people with CIs can appreciate and even perform music.
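For the technically minded: such simulations are usually made with a ‘noise-band vocoder’, which keeps only the slowly varying loudness envelope in each of a handful of frequency bands – roughly what a CI’s electrodes convey – and throws away the fine structure that carries pitch. Below is a minimal sketch in Python; the channel count, band edges and filter choices are illustrative assumptions of mine, not those of any particular implant processor.

import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def ci_simulate(x, fs, n_channels=8, lo=100.0, hi=7000.0):
    # Split the signal into log-spaced bands, extract each band's envelope,
    # and use the envelope to modulate band-limited noise.
    edges = np.geomspace(lo, hi, n_channels + 1)
    rng = np.random.default_rng(0)
    out = np.zeros(len(x))
    for f1, f2 in zip(edges[:-1], edges[1:]):
        sos = butter(4, [f1, f2], btype='bandpass', fs=fs, output='sos')
        band = sosfiltfilt(sos, x)
        envelope = np.abs(hilbert(band))   # discard pitch-carrying fine structure
        noise = sosfiltfilt(sos, rng.standard_normal(len(x)))
        out += envelope * noise
    return out / (np.abs(out).max() + 1e-12)   # normalize to avoid clipping

Run on any speech recording, the output stays largely intelligible but sounds pitch-flattened – exactly the deficit the studies below explore.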
The use of these devices can provide insights into how sound is processed in people with normal hearing – insights that can help us to identify what can potentially go wrong and how it might be fixed. That’s evident in a trio of papers buried in the recondite but infallibly fascinating Journal of the Acoustical Society of America, a publication whose scope ranges from urban noise pollution to whale song and the sonic virtues of cathedrals.
These three papers examine what gets lost in translation in CIs. Much of the emotional content, as well as some semantic information, in speech is conveyed by the rising and falling of the voice – what is called prosody. In English, prosody can distinguish a question from a statement (at least before the rising inflection became fashionable). It can tell us if the speaker is happy, sad or angry. But because CIs convey neither the pitch of sounds nor their ‘spectrum’ of frequencies at all well, users may find it harder to identify such cues – they can’t easily tell a question from a statement, say, and they rely more on visual than auditory information to gauge a speaker’s emotional state.
Takayuki Nakata of Future University Hakodate in Japan and his coworkers have verified that Japanese children who are congenitally deaf but use CIs are significantly less able to identify happy, sad, and angry voices in tests in which normal hearers of the same age have virtually total success [1]. They went further than previous studies, however, in asking whether these difficulties inhibit a child’s ability to communicate emotion through prosody in their own speech. Indeed they do, regardless of age – an indication both that we acquire this capability by hearing and copying, and that CI users face the additional burden of being less likely to have their emotions perceived.
Difficulties in hearing pitch can create even more severe linguistic problems. In tonal languages such as Mandarin Chinese, changes in pitch may alter the semantic meaning of a word. CI users may struggle to distinguish such tones even after years of using the device, and hearing-impaired Mandarin-speaking children who start using them before they can speak are often scarcely intelligible to adult listeners – again, they can’t learn to produce the right sounds if they can’t hear them.
To understand how language tones might be perceived by CI users, Damien Smith and Denis Burnham of the University of Western Sydney in Australia have tested normal hearers with audio signals of spoken Mandarin altered to simulate CIs. The results were surprising [2].
Both native Mandarin speakers and English-speaking subjects do better in identifying the (four) Mandarin tones when the CI-simulated voices are accompanied by video footage of the speakers’ faces. That’s not so surprising: it’s well known that we use visual cues to perceive speech. But all subjects did better than random guessing with the visuals alone, and in this case non-Mandarin speakers did better than Mandarin speakers. In other words, native speakers learn to disregard visual information in favour of the auditory. What’s more, these findings suggest that CI users could be helped by training them to recognize the visual cues of tonal languages: if you like, to lip-read the tones.
There’s still hope for getting CIs to convey pitch information better. Xin Luo of Purdue University in West Lafayette, Indiana, in collaboration with researchers from the House Research Institute, a hearing research centre in Los Angeles, has figured out how to make CIs create a better impression of smooth pitch changes such as those in prosody [3]. CIs do already offer some pitch sensation, albeit very coarse-grained. The cochlea, the pitch-sensing organ of the ear, contains a coiled membrane which is stimulated in different regions by different sound frequencies – low at one end, high at the other, rather like a keyboard. The CI creates a crude approximation of this continuous pitch-sensing device using a few (typically 16-22) electrodes to excite different auditory-nerve endings, producing a small set of pitch steps instead of a smooth pitch slope. Luo and colleagues have figured out a way of sweeping the signal from one electrode to the next such that pitch changes seem gradual instead of jumpy.
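The underlying trick – often called ‘current steering’ – is simple enough to sketch. A ‘virtual’ channel lying between two physical electrodes is rendered by splitting the stimulation amplitude between them in proportion to proximity. The electrode count and the linear weighting below are illustrative assumptions of mine, not details of Luo’s implementation.

import numpy as np

def steer(c, n_electrodes=16, amplitude=1.0):
    # Per-electrode amplitudes for a virtual channel at position c,
    # where 0 <= c <= n_electrodes - 1: the two neighbouring electrodes
    # share the pulse according to how close c sits to each of them.
    weights = np.zeros(n_electrodes)
    i = min(int(np.floor(c)), n_electrodes - 2)
    frac = c - i
    weights[i] = amplitude * (1.0 - frac)
    weights[i + 1] = amplitude * frac
    return weights

# Sweeping c smoothly from electrode 3 to electrode 4 glides the evoked
# pitch rather than making it jump:
for c in np.linspace(3.0, 4.0, 5):
    print(round(c, 2), steer(c)[3:5])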
The cochlea can also identify pitches by, in effect, ‘timing’ successive acoustic oscillations to figure out the frequency. CIs can simulate this method of pitch discrimination too, but only for frequencies up to about 300 Hertz, the upper limit of a bass singing voice. Luo and colleagues say that a judicious combination of these two ways of conveying pitch, enabled by signal-processing circuits in the implant, creates a synergy that, with further work, should offer much improved pitch perception for users: enough, at least, to allow them to capture more of the emotion-laden prosody of speech.
References
1. T. Nakata, S. E. Trehub & Y. Kanda, Journal of the Acoustical Society of America 131, 1307 (2012).
2. D. Smith & D. Burnham, Journal of the Acoustical Society of America 131, 1480 (2012).
3. X. Luo, M. Padilla & D. M. Landsberger, Journal of the Acoustical Society of America 131, 1325 (2012).
Thursday, May 03, 2012
The entropic sieve
Here’s another of my (pre-edited) earlier pieces for the BBC Future site. Must catch up on these now – there are several more.
_______________________________________________________
Sorting out tiny particles and molecules of different sizes is necessary for various technologies, from gene sequencing to nanotechnology. But it sounds like a pretty tedious business, right?
It’s no surprise, then, that a recent paper describing a new technique for doing this garnered no headlines. But it’s well worth a closer look. For one thing, it sounds like sheer magic.
Physicist Peter Hänggi at the University of Augsburg in Germany and his colleagues show that you can take a tube containing a mixture of big and small particles, apply some force to pull them through in one direction (an electric field would do the job for charged particles, say), and then give it a shake. And hey presto – the small particles will drop out of the end towards which the force pulls them, whereas the big particles drop out of the other end (D. Reguera et al., Phys. Rev. Lett. 108, 020604 (2012)).
Not only is this trick very clever but it’s also rather profound, touching on some of the most fundamental principles of physics. The device stems from a loophole proposed in the nineteenth century for evading the second law of thermodynamics, in effect making a perpetual motion machine. Needless to say, the new particle separator isn’t that, but the explanation of why not requires an excursion into the recondite field of information theory. Deep stuff from what is basically a grain sorter.
There are already ways to separate molecules by size. You can literally sieve them using solid materials with tiny pores of uniform size, such as the zeolite minerals used to separate and selectively alter some hydrocarbons in crude oil. And a technique called gel electrophoresis is used to separate strands of DNA chopped into different lengths – a standard procedure for sequencing genes – according to their size-dependent speed of being dragged along by an electric field. These techniques work well enough for most purposes. But that devised by Hänggi and colleagues is potentially more efficient.
Like all good magic, you have to look inside to see how it’s done. The tube is divided into a series of funnel-shaped chambers connected by narrow necks – looked at in cross-section, it resembles two saw blades with the teeth not quite touching. This sawtooth profile is all it takes to make the large and small particles move in opposite directions in response to a combination of the force and the shaking.
The tube is what physicists call a Brownian ratchet. The name derives from Brownian motion, the random movement of tiny particles such as pollen grains in water, or indeed water molecules themselves, due to the jiggling of heat. (For pollen grains, it’s actually the random collisions of jiggling water molecules that cause the movement.) Normal Brownian motion doesn’t favour any direction over any other – the particles just wander at random. But a bias can be introduced by putting the particles in asymmetric surroundings, such as lodging them in a series of grooves with a sawtooth profile, the slopes steeper in one direction than the other.
When Brownian ratchets were first proposed, they caused consternation because they seemed to violate the laws of thermodynamics and allow perpetual motion. In 1912 the Polish physicist Marian Smoluchowski suggested that a tiny ratchet-and-pawl might be induced to turn in just one direction by random thermal shaking. Fifty years later, Richard Feynman showed why it wouldn’t work if the temperature of the apparatus is the same everywhere.
But Brownian ratchets aren’t easily dismissed. They seem to represent an example of a Maxwell demon, which also appears to violate thermodynamics. In the nineteenth century, James Clerk Maxwell suggested how heat might travel from cold to hot, in contradiction of the second law of thermodynamics, if a little ‘demon’ opened a trapdoor between two compartments each time a ‘hot’ molecule happened to reach it, thereby accumulating heat in one compartment. It wasn’t until the 1980s that the reason prohibiting Maxwell’s demon was understood: you have to take into account the information that the demon uses to make its choices, which itself incurs a cost in entropy – in disorder – that balances the thermodynamic books.
Yet Brownian ratchets can work if they don’t rely on random thermal ‘noise’ alone – if there is some other factor that tips the balance, so that the system is out of thermodynamic equilibrium. It seems likely that Brownian ratchets exist in molecular biology, inducing directional motion of components of the cell driven by a combination of biochemical energy and noise.
What makes the ratchet described theoretically by Hänggi’s team different from previous incarnations is that they have shown how to make the different particles move in wholly different directions. Normally they’d just move in the same direction at different speeds, because the small particles find it easier to ‘climb’ the steep slopes than the big particles. Another way of saying this is that the big particles are more strongly repelled by entropy from the vicinity of the steep slopes. The researchers show that the force pulling the particles against the ratcheting flow can be adjusted to a level just big enough to overcome the tendency of the small molecules’ random jiggling to move them preferentially in the direction of the shallow slopes, but not big enough to counteract this for the big molecules. And voilà: they head off in opposite directions, separated by entropy. The team show that, after several passes through the tube, a mixture of two particles of slightly different sizes – two chopped-up, screwed-up strands of DNA like those encountered in gene sequencing, say – can be segregated damned near perfectly.
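For anyone who wants to play with the idea, a one-dimensional ‘rocked’ Brownian ratchet is easy to simulate with an overdamped Langevin equation: a sawtooth force, a constant pull, an oscillating drive, and thermal noise whose strength depends on particle size. This is strictly a toy of my own devising – the separation in the PRL scheme is genuinely entropic and three-dimensional, and every parameter value here is arbitrary – but with some tuning of the pull and the drive, small and large ‘particles’ can be made to drift in opposite directions.

import numpy as np

def sawtooth_force(x, period=1.0, asym=0.8, depth=5.0):
    # Force from a sawtooth potential: a long shallow climb over a fraction
    # `asym` of each period, then a short steep drop.
    s = x % period
    if s < asym * period:
        return -depth / (asym * period)
    return depth / ((1.0 - asym) * period)

def mean_velocity(radius, pull=-2.0, drive=6.0, temperature=1.0,
                  steps=200000, dt=1e-4, seed=1):
    gamma = radius                    # Stokes drag grows with particle radius
    diffusion = temperature / gamma   # Einstein relation: D = kT/gamma
    rng = np.random.default_rng(seed)
    x = 0.0
    for n in range(steps):
        shaking = drive * np.sign(np.sin(2 * np.pi * 50 * n * dt))
        force = pull + shaking + sawtooth_force(x)
        x += (force / gamma) * dt + np.sqrt(2 * diffusion * dt) * rng.standard_normal()
    return x / (steps * dt)           # time-averaged drift velocity

print('small particle drift:', mean_velocity(radius=0.5))
print('large particle drift:', mean_velocity(radius=2.0))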
_______________________________________________________
Sorting out tiny particles and molecules of different sizes is necessary for various technologies, from gene sequencing to nanotechnology. But it sounds like a pretty tedious business, right?
It’s no surprise, then, that a recent paper describing a new technique for doing this garnered no headlines. But it’s well worth a closer look. For one thing, it sounds like sheer magic.
Physicist Peter Hänggi at the University of Augsburg in Germany and his colleagues show that you can take a tube containing a mixture of big and small particles, apply some force to pull them through in one direction (an electric field would do the job for charged particles, say), and then give it a shake. And hey presto – the small particles will drop out of the end towards which the force pulls them, whereas the big particles drop out of the other end (D. Reguera et al., Phys. Rev. Lett. 108, 020604 (2012)).
Not only is this trick very clever but it’s also rather profound, touching on some of the most fundamental principles of physics. The device stems from a loophole proposed in the nineteenth century for evading the second law of thermodynamics, in effect making a perpetual motion machine. Needless to say, the new particle separator isn’t that, but the explanation of why not requires an excursion into the recondite field of information theory. Deep stuff from what is basically a grain sorter.
There are already ways to separate molecules by size. You can literally sieve them using solid materials with tiny pores of uniform size, such as the zeolite minerals used to separate and selectively alter some hydrocarbons in crude oil. And a technique called gel electrophoresis is used to separate strands of DNA chopped into different lengths – a standard procedure for sequencing genes – according to their size-dependent speed of being dragged along by an electric field. These techniques work well enough for most purposes. But that devised by Hänggi and colleagues is potentially more efficient.
Like all good magic, you have to look inside to see how it’s done. The tube is divided into a series of funnel-shaped chambers connected by narrow necks – looked at in cross-section, it resembles two saw blades with the teeth not quite touching. This sawtooth profile is all it takes to make the large and small particles move in opposite directions in response to a combination of the force and the shaking.
The tube is what physicists call a Brownian ratchet. The name derives from Brownian motion, the random movement of tiny particles such as pollen grains in water, or indeed water molecules themselves, due to the jiggling of heat. (For pollen grains, it’s actually the random collisions of jiggling water molecules that cause the movement.) Normal Brownian motion doesn’t favour any direction over any other – the particles just wander at random. But a bias can be introduced by putting the particles in asymmetric surroundings, such as lodging them in a series of grooves with a sawtooth profile, the slopes steeper in one direction than the other.
When Brownian ratchets were first proposed, they caused consternation because they seemed to violate the laws of thermodynamics and allow perpetual motion. In 1912 the Polish physicist Marian Smoluchowski suggested that a tiny ratchet-and-pawl might be induced to turn in just one direction by random thermal shaking. Fifty years later, Richard Feynman showed why it wouldn’t work if the temperature of the apparatus is the same everywhere.
But Brownian ratchets aren’t easily dismissed. They seem to represent an example of a Maxwell demon, which also appears to violate thermodynamics. In the nineteenth century, James Clerk Maxwell suggested how heat might travel from cold to hot, in contradiction of the second law of thermodynamics, if a little ‘demon’ opened a trapdoor between two compartments each time a ‘hot’ molecule happened to reach it, thereby accumulating heat in one compartment. It wasn’t until the 1980s that the reason Maxwell’s demon fails was understood: you have to take into account the information that the demon uses to make its choices, which itself incurs a cost in entropy – in disorder – that balances the thermodynamic books. (Erasing each bit of that information costs at minimum an amount of heat kT ln2, as Rolf Landauer showed.)
Yet Brownian ratchets can work if they don’t rely on random thermal ‘noise’ alone – if there is some other factor that tips the balance, so that the system is out of thermodynamic equilibrium. It seems likely that Brownian ratchets exist in molecular biology, inducing directional motion of components of the cell driven by a combination of biochemical energy and noise.
What makes the ratchet described theoretically by Hänggi’s team different from previous incarnations is that they have shown how to make the different particles move in wholly different directions. Normally they’d just move in the same direction at different speeds, because the small particles find it easier to ‘climb’ the steep slopes than the big particles do. Another way of saying this is that the big particles are more strongly repelled by entropy from the vicinity of the steep slopes. The researchers show that the force pulling the particles against the ratcheting flow can be adjusted to a level just big enough to overcome the tendency of the small particles’ random jiggling to move them preferentially in the direction of the shallow slopes, but not big enough to counteract this for the big particles. And voilà: they head off in opposite directions, separated by entropy. The team show that, after several passes through the tube, a mixture of particles of two slightly different sizes – two chopped-up, screwed-up strands of DNA like those encountered in gene sequencing, say – can be segregated damned near perfectly.
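The entropic repulsion can be put in rough numbers with a Fick–Jacobs-style estimate, in which the channel’s varying width becomes an effective free-energy landscape for each particle. The snippet below is a back-of-envelope sketch with made-up widths and radii, not the calculation in the paper; it simply shows why the barrier – and hence the pulling force needed to overcome it – depends on particle size.

```python
import numpy as np

kT = 1.0                            # thermal energy (arbitrary units)
w_max, w_min = 1.0, 0.3             # widest and narrowest channel widths

def entropic_barrier(r):
    """Free-energy cost, ~ -kT * ln(accessible width), of squeezing a
    particle of radius r from a wide chamber into a narrow neck."""
    return kT * np.log((w_max - 2 * r) / (w_min - 2 * r))

for r in (0.05, 0.12):
    print(f"radius {r:.2f}: entropic barrier = {entropic_barrier(r):.2f} kT")
# The bigger particle faces a markedly higher barrier, so a pulling force
# can be tuned that the small particles' thermally assisted ratcheting
# overcomes while the big particles' does not – and they part company.
```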
Below the surface
Here’s my Crucible column for the May issue of Chemistry World. Arguably a bit parochial, but hopefully not without some resonance outside the UK.
_________________________________________________________________________
According to the latest announcement in the UK Engineering and Physical Sciences Research Council’s (EPSRC’s) “Shaping capability” initiative, surface science is to receive reduced funding in future. It’s a perplexing decision.
This is just one of several controversial aspects of the direction that the EPSRC is taking. But when you look at the topic-by-topic ratings made by the council (each subject is designated ‘maintain’, ‘grow’ or ‘reduce’), it is hard not to feel a little sympathy. Almost every subject is earmarked to receive the current level of support, or more. The latter category includes many well-motivated choices, such as energy storage and photonics. Obviously not every subject can enjoy this privilege, and so hard decisions must be made. Whatever it ‘reduces’, the EPSRC is bound to incur criticism from those affected. The decision to reduce synthetic organic chemistry will surely also provoke dismay among RSC members. All the same, compromising surface science seems especially short-sighted given the apparent desire to focus on subjects that might boost economic growth.
It’s true that one of the most industrially important aspects of surface science – catalysis – is covered by a separate category that will not suffer the same fate. But there’s plenty more to the subject that deserves strong support. As Peter Knight, president of the Institute of Physics, has said in response to the announcement, “surface science is an area of interdisciplinary research, often the most fertile source of new scientific breakthroughs”.
The EPSRC argues not that the importance of surface science has declined, but that the subject is becoming assimilated into other topics. The funding cut is intended to accelerate this transition: the EPSRC seems to be proposing that funding surface science as a distinct topic is no longer the best way to support it. Or to put it another way, the topic has become a victim of its own success in making itself so pervasive.
The council says that “we would expect future surface science research to make significant contributions to other disciplines and key societal challenges”, and identifies nanotechnology and microelectronic device engineering among these. Some surface scientists have already suggested that ‘rebadging’ into such areas will rescue them.
But can applications like these be severed from the wellspring of basic science that makes them possible? Take the development of scanning probe microscopes in the 1980s, pioneered at IBM’s laboratories in Zurich. These tools, now fundamental to nanoscience and biophysics among other fields, were devised purely as a means of high-resolution surface imaging, although their potential for nanoscale manipulation of matter, probing surface forces, and exploring quantum phenomena quickly became apparent. IBM has emphasized these fundamental aspects of the methodology ever since, most recently by demonstrating that the charge distributions of single molecules can be imaged directly (F. Mohn, L. Gross, N. Moll & G. Meyer, Nature Nanotechnol. doi:10.1038/nnano.2012.20 (2012)) – an advance that could conceivably offer new insights into chemical bond formation.
This is just one example of how the development of new techniques in surface science is rarely problem-specific. Whether it is low-energy electron diffraction, surface-enhanced Raman spectroscopy, scanning optical microscopy or countless other methods, these techniques are hungrily adopted by many different fields. In fairness, the EPSRC says that a priority for surface science “in the reduced environment is the development of novel and sophisticated new tools and techniques for the study of surfaces”. But how can that objective avoid seeming diminished by its ‘reduced environment’?
And furthermore, can the core of surface science really be just methodological? I doubt it. The conceptual foundations, laid down by the likes of J. D. van der Waals and Irving Langmuir, lie with notions of surface free energy, intermolecular forces, adsorption, wetting and two-dimensional phases that are of undiminished relevance today, whether one is talking about chemical vapour deposition or biomolecular hydration. There is an intellectual unity to the discipline that transcends its rich variety of techniques.
This raises an almost philosophical question of whether a discipline can exist, and perhaps even thrive, when largely divorced from an over-arching label. At the very least, it’s a gamble. But what seems most alarming is the message that this sends out at a time when the study of surfaces and interfaces is looking ever more vital to so many areas of science and technology. The days when surface science meant looking at single molecular phases on perfect crystal faces in high vacuum are disappearing. Now we are starting to get to grips with interfaces in all their scary – as Wolfgang Pauli saw it, diabolical – complexity. Real surface processes are often dominated by impurities and mixed phases, by inhomogeneous solvation, by roughness, curvature, charge accumulation and defects. Understanding them will bring insights into cell and molecular biology, corrosion, atmospheric aerosols and cloud microphysics, nanoelectronics, biomaterials and much more.
That seems to be understood elsewhere. A new initiative for ‘solvation science’ in Germany, for example, recognizes the cross-cutting nature of studying interfaces. And despite excelling in this area, the UK lacks a dedicated surface-science body like the Surface Science Society of Japan. Such considerations suggest that this is a moment to strengthen foundations rather than chip away at them.