One of the easiest ways to bring humour to music is with timbre. It’s cheap (literally) but still funny to play Led Zeppelin’s “Whole Lotta Love” or Richard Strauss’s “Also Sprach Zarathustra” on kazoo, as the Temple City Kazoo Orchestra did in the 1970s. Most things played on kazoo are funny. It just has a comical timbre.
Such performances inadvertently make a serious point about timbre, which is that it can matter more than the notes. That is easy to overlook when music is considered as notes on paper, and musicologists have largely neglected timbre, for the simple reason that we don’t really know what it is. One definition amounts to a negative: if two sound signals differ while being identical in pitch and loudness, the difference is down to timbre.
One feature of timbre is the spectrum of pitches in a note: the amplitudes of the various overtones. These are quite different, for example, for a trumpet and a violin both playing the same note. But our sense of timbre depends also on how this spectrum, and the overall volume, change over time, particularly in the initial “attack” period of the first few fractions of a second. These are acoustic properties, though, and it might be more relevant to ask what perceptual qualities we use to distinguish timbre. Some music psychologists claim that these are things like “brightness” and attack; others argue that we interpret timbre in terms of the physical processes we imagine causing the sound: blowing, plucking, striking and so on. It’s significant, too, that we often talk of the “colour” of a sound.
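It’s easy to hear this for yourself. Here is a minimal Python sketch (the overtone weightings and attack times are invented for illustration) that synthesizes two notes with the same fundamental pitch and the same peak level, differing only in spectrum and attack – which is to say, only in timbre.

```python
import numpy as np

SR = 44100    # sample rate (Hz)
F0 = 440.0    # both notes share this fundamental: identical pitch
DUR = 1.0     # duration (s)

def note(overtone_amps, attack):
    """Sum the harmonics of F0 with the given relative amplitudes,
    shaped by a linear attack of the given length (in seconds)."""
    t = np.linspace(0, DUR, int(SR * DUR), endpoint=False)
    wave = sum(a * np.sin(2 * np.pi * F0 * (n + 1) * t)
               for n, a in enumerate(overtone_amps))
    wave *= np.minimum(t / attack, 1.0)   # rise, then sustain
    return wave / np.max(np.abs(wave))    # same peak level (a crude proxy for loudness)

# Invented spectra: a bright, brassy weighting with a fast attack,
# and a mellow weighting with a slow, bowed-string-like attack.
brassy = note([1.0, 0.8, 0.7, 0.6, 0.5, 0.4], attack=0.02)
mellow = note([1.0, 0.4, 0.15, 0.05], attack=0.15)
```

Written out to a sound file, the two are unmistakably different “instruments” to the ear, even though nothing but the overtone balance and the attack envelope distinguishes them.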
Arnold Schoenberg thought it should be possible to write music based on changes of timbre rather than pitch. The notion didn’t really work, arguably because we don’t know enough about how the brain organizes timbre. All the same, Schoenberg and his pupils created a style called Klangfarbenmelodie (sound-colour melody) in which melodies were parcelled out between instruments of different timbre, producing a mesmeric, shimmering effect. Anton Webern’s arrangement of a part of Bach’s The Musical Offering is the most renowned example.
There’s one thing for sure: timbre is central to our appreciation of music, and if we relegate it below more readily definable qualities like pitch and rhythm then we miss out on a huge part of what conditions our emotional response. It would be fair to say that critical opinion on the music of heavy-metal band Motörhead, led by the late bass guitarist Lemmy Kilmister, was divided. But if ever there was a music defined by timbre, this was it.
Thursday, March 17, 2016
The Roman melting pot
Here's my column for the March issue of Nature Materials.
_________________________________________________________
Recycling of materials is generally good for the planet, but it makes life hard for archaeologists. Analysis of ancient materials, for example by studying element or isotope compositions, can provide clues about the provenance of the raw materials and thus about the trade routes and economies of past cultures. But that business becomes complex, even indecipherable, if materials were reused and perhaps reprocessed in piecemeal fashion.
This, however, does seem to have been the way of the world. Extracting metals from ores and minerals from quarries and mines, and making glass and ceramics, were labour-intensive and often costly affairs, so that a great deal of the materials inventory was repurposed. Besides, the knowledge was sometimes lacking to make a particular material from scratch in situ. The glorious cobalt-blue glass in the windows of medieval French churches and cathedrals is often rich in sodium, characteristic of glass from the Mediterranean region. It was probably made from shards imported from the south using techniques that the northern Europeans didn’t possess, and perhaps dating back to Roman or Byzantine times. The twelfth-century monk Theophilus records that the French collected such glass and remelted it to make their windows [1].
In that instance, composition does say something about provenance. But if glass was recycled en masse, the chemical signature of its origin may get scrambled. It’s not surprising that such reuse was very common, for making glass from scratch was hugely burdensome: by one estimate, 100 kg of wood was needed to produce the ash for making 2 kg of glass, and collecting it took a whole day [2].
Just how extensively glass was recycled in large batches in Roman times is made clear in a new study by Jackson and Paynter [3]. Their analysis of glass fragments from a Roman site in York, England, shows that a lot of it came out of “a great big melting pot”: a jumble of recycled items melted together. The fragments can be broadly divided into classes differentiated by their antimony and manganese compositions. Both of these metals were typically added purposely during the Roman glass-making process because they could remove the colour (typically a blue-green tint) imparted by the impurities, such as iron, in the sand or ash [4]. Manganese was known in medieval Europe as “glassmaker’s soap”.
It was the difficulty of making colourless glass that made it so highly prized – and so particularly likely to be recycled. The results of Jackson and Paynter confirm how common this was. The largest category of glass samples that they analysed – around 40 percent of the total – contained high levels of both Sb and Mn, implying that glass rendered colourless by either additive was separated from the rest and then recycled together in the melt.
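Schematically, the sorting amounts to something like the sketch below – in Python, and with the 0.1 wt% cut-off and the sample analyses invented purely for illustration; they are not Jackson and Paynter’s actual criteria or data.

```python
# Hypothetical classification of glass fragments by decolourizer content.
THRESHOLD = 0.1   # wt%: an invented cut-off for "deliberately added"

def classify(sb, mn):
    """Assign a fragment to a compositional group from its Sb and Mn levels (wt%)."""
    if sb >= THRESHOLD and mn >= THRESHOLD:
        return "high Sb-Mn: both decolourized types recycled together"
    if sb >= THRESHOLD:
        return "Sb-decolourized"
    if mn >= THRESHOLD:
        return "Mn-decolourized"
    return "no deliberate decolourizer (naturally tinted)"

# Invented analyses (Sb, Mn) for four fragments:
for sb, mn in [(0.4, 0.02), (0.03, 0.6), (0.35, 0.5), (0.01, 0.04)]:
    print(f"Sb {sb:.2f}%, Mn {mn:.2f}% -> {classify(sb, mn)}")
```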
But most of those high Sb-Mn samples aren’t colourless. That’s because remelting tends to incorporate other impurities, such as aluminium, titanium and iron, from the crucibles, furnaces or blowing irons. The recycled glass may then end up as tinted and undistinguished as that made with only low amounts of Mn. As a result, while it is derived from once highly prized, colourless glass reserved for fine tableware, this glass becomes devalued and used for mundane, material-intensive items such as windows and bottles. Eventually it just disappears into the melting pot.
1. Theophilus, On Divers Arts, transl. Hawthorne, J. G. & Smith, C. S. (Dover, New York, 1979).
2. Smedley, J. W., Jackson, C. M. & Booth, C. A., in Ceramics and Civilisation Vol. 8, eds McCray, P. & Kingery, W. D. (American Ceramic Society, 1998).
3. Jackson, C. M. & Paynter, S., Archaeometry 58, 68-95 (2016).
4. Jackson, C. M., Archaeometry 47, 763-780 (2005).
_________________________________________________________
Tuesday, March 01, 2016
Many worlds or many words?
I’ve been rereading Max Tegmark’s 1997 paper on the Many Worlds Interpretation of quantum mechanics, written in response to an informal poll taken that year at a quantum workshop. There, the MWI was the second most popular interpretation adduced by the attendees, after the Copenhagen Interpretation (which is here undefined). What, Tegmark asks, can account for the robust, even increasing, popularity of the MWI even after it has been so heavily criticized?
He gives various possible reasons, among them the idea that the emerging understanding of decoherence in the 1970s and 1980s removed the apparently serious objection “why don’t we perceive superpositions then?” Perhaps that’s true. Tegmark also says that by then enough experimental evidence had accumulated that quantum mechanics really is weird (quantum nonlocality, molecular superpositions etc) for experimentalists (apparently a more skeptical bunch than theorists) to start concluding, “hell, why not?” Again, perhaps so. Perhaps they really did think that “weirdness” here justified weirdness “there”. Perhaps they had become more ready to embrace quantum explanations of homeopathy and telepathy too.
But honestly, some of the stuff here. It’s delightful to see Tegmark actually write down for once the state vector for an observer, since I’ve always wondered what that looked like. This particular observer makes a measurement on the spin state of a silver atom, and is happy with an up result but unhappy with a down result. In the former case, her state looks like this: |☺>. The latter case? Oh, you got there before me: |☹>. These two states are then combined as tensor products with the corresponding spin states. These equations are identified by numbers, rather as you do when you’re doing science.
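For the record, the scheme amounts to something like this (in my notation, not Tegmark’s exact equations, and with general amplitudes α and β added so as to cover the case I come back to at the end):

  (α|↑> + β|↓>) ⊗ |ready>  →  α|↑>|☺> + β|↓>|☹>,  with |α|² + |β|² = 1

For the 50:50 measurement, α = β = 1/√2; for a 70:30 one, |α|² = 0.7.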
Well, but what then of the objection that the very notion of probability is problematic when one is dealing with the MWI, given that everything that can happen does happen with certainty? This issue has been much debated, and certainly it is subtle. Subtler, I think, than the resolution Tegmark proposes. Let’s suppose, he says, that the observer is sleeping in bed when the spin measurement is made, and is placed in one or other of two identical rooms depending on the outcome. Yes, I can see you asking in what sense she is then an observer, and invoking Wigner’s friend and so on, but stay with me. You could at least imagine some apparatus designed to do this, right? So then she wakes up and wonders which room she is in. And she can then meaningfully calculate the probabilities – 50% for each. And, says Tegmark, these probabilities “could have been computed in advance of the experiment, used as gambling odds, etc., before the orthodox linguist would allow us to call them probabilities.”
Did you spot the flaw? She went to sleep – perhaps having realized that she’d have a 50% chance of waking up in either room – and then when she woke up she could find out which. But hang on – she? The “she” who went to sleep is not the “she” who woke up in one of the rooms. According to this view of the MWI, that first she is a superposition of the two shes who woke up. All that first she can say is that with 100% certainty, two future shes will occupy both rooms. At that point, the “probability” that “she” will wake up in room A or room B is a meaningless concept. “She”, or some other observer, could still place a bet on it, though, right, knowing that there will be one outcome or the other? Not really – rational bettors would know that it makes no difference, if the MWI holds true. They’ll win and lose either way, with certainty. I wonder if Max, who I think truly does believe the MWI, would place a bet?
The point, I think, is that a linguist would be less bothered by the definition of “probability” here than by the definition of the observer. Posing the issue this way involves the usual refusal to admit that we lack any coherent way to relate the experiences of an individual before a quantum event (on which their life history is contingent) to the whole notion of that “same” individual afterwards. Still, we have the maths: |☺> + |☹> (pardon me for not normalizing) becomes |☺> and |☹> afterwards. And in Tegmark’s universe, it’s the maths that counts.
Oh, and I didn’t even ask what happens when the probability of the spin measurements is not 50:50 but 70:30. Another day, perhaps.
Friday, February 19, 2016
Manipulated by music
Here's my music psychology column from the latest issue of Sapere magazine.
______________________________________________
Does Alex, the ultra-violent delinquent in Anthony Burgess’ novel A Clockwork Orange, find something in Beethoven that matches his psychopathic tendencies? Does Beethoven perhaps even incite them? We’re left to guess. It seems more than mere coincidence, however, that 16 years after Stanley Kubrick’s notorious movie of the novel, musicologist Susan McClary argued that Beethoven’s Ninth Symphony, one of Alex’s favourites, articulates a rapist’s rage.
That suggestion drew much criticism, even derision. But behind it seems to lie the suspicion that music can influence behaviour, for better or worse. It’s an ancient idea. Aristotle felt that the wrong kind of music can lead a person astray, while the right kind cultivates good citizenship. Such convictions meant that music was strictly regulated in Athens and Sparta. The Greeks organized their music in terms of modes – a little like our major and minor scales – and Plato insists that the Dorian mode is the one to induce bravery and resolve. Armies have long marched to war to the sounds of martial music, whether it’s the skirling of a Scottish bagpipe or Wagner’s “Ride of the Valkyries” blasting from the attack helicopters in Apocalypse Now.
That’s just one arena in which music is thought to manipulate mood. Ever since efficiency became the mantra of the modern workplace, employers have hoped that music will boost workers’ productivity. There’s a great deal of wishful thinking and shoddy science in this field, but some serious study too. The stereotype is of factories piping music to workers engaged in robotic routines, but in fact much of the interest is in using music to boost creativity. One study in 2012 found that workers in a computer software company solved problems faster and had better ideas when allowed to listen to music of their choice: a sign that positive mood makes for better work, rather than an indication of specific links between the type of music and productivity. The effects were small, though, and almost non-existent for expert workers.
Retailers have a strong interest in this stuff. Can music make people buy more? I’m afraid so. It’s been shown that certain musical genres enhance our receptiveness to – and what we’ll pay for – certain products. We’ll pay more for mundane products like toothbrushes and light bulbs when we hear country music, and more for products connected to “social identity” (jewellery, pin badges) when listening to classical music. But sellers beware: get the musical choice wrong, and it’s worse than no music at all.
______________________________________________
Friday, February 12, 2016
On being "harsh" to Babylonia
Never read the comments, they say, and indeed it’s often a depressing experience. But it can be instructive too. I’m a little astonished, but better informed, by the comments below my piece for the Atlantic on Babylonian astronomy. It had honestly never occurred to me that merely by suggesting we not call the Babylonian astronomers scientists I would be deemed to be dissing them. From what I’ve seen, historians will not have anticipated this misconception either.
It speaks volumes, though, about our cultural preconceptions. The idea seems to be that if you deny someone is doing science then you’re saying they are ignorant fools dabbling in a load of superstition. Oh crikey – how did the public perception of the history of science ever come to this? What have we done to land us here? Who is to blame? It seems that all those scientists cherry-picking from the past to hand out medals for getting things “right” really have captured the conversation, if the popular conception is that if you don’t get a pat on the head for being a “good scientist” then you fail the test.
Actually this really is a bit depressing. I’m not sure even where to start. Maybe just with this: when we say that we are not going to mine the past for congruence with the present, we are not dismissing that past as worthless ignorance. On the contrary, it means that we are taking it seriously. (And that, incidentally, is why modern “astrology” seems to me not to be perpetuating but in fact to be undermining its tradition. To pretend that astrology is a serious business today is, even if unintentionally, to do an injustice to its historical context.) So let me just say it again: Babylonian astronomy was not an “imperfect science” but a self-contained intellectual framework woven into the rest of their culture.
Friday, January 29, 2016
What is selfish DNA?
Richard Dawkins’ The Selfish Gene was a landmark book in many ways: the first to lay out for a general audience the gene-centred view of evolution, but also one of the first, arguably since the 1920s, to re-invigorate science popularization as a part of the cultural conversation – and to show how beautifully written it could aspire to be. Dawkins might be divisive today for a variety of reasons, but science popularizers owe him a huge debt.
That’s why it is good and proper to have The Selfish Gene celebrated in Matt Ridley’s nice article in Nature. You can tell that I’m preparing to land a punch, can’t you?
Well, sort of. You see, I can’t help but be frustrated at how Matt turns one of the most problematic aspects of the book into a virtue. He suggests that Dawkins’ viewpoint was the inspiration for the discussions of selfish genes presented in Nature in 1980 by Orgel and Crick and by Doolittle and Sapienza. And it is true that The Selfish Gene is the first citation in both papers.
But both cite the book as one of the most recent discussions of the issue. As Orgel and Crick say, “The idea is not new. We have not attempted to trace it back to its root.” So it is not at all clear that, as Matt says, “a throwaway remark by Dawkins led to an entirely new theory in genomics”.
The problem is not simply one of quibbling about priority, however. Matt points out that this “throwaway remark” concerns the “apparently surplus DNA” – in the hugely problematic later coinage, junk DNA – that populates the genome, and which Dawkins suggested is merely parasitic. Yes indeed, and this is what those two later Nature papers discuss – as Orgel and Crick put it, DNA that “makes no specific contribution to the phenotype”.
But is this what The Selfish Gene is about? Absolutely not, and that’s why Dawkins’ remark was throwaway. His contention was that all genes should be regarded as “selfish”. Orgel, Crick, Doolittle and Sapienza are specifically talking about DNA that is produced and sustained by non-phenotypic selection. This, they say, is what we might regard as truly selfish DNA. Now, one can argue about the word “selfish” even in that context – it perhaps only makes sense if this DNA becomes detrimental to the survival of the organism. But the implication is that the phenotypic DNA is then not selfish, and that the term should be reserved for parasitic DNA. That makes good sense – and it is precisely these waters that Dawkins’ title muddied.
I can’t resist also asking what Matt means by saying that “genes that cause birds and bees to breed survive at the expense of other genes”. (“No other explanation makes sense…”) It seems to me more meaningful to say “genes that cause birds and bees to breed survive while helping other genes to survive.” I don’t exactly mean here to allude to the semantic selfish/cooperative debate (although there are good reasons to have it), but rather, it seems to me that Matt’s statement only makes sense if we replace “genes” with “alleles”. This is not pedantry. Genes do not, in general, compete with each other – at least, that is not the basis of the neodarwinian modern synthesis. Although one might find examples where specific genes do propagate at the expense of others, in general it is surely different variants of the same gene that compete with each other. And when a new allele proves to be more successful, other genes come along for the ride. To fail to make this distinction (which of course Matt recognizes) seems to me to propagate a very common misconception in evolutionary genetics, which is that genes are little pseudo-organisms all competing with one another. That isn’t a helpful or accurate way to present the picture.
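To see concretely what “coming along for the ride” means, here’s a toy Wright–Fisher-style simulation – a sketch with invented parameters, nothing from Matt’s article – in which selection acts on the alleles A and a at one locus while a neutral allele B at a perfectly linked locus rises in frequency purely by hitchhiking.

```python
import random

random.seed(1)
N = 1000      # haploid two-locus genomes (invented toy parameters)
GENS = 100
S = 0.05      # selective advantage of allele A over a

# The favoured allele A starts rare and always linked to the neutral
# marker B; everyone else carries a-b. With no recombination, B can
# change frequency only by riding along with A.
pop = [("A", "B")] * 50 + [("a", "b")] * (N - 50)

for _ in range(GENS):
    weights = [1 + S if g[0] == "A" else 1.0 for g in pop]
    pop = random.choices(pop, weights=weights, k=N)   # selection + drift

freq_A = sum(g[0] == "A" for g in pop) / N
freq_B = sum(g[1] == "B" for g in pop) / N
print(f"A: {freq_A:.2f}   B: {freq_B:.2f}")   # B tracks A exactly
```

The neutral B ends up common not because it competed with anything, but because it was linked to a successful allele at another locus – which is just the picture the allele/gene distinction is meant to preserve.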
Matt understands all this far better than I do. So I am quite prepared for him to tell me I have something wrong here.
Friday, January 15, 2016
More on the beauty question
Here’s my review of Frank Wilczek’s book A Beautiful Question: Finding Nature’s Deep Design, which appeared in Physics World last year.
__________________________________________________________________
There aren’t many books on which you will find admiring blurbs by both Lawrence Krauss and Deepak Chopra, but this is one. You can see why. Wilczek writes in a freewheeling, almost poetic way, while retaining a penetrating and rigorous vision of what he wants to say about physics, science and the world.
His opening question – “Is the world a work of art?” – sets the tone: at the same time lyrical and baffling. Wilczek’s answer, as you might guess from the title, is “Yes, and it’s a beautiful one.” He reaches this conclusion after surveying the central role that symmetry plays in modern physics, from the shapes of atomic orbitals to the structure of quantum chromodynamics. He makes one of the most compelling cases I have seen for why symmetry can be considered a guiding principle worth heeding in efforts to push back the frontiers of physical theory. The latest prospect of doing that – of expanding fundamental physics beyond the Standard Model, which Wilczek prefers to call the Core Theory – comes from the principle of supersymmetry, which promises to unify bosons (“force particles”, with integer spin) and fermions (“substance particles”, with half-integer spin). This idea looms large on the agenda of the Large Hadron Collider now that it has returned to operation after an upgrade. Thanks to Wilczek, I now have a better sense of why the theory not only might be true but ought to be.
All the same, if this were a regular popular science book then it would be considered something of a mess. Like poetry, Wilczek’s prose is often highly concentrated thought, and he doesn’t always bother to unravel it or even to define his terms. Even with the glossary, I’m not sure how much the uninitiated reader will get from statements such as “Color gluons are the avatars of gauge symmetry 3.0.” What seem to be more straightforward concepts, such as light perception by the eye, become reconfigured into shapes that, while fitting into Wilczek’s intellectual framework, take time to decrypt: “When we perceive a color, we see a symbol of change, not anything that changes.”
Wilczek’s suggestion that, when the going gets tough, we read the text like poetry rather than hoping to understand all it says, seems optimistic. But these challenges aren’t, I think, exactly defects of the book, because this is not a regular science book. Like Stephen Hawking's A Brief History of Time, it is instead the unique vision of a brilliant mind (with the added advantage that it doesn’t pretend otherwise). For every baffling passage there are other moments when Wilczek explains something in a way that no one else has, or perhaps could, so that you come away with a fresh perspective on something that you thought you understood already. Never again will I be frustrated by pop-science suggestions that Einstein simply decided to posit the constancy of the speed of light: of course he didn’t, and Wilczek cuts straight to the physics of the matter. Put simply, he sees things differently, and that’s the true and compelling reason to read the book.
For the fact is that this book is not a work of explanation but, like Plato’s Timaeus, an extended argument – indeed, what you might call a gentle polemic. It wants to steer us towards Wilczek’s own answer to his initial question. And so, quietly and soberly, he marshals facts that fit his case and soft-pedals ones that don’t. That’s fine – it is what polemics do – so long as we recognize what’s happening. For example, in his discussion of Pythagorean musical consonance he gives us a simple (albeit speculative) physical mechanism for why we prefer harmonies with simple frequency ratios while all but ignoring the fact that we plainly don’t: unless you’ve heard music played in tunings other than equal temperament, you’ll never have heard the interval of a Pythagorean fifth. And the discussion of Chinese yin and yang glosses over the fact that it is not an aesthetic idea but a philosophical one: beauty is never, to my knowledge, mentioned by Chinese philosophers in this context.
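To put numbers on that, here’s a quick Python calculation of how far the intervals we actually hear in equal temperament sit from the pure ratios that the consonance argument invokes.

```python
import math

def cents(ratio):
    """Interval size in cents (1200 cents = one octave)."""
    return 1200 * math.log2(ratio)

# Pure ratio vs. its 12-tone equal-temperament approximation:
print(f"fifth: {cents((3/2) / 2**(7/12)):+.2f} cents")   # ~ +1.96: almost pure
print(f"third: {cents((5/4) / 2**(4/12)):+.2f} cents")   # ~ -13.69: audibly impure
```

The tempered fifth is close enough to 3:2 to pass, but the major third we have all grown up with sits nearly 14 cents from the “simple ratio” 5:4 – and we happily hear it as consonant anyway.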
Such goal-directed argument is most apparent in Wilczek’s discussion of beauty itself, for which the closest thing he gives to a definition is “symmetry and economy of means”. But neither of these features plays a key role either in most art or in most theories of aesthetics. Immanuel Kant, who made one of the most searching enquiries into the nature of beauty, argued that there is something repugnant in too much order and regularity. Even Francis Bacon asserted that “There is no excellent beauty that hath not some strangeness in the proportion”.
Kant’s careful distinction between real beauty and the intellectual satisfaction of perceiving an idea is precisely what physicists ignore when, like Lewis Carroll’s Humpty Dumpty, they make the word mean just what they want it to mean. Wilczek at least admits that not all types of beauty are included in his picture; but the physicists’ usual conception of beauty is Platonic in the extreme and barely if at all relevant to the arts. For Plato it was precisely art’s lack of symmetry (and thus intelligibility) that denied it access to real beauty: art was just too messy to be beautiful. It seems clear, and important, that many physicists do feel a kind of transcendent joy in the symmetries of nature’s laws. But if they really want to talk about it in terms of beauty, they should acknowledge that there is an intellectual heritage to that notion that they will have to confront.
__________________________________________________________________
Thursday, January 14, 2016
What's in a name?
Shawn Burdette’s blog post on element-naming has some nice things in it, but I wonder if he appreciates that the entire discussion around the names of the four new elements is itself largely a bit of fun? Sure, I can imagine that there are some people signing the petitions for lemmium and octarine thinking that the Japanese or Russian teams are going to say “Hey, several of those Brits want us to name this element after a heavy-rock musician we’ve never heard of/some magical colour in a series of books by a fantasy writer we’ve never heard of – well, that seems like a good idea to us.” Who knows, perhaps they are hoping one of the scientists will pipe up with “Oh yeah, I remember Silver Machine from my student days in Kyoto/St Petersburg. Let’s do it, freaks!” But really, do most of the signatories think this is anything but a fun way to celebrate a couple of recently deceased people whose work they liked?
The point is that most people aren’t suggesting names because they have the slightest hope, or even wish, that they’ll be taken seriously, or that the researchers need a bit of help. Rather, this is an unusually rich opportunity to both make a few funny/wistful/ridiculous suggestions and to have a considered discussion about how these names come about. If we aren’t allowed to do that unless we are “in the element discovery business”, it’s a sadder world. Certainly that’s why I said in my Nature piece that levium is a name I’d love to see, not one that I think ought to be adopted. It was a personal view (the clue was in the article category), not an absurd attempt to “impose my ideas for element names on the discoverers”. And if it is sanctimonious to wish for element names to be inclusive rather than proprietorial, so be it.
Which brings us to nationalism. Let me confess right away that I am not entirely consistent on this, because I can’t help feeling a soft spot for the Curies’ polonium. Poland had a pretty crap time of it in the 19th and early 20th century, and besides, Marie seemed to have regarded this as a kind of homage to a distant homeland rather than a boast. No, my case is not airtight. But as Shawn says, germanium and francium did seem more aggressively flag-waving (I’ve never got to the bottom of the accusations of egotism behind Lecoq’s gallium.)
And it surely doesn’t stop there: americium smells of the Cold War, although in fairness this doesn’t appear to refer solely to the United States. If berkelium, californium, dubnium, hassium and livermorium aren’t necessarily expressions of patriotism, they do seem to veer towards bragging. Shawn asks: well, why not? It is damned hard to do this work, why shouldn’t the teams get the credit, even if it seems a little vain? I’m not convinced. They definitely deserve credit, of course, but there are other avenues for that. My biggest concern, though, is that this triumphalism is a reflection of the competitiveness of the whole business, which seems unfortunate and tiresome. When there is a dispute over priority and then the “winner” goes and names the element after themselves (in effect), it is like sticking your tongue out at the “losers”: it’s us, not you. The disputatious nature of element-making during the Cold War years is notorious, and even if things are somewhat more collaborative now, there are still arguments.
It’s precisely because the work is so hard that priority can be so contentious: it is a matter of fine judgement whether a claim is convincing or not. The Russian team insists that their claim for having seen element 113 in 2003 should count as the first, and that the Japanese group came second the next year. Their complaint that the Japanese result isn’t going to be easily reproduced by anyone, and that in any case the leader of that team Kosuke Morita learnt his chops at Dubna in the first place, seems particularly ungracious. All the same, can we be so sure that the Russians don’t have a case? I trust the IUPAC experts, but it seems unlikely that there are completely cut-and-dry arguments. Imagine if the situation was reversed: if the Japanese had toiled hard to get a suggestive decay signature, their first shot at an element discovered in the Far East, only to be dismissed by IUPAC in favour of those Russians again, who go and slap “moscovium” on it. Would we feel that was a good name that enhanced the justice of the situation?
This, of course, is science as normal – different people arrive at much the same result at much the same time, and priority is a murky issue. But this is precisely why a winner-takes-all approach to naming adds to the distorted view of discovery that such emphasis on coming first produces. I fully understand that for some individual scientists, priority can matter hugely to career prospects, even though it damned well shouldn’t. But to big, substantially funded projects like this? I don’t think so. Even if element-naming wasn’t solipsistic, there would surely still be a strong desire to claim priority. But do we have to make it worse?
Does music really need a new philosophy?
I always enjoy Roger Scruton’s writing on music, even when I disagree with him vehemently. That holds true for his piece on the role of philosophy in music. We should ignore the habitual bluster about the melodic and harmonic paucity of popular music, which Scruton seems insistent on analysing in a social vacuum as though it is beholden to the same compositional and aesthetic rules as Mozart; indeed, most of what Scruton writes about music totally ignores the fact that it is a cultural activity with many functions, not just an artifact to appreciate over a glass of fine wine. (I have visions of him challenging the idea that Bowie was a great musical artist because his songs had poor voice-leading.) And Scruton’s perpetual denigration of today’s callow youth, passively consuming processed musical pap under their hoodies, makes you wish he’d get to bloody well know a few young people instead of sneering at them from afar. Most of the kids I know are learning an instrument – not that this is an essential aspect of active engagement with music, but it obviously helps.
I’m not sure that Scruton’s article is really concerned so much with philosophy at all (there is a large body of work on this that he doesn’t touch on, and which is not obsessed with modernist ideas, such as Stephen Davies’ excellent 2005 book Themes in the Philosophy of Music). His emphasis is rather on systems and rules of composition. Still, I agree with him that Schoenberg’s twelve-tone method is pretty arbitrary, that Adorno wrote with priestly dogmatism, and that serialism systematically undermined the accumulated wisdom about making melodies coherent. However, just as Schoenberg didn’t realise why this was so, so Scruton has only the vaguest sense of why Western tonal music does have this property of auditory coherence. It’s depressing to hear yet another appeal to the “naturalness” of the Western diatonic scale (under which system of intonation, one wonders? Have you heard how weird the Pythagorean scale sounds to our ears now?). Not only is there no good evidence that the harmonies it creates are innately consonant (with the exception of the octave and perhaps the fifth), but Scruton’s appeal to the harmonic series ignores the fact that Schoenberg appealed to the very same source of justification – he just wanted to “emancipate” the higher harmonics. If Scruton showed more awareness of musical cultures whose harmonic norms depart widely from Western tonality (say, Croatian ganga or Indonesian gamelan), I think he’d be less inclined to assert its naturalness.
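To give that weirdness a number: the sketch below (Python, with my own labelling of the notes) builds the diatonic notes of a Pythagorean scale by stacking pure 3:2 fifths and folding them into one octave, then compares the resulting major third with the just and equal-tempered ones.

```python
import math

def cents(ratio):
    return 1200 * math.log2(ratio)

# Diatonic notes as fifths above (or below) C: F-C-G-D-A-E-B.
fifths_from_C = {"F": -1, "C": 0, "G": 1, "D": 2, "A": 3, "E": 4, "B": 5}
pyth = {}
for name, k in fifths_from_C.items():
    r = (3 / 2) ** k
    while r >= 2:          # fold into a single octave
        r /= 2
    while r < 1:
        r *= 2
    pyth[name] = r

print(f"Pythagorean third (C-E): {cents(pyth['E']):.1f} cents")   # ~407.8
print(f"Just third (5:4):        {cents(5/4):.1f} cents")         # ~386.3
print(f"Equal-tempered third:    {cents(2**(4/12)):.1f} cents")   # 400.0
```

The Pythagorean third comes out more than a fifth of a semitone sharper than the just third (the gap is the syntonic comma), which is exactly why the scale sounds so strange to ears raised on equal temperament – and why appeals to its “naturalness” need handling with care.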
The existence of a tonic and of a hierarchy of note usage is indeed part of how much musical melody becomes intelligible and perceptually grouped, and it also contributes to the sense of tension and release. The circle of fifths, modulation and voice-leading aren’t by any means essential in rich and complex music, but they can certainly be put to good use for coherence, variation and nuance in Western tonal music, once they become part of the learnt musical language. So if all this is ditched, then Scruton is quite right to assert that other “binding” structures are needed if one wants music that has an easily apprehended cognitive structure. (I have written about this in some detail, with specific focus on serialism and modernism, here.)
But there are ways to achieve cognitive coherence within serialism, and Berg in particular was masterful in using rhythm, pitch relationships and other techniques to do so. (I don’t fully understand how he does it, but I suspect it was intuitive.) Without such things, Scruton rightly asserts that no “normal ear” (which is to say, no mind employing the mental grouping mechanisms we acquire for navigating an auditory landscape) can hold the music together. Yet if he showed any interest in the cognition of music, he’d be less sure that the traditional rules of the Western tonal style were the only means of achieving this.
Yet does music have to hold together in that way? We’re back to Scruton’s insistence on listening to all music with an ear attuned to Mozart. True, if we’re not going to do that then we have to learn a new way of listening, which is not easy when you’ve been immersed in the Western tonal tradition from birth (as most Westerners have). But might it not be worth trying? Personally, I’ve found that it is. Ligeti, for example, offers musical experiences based on texture or a kind of pointillist sonic painting. OK, you won’t go away humming the tunes, but I would be sad if that were always held up as the test of fulfilling music.
Beyond all this, the notion that contemporary classical music (whatever that means) is still in thrall to serialism is of course absurd. These remarks might have been more pertinent 50 years ago, but now the diversity of styles is exhilarating and dizzying. Pierre Boulez is dead, Roger, and we can do what we like! (I don’t mean to knock Pierre, who seemed to loosen up somewhat in old age, but really he was a bit of a serialist snob in his time.)
What is the “philosophy” that Scruton wants to see in place of that of Adorno and the other champions of modernism? One, apparently, in which “true artists are not the antagonists of tradition but their [sic] latest advocates”. There speaks a dyed-in-the-wool conservative, of course, but I have some sympathy with the idea that innovators extend and transform tradition rather than sticking the boot into it. Even the Sex Pistols arguably did that (if the “tradition” includes MC5, Iggy and the Stooges and garage rock generally). But I wouldn’t expect Scruton to approve of that example.
Thanks to Ángel Lamuño for bringing Scruton’s article to my attention.
Tuesday, January 12, 2016
The place of the periodic table
I can fully understand that Eric Scerri, who has done so much to explain, popularize and clarify the periodic table, would object to my suggestion in a Nature article that “chemists rarely need to refer to it” and that it “holds more interest and glamour for the public than it does for the working chemist”. These statements are too general; I should have said “many” (most?) chemists. There are some who surely do use it, and a rather small group of others – Eric among them, of course – who expend a lot of time and thought on the right way to structure it. Those latter questions are interesting and valuable, and I regret that Eric seems to have been offended by an apparent implication (not intended) that they are not.
If I exaggerate, it’s to make a point, which is that it is not terribly good for chemistry if it is seen as being all about the periodic table – and that is the impression I think non-scientists often get. Not only does it obscure what most chemists do, but it leads to the idea that the quantum explanation of the periodic table means that chemistry is “just physics”, or that, now we know all the elements (except ones we make ourselves), “pure” chemistry is pretty much over as an academic discipline (if you don’t believe me, see here). And chemistry is not alone in the risks associated with giving too much emphasis to its organizational schemas, as I say. One could easily get the impression, from Higgs- and LHC-mania (which is fine in itself), that all physicists want to do is find new particles. Yet most physicists never need to consult the tabulation of the standard model, even mentally. Nor do most biologists need to know the genetic code (though of course they learn it anyway). This is not a question of whether these lists and tables and classifications are significant – of course they are. It is about guiding public perception away from the notion that this is what the respective disciplines are all about.
The periodic table is not a “mere list”. It is far richer than that. But chemistry as a whole is much, much richer still, because it is primarily about making things with, and not simply categorizing, its building blocks. I am not convinced that this is widely understood (Tom Lehrer’s song, for all that it’s fun, suggests as much), and I worry that at least some of the excitement about the new elements amounts to the perception that “hey, we’ve completed the list!” That’s the challenge that needs to be faced.
Sunday, January 10, 2016
The myth of the Enlightenment (again)
To cite Kant in defence of the “Enlightenment values” of freedom of speech, democratic representation, universal equality and so forth, as Nick Cohen does here, is simply to invite the response that Kant rejected democracy and displayed the conventional misogyny, racism and class-based snobberies of his times. In other words, it is to incite an empty argument in which we hold Kant anachronistically to account for the prejudices that just about every other educated and privileged male European of his age shared.
Which is why it drives me up the bloody wall that folks like Cohen are still banging on about “Enlightenment values” – by which they generally mean some carefully selected values advanced by certain Enlightenment figures that we (some of us – me and Nick alike) would like to see upheld today, such as freedom to think for ourselves. The sad irony is that Cohen seems to think this is a different category of statement from speaking of equally meaningless (because utterly polysemous) “Christian values”.
Cohen’s criticisms of the pope in his article are entirely justified. Trying to support them by appealing to some fictitious Enlightenment does him no favours at all. He calls “people who call themselves liberals” (that would be me, then) “thoughtless prigs” who probably don’t know what the Enlightenment was. Isn’t it odd, then, that folk who talk today about Enlightenment values are usually arguing in favour of a secular, classless, “rationalistic” democracy? Because, to state the bleedin’ obvious, there were no secular, classless democracies in eighteenth-century Europe.
And the heroes of the Enlightenment had no intention of introducing them. Take that other Enlightenment icon, Voltaire. Like Kant, Voltaire had some attractive ideas about religious tolerance and separation of church and state. But he was representative of the philosophes in opposing any idea that reason should become a universal basis for thought. It was grand for the ruling classes, but far too dangerous to advocate for the lower orders, who needed to be kept in ignorance for the sake of the social order. Here’s what he said about that: “the rabble… are not worthy of being enlightened and are apt for every yoke”. Voltaire is often said to have been a deist, meaning that he believed in a God whose existence can be deduced by reason rather than revelation, and who made the world according to rational principles. But he insisted that ideas like this should be confined to the better classes. The message of the church should be kept simple for the lower orders, so that they didn’t get confused. Complex ideas such as deism, Voltaire said, belonged only “among the well-bred, among those who wish to think”.
The Enlightenment was not strongly secular in any case. Atheism was very rare, and condemned by almost all philosophers as a danger to social stability. Rousseau called for religious tolerance – except for atheists, who he argued should be banished from the state because their lack of fear of divine punishment meant that they couldn’t be trusted to obey the laws.
The idea that the Enlightenment was some great Age of Reason is now rejected by most historians. So why do intelligent people like Nick Cohen still invoke this trope today whenever they fear that irrational and dogmatic forces are threatening to undermine science and society? I suspect it has something to do with the allure of the Golden Age: things were all rosy once, but now the barbarians are dragging us back to that other mythical period in history, the “Dark Ages”. Sadly, history is never so simple.
Stand up for principles of tolerance, compassion, equality, reasoned decision-making, and free speech, by all means. But don’t try to conscript bad history to your cause. What people today call “Enlightenment values” are like universal human rights: we might like them and think they are worth defending (I do), but that doesn’t alter the fact that they are a modern invention.
Which is why it drives me up the bloody wall that folks like Cohen are still banging on about “Enlightenment values” – by which they generally mean some carefully selected values advanced by certain Enlightenment figures that we (some of us – me and Nick alike) would like to see upheld today, such as freedom to think for ourselves. The sad irony is that Kent seems to think this is a different category of statement than speaking of equally meaningless (because utterly polysemous) “Christian values”.
Cohen’s criticisms of the pope in his article are entirely justified. Trying to support them by appealing to some fictitious Enlightenment does him no favours at all. He calls “people who call themselves liberals” (that would be me, then) “thoughtless prigs” who probably don’t know what the Enlightenment was. Isn’t it odd, then, that folk who talk today about Enlightenment values are usually arguing in favour of a secular, classless, “rationalistic” democracy? Because, to state the bleedin’ obvious, there were no secular classless democracies in eighteenth century Europe.
And the heroes of the Enlightenment had no intention of introducing them. Take that other Enlightenment icon Voltaire. Like Kant, Voltaire had some attractive ideas about religious tolerance and separation of church and state. But he was representative of the philosophes in opposing any idea that reason should become a universal basis for thought. It was grand for the ruling classes, but far too dangerous to advocate for the lower orders, who needed to be kept in ignorance for the sake of the social order. Here’s what he said about that: “the rabble… are not worthy of being enlightened and are apt for every yoke”. Voltaire has been said to be a deist, which means that he believed in a God whose existence can be deduced by reason rather than revelation, and who made the world according to rational principles. But he insisted that ideas like this should be confined to the better classes. The message of the church should be kept simple for the lower orders, so that they didn’t get confused. Voltaire said that complex ideas such as deism are suited only “among the well-bred, among those who wish to think.”
The Enlightenment was not strongly secular in any case. Atheism was very rare, and condemned by almost all philosophers as a danger to social stability. Rousseau calls for religious tolerance – except for atheists, who should be banished from the state because their lack of fear of divine punishment meant that they couldn’t be trusted to obey the laws.
The idea that the Enlightenment was some great Age of Reason is now rejected by most historians. So why do intelligent people like Nick Cohen still invoke this trope today whenever they fear that irrational and dogmatic forces are threatening to undermine science and society? I suspect it has something to do with the allure of the Golden Age: things were all rosy once, but now the barbarians are dragging us back to that other mythical period in history, the “Dark Ages”. Sadly, history is never so simple.
Stand up for principles of tolerance, compassion, equality, reasoned decision-making, and free speech, by all means. But don’t try to conscript bad history to your cause. What people today call “Enlightenment values” are like universal human rights: we might like them and think they are worth defending (I do), but that doesn’t alter the fact that they are a modern invention.
Friday, December 18, 2015
Talking about talking about history
David Wootton has sent me some responses to the accusations made by some of the reviewers of his book The Invention of Science, including me in Nature and Steven Poole in the New Statesman, that he somewhat over-eggs the “science wars”/relativism arguments. Some other reviewers have suggested that these polemical sections of the book rehearse an academic turf war that doesn’t deserve so much space. In her review in the Guardian, Lorraine Daston commented that this material is “unlikely to be of interest to readers who are not historians of science over the age of 50.” Well, I plead guilty to the second at least, and so perhaps it isn’t surprising that those chapters most certainly were of interest to me. I might not agree with all of David’s arguments in the book, but I was very happy to see them. It is a discussion that still needs to happen, not least because “histories of science” like Steven Weinberg’s To Explain the World are still being put out into the public arena.
For that reason too, I’m delighted to post David’s responses here. I don’t exactly disagree with anything he says; I think the issues are at least partly a matter of interpretation. For example, in my review I commented that Steven Shapin and Simon Schaffer’s influential Leviathan and the Air-Pump (1985) doesn’t to my eye offer the kind of “hard relativist” perspective that David seems to find in it. In my original draft of the book review, I also said the same about David’s comments on Simon Schaffer’s article on prisms:
“I see no reason to believe, for example, that Schaffer really questions Newton’s compound theory of white light in his 1989 essay on prisms and the experimentum crucis, but just that he doubts the persuasiveness of Newton’s own experimental evidence.”
David seemed to say that Simon’s comments even implied he had doubts about the modern theory of optics and additive mixing; I can’t find grounds for reaching that conclusion. In my conversations with Simon, I have never had the slightest impression that he doesn’t regard science as a system of thought that offers a progressively more reliable description of the world. If he thinks it is no truer than witchcraft, he hides it extraordinarily well.
As further evidence of S&S’s relativism, David quotes from Leviathan and the Air-Pump, which, he says, maintains that the success of experimental science depended on its proponents’ “political success ... in insinuating themselves into the activities of other institutions and other interest groups. He who has the most, and the most powerful, allies wins.” When I first read this (in preparing my book Curiosity), it never once occurred to me that S&S meant it as some kind of statement to the effect that we only think Boyle’s law is correct because Boyle was more politically astute than his opponents. I took it to mean that Boyle was able to gain rapid acceptance of his ideas because he was politically well situated (central to the Royal Society, for example) and canny with his rhetoric. It seemed to me that the reception of scientific ideas when they first appear surely is, both then and now, conditioned by social factors. It surely is the case that some such ideas, though they might indeed now be revealed as superior to the alternatives, were more quickly taken up at the time not just (or even) because they were more convincing or better supported by evidence but because of the way their advocates were able to corner the market or rewrite the discourse in their favour. Lavoisier’s “new chemistry” is the obvious example. Indeed, David recognizes these social aspects of scientific debate in his book, which is one of its many strengths. I certainly don’t think Simon would argue that scientific ideas might then stay fixed for hundreds of years simply because their initial proponents gained the upper hand in the cut and thrust of debate.
David says that Steven Shapin does betray an affiliation to extreme relativism, however – and he cites as evidence Shapin’s comment in his (unsurprisingly damning) review of the Weinberg book:
“Science remains almost unique in that respect. It’s modernity’s reality-defining enterprise, a pattern of proper knowledge and of right thinking, in the same way that—though Mr. Weinberg will hate the allusion—Christian religion once defined what the world was like and what proper knowledge should be.”
This is a complicated claim, and I would like to know more about what Shapin meant by it. Perhaps I will ask him. I can see why David might interpret it as a statement to the effect that the scientific theory of the origin of the universe is no more “true” than the account given in Genesis. And I think he is right to point out that Shapin should be alert to the possibility of that interpretation. But I think one can also interpret the remark as saying that we should be as wary of scientism – the idea that the only knowledge that counts as proper knowledge is scientific – as we should be of the doctrinaire Christianity that once pervaded Western thought and set itself up as the jury before which all ideas had to be scrutinized. Christian theology was certainly regarded at times as a superior arbiter to pre-scientific rationalism in efforts to understand the universe (for example, in the 1277 Condemnation that pitched Aristotelian natural philosophy against the Church). But just as Christianity was finally compelled to stay within the proper limits of its authority (in most parts of the civilized Christian world, if not perhaps Kansas), so should we make sure that science does so: it is the best method we have for understanding the physical world, but not the yardstick for all “proper knowledge”. I hope this is what Shapin means, but I confess that I cannot be sure.
The real problem here – and it is one that David rightly complains about – is not so much excessive relativism in the academic study of the history of science, but what he calls a conspiracy of silence within that discipline. It seems to have become taboo to say that scientific knowledge goes through a reliability filter that makes it rather dependable, predictive and amenable to improvement – even if you believe that to be the case. As a historian of science, David must be regularly faced with disapproving frowns and tuts if he wishes to express value judgements about scientific ideas, because this seems to have become bad form and now to be rather rigidly policed in some quarters.
I have experienced this myself, when a publisher’s reviewer of my book Invisible evidently felt it his/her duty to scour it for the slightest taint of presentism – and, when he/she decided it had been detected, to reel out what was obviously a pre-prepared little spiel to that effect. For example, I was sternly told that
“Hooke and Leeuwenhoek did not "in fact" see "single-celled organisms called protozoa". They also did not drive modern cars, neither did they long for a new iphone.”
This is of course just silly (not to say rather incoherent) academic Gotcha-style point-scoring. What I wrote was “It was Leeuwenhoek’s discoveries of invisibly small ‘animals’ – he was in fact seeing large bacteria and single-celled organisms called protozoa – in 1676…” Outrageous, huh?
Then I got some nonsense about "Great Men" histories because I had the temerity to mention that Pasteur and Koch did some important work on germ theory. The reviewer’s terror of making what his/her colleagues would regard as a disciplinary faux pas seems to be preventing him/her from being able to actually tell any history.
The situation in that case became clear enough when the reviewer finally complained that it was hard to judge my argument because what he/she needed was “a clear statement of the author's intent and theoretical position” – followed by “rewriting the whole text in such a way that the author clearly articulates his chosen positions throughout.” To which I’m afraid I replied: “What is my “theoretical position”? It’s in the text, not in some badge that I choose to display at the outset. The persistent misreading of the text to force it into one camp or another [and the cognitive dissonance evident when it doesn’t quite fit] seems to highlight a pretty serious problem with the academic approach, for all that I benefit from it shamelessly.”
So perhaps David will understand (I suspect he does already) that I have considerable sympathy with his predicament. I just wonder if his frustration (like mine) leaked out a little too much. I don’t know if he is right to say that “The [Oxford] faculty, as a group of professional historians, feels it must ward off anyone interested in studying science as a project that succeeds and makes progress, and at the same time encourage anyone who wants to study science as a purely social enterprise” – and if he is, that doesn’t seem terribly healthy. But the job advert he quotes doesn’t seem to me to deny the possibility of progress, but simply to point out that the primary job of the historian is not to sift the past for nuggets of the present.
Which of course brings me to Weinberg. He apparently wants to reshape the history of science, although his response to critics in the NYRB makes me more sympathetic to the sincerity, if not to the value, of his programme. I wonder if we might get a little clearer about the issues here by considering how one might wish to, say, write about medieval and early modern witchcraft. I wonder if what David sees as an unconscionable silence from historians on the veracity and validity of witchcraft is more a matter of historians thinking that, in the 21st century, one should not feel obliged to begin a paper or a book with a statement along the lines of
“I must point out that witchcraft is not a very effective way to understand the world, and if you wish to make a flying device, you will be far better advised to use the modern theory of fluid mechanics.”
On the other hand, if said author were to be quizzed along the lines of “But does witchcraft make broomsticks fly?”, it would be intellectually feeble, indeed derelict, to respond “That’s not the issue I am addressing, and I do not propose to comment on it.” David implies that this happens; I suspect he is right, though I do not know how often. There doesn’t seem to be anything sacrificed by saying instead something like: “Of course, witchcraft will not summon demons and make people fly. Now let me get on with talking about it.”
The Weinberg position, on the other hand, seems to be along the lines of “By all means study witchcraft as history, if you like, but as far as science is concerned we should make it absolutely clear that it was just superstitious nonsense that got in the way of true progress.” To which, of course, the historian might want to say “But Robert Boyle believed that demons exist and could be summoned!” The Weinbergian (I don’t want to put words into his own mouth) might respond, “Well Boyle wasn’t perfect and he believed some pretty daft things – like alchemical transmutation.”
And at that point I say “You really don’t give a toss what Robert Boyle thought, do you? You just want to mark his homework.” But I do give a toss, and not just because Boyle was an interesting thinker, or because I don’t have any illusion that we are smarter today than people were in the seventeenth century. I want to take seriously what Boyle thought and why, because it is a part of how ideas have developed, and because I don’t believe the history of science was a process of gradually shaking off delusions and misapprehensions and refining our rationality. It is much messier than that, now and always. If your starting position in assessing Boyle’s belief in demons and alchemy is that he was sometimes a bit gullible and deluded, then you are simply not going to get much of a grasp of what or how he thought. (Boyle was somewhat gullible when it came to alchemical charlatans, but his belief in transmutation wasn’t a part of that credulity.)
My own position is more along the lines of “It’s interesting that people once believed in witchcraft. I wonder what sustained that belief, and how it interacted with emerging ideas about science?” I am not being disingenuous if I say that I am inevitably a naïve reader of Shapin, Schaffer, Daston, Fara, and indeed David Wootton. But I find this same spirit in all of their books, and that’s what I appreciate in them.
Comments from David Wootton
A number of the reviews of The Invention of Science have expressed puzzlement that my book opens and closes with extensive historiographical, methodological, and philosophical discussions. Why not just leave all that stuff out? The charge is that I am refighting the Science Wars of the 1990s when everyone else has moved on. I understand why people would think this, but, with respect, I think they are wrong. Let’s break down the issues as follows:
1) Are relativists still confident that they speak for the history of science profession? Yes they are. See for example Steven Shapin’s breathtaking review of Steven Weinberg in the Wall Street Journal, where Shapin actually presents belief in science as being strictly comparable to belief in Christianity (http://goo.gl/qULelt) [1]. Or see Shapin’s and Schaffer’s introduction to the anniversary edition of Leviathan and the Air Pump (2011). Or see Peter Dear’s “Historiography of Not-So-Recent Science”, History of Science 50 (2012), 197-211 (“we are all postmodernists now”).
2) Are students still taught from relativist textbooks? Yes they are. The key textbooks are Shapin’s Scientific Revolution (1996; now translated into seventeen languages); Peter Dear’s Revolutionizing the Sciences (2001, revised in 2009); John Henry’s The Scientific Revolution (1997, with later revisions). This may change – there is Principe’s Very Short Introduction (2011), for example – but it hasn’t changed yet.
3) Has the profession moved on? Rather than moving on, it has decided to pretend the Science Wars never happened, and as a consequence it is stuck in a rut, incapable of generating a new account of what was happening in science in the early modern period. To quote Lorraine Daston’s 2009 essay on the present state of the discipline (http://goo.gl/rMEAiy), what historians have produced is “a swarm of microhistories ... archivally based and narrated in exquisite detail.” These microhistories, as she herself acknowledges, do not enable one to put together a bigger picture. The resulting confusion is embodied, for example, in David Knight’s Voyaging in Strange Seas: the Great Revolution in Science (Yale, 2014).
4) Are the relativists more moderate than I maintain? Philip Ball thinks I and the authors of Leviathan and the Air Pump have more in common than I imagine. I doubt Shapin and Schaffer will think so, and I suggest Philip rereads p. 342 of that book, which maintains that the success of experimental science depended on its proponents’ “political success ... in insinuating themselves into the activities of other institutions and other interest groups. He who has the most, and the most powerful, allies wins.” In this sort of story the evidence counts for nothing – indeed, the strong programme insists that the evidence must count for nothing (and note the introduction of the strong programme’s key principle of symmetry on p. 5)[2].
5) Can you separate methodology and historiography from substantive history? It’s very difficult to do so, because your methodology and the previous history of your discipline shape the questions you ask and the answers you give. Thus relativist historiography has privileged controversy studies (http://goo.gl/uVfxFF), and simply ignored cases where new scientific claims have been accepted without dispute. Indeed if the Duhem-Quine thesis were right there are always grounds for dispute when new evidence is presented. I don’t see how one can discuss the collapse of Ptolemaic astronomy in the years immediately after 1610 without acknowledging that this is an event which has been invisible to previous historians because they have been unwilling to acknowledge that an empirical fact (the phases of Venus) could be decisive in determining the fate of a well-established theory — in a case like this it is not the evidence that is new, but the questions that are being asked of it, and these are inseparable from issues of methodology and historiography [3].
6) The Economist thinks I have a disagreement with a few “callow” relativists. Odd that these insignificant people hold chairs in Harvard, Cambridge, Oxford, Edinburgh, Cornell. But there is a much bigger point here: a fundamental claim made by my opponents is that historians are committed, in principle, to treating bad and good knowledge identically. The historical profession tends to agree with them (see for example Gordon Wood’s NYRB essay on medicine in the American Revolution, http://goo.gl/ZoFuMu: “The problem is most historians are relativists”).
The consequences are apparent in the Cambridge History of Science, vol. 3, ed. Park and Daston (2006), which contains a twenty-page chapter on “Coffee Houses and Print Shops” (as part of a two-hundred-page section on “Personae and Sites of Natural Knowledge”) and others equally long on “Astrology” and “Magic” (Astrology gets twenty pages while Astronomy gets thirty), but, despite being 850 pages long, contains no extended discussion of Digges, Stevin, Gilbert, or Pascal, nothing on magnets, and only two pages on vacuum experiments [4].
It is also apparent in Oxford University’s recent (April 2015) advertisement for its Chair in the History of Science which stated: “The professor will share the faculty’s vision of the scope of the history of science, which is less focused on the history of scientific truth and more interested in reconstructing the practices of science, and the claims to science-based authority within given societies at given times” [5]. The Oxford Faculty of History does not declare its vision of the scope of the discipline when advertising its chair in, say, military history. But the history of science is different. The faculty, as a group of professional historians, feels it must ward off anyone interested in studying science as a project that succeeds and makes progress, and at the same time encourage anyone who wants to study science as a purely social enterprise. What interests them is not scientific knowledge but the authority claimed by “scientists” — be they alchemists or phrenologists. What’s at stake here is not just the history of science, but also the claim, made over and over again by historians, that the past must be studied solely in its own terms — an approach which may lead to understanding, but cannot lead to explanation. So historians of witchcraft report encounters with devils as if the devils were real — and never ask what’s really going on.
7) What is science? I was dismayed to discover that students in my own university were being taught (by someone with a new PhD in history of science from a prestigious institution) that there was no such thing as science in the seventeenth century. But this, after all, is what Henry’s textbook says, and Dear in his 2012 review essay confidently asserts: “specialist historians seem increasingly agreed that science as we now know it is an endeavour born of the nineteenth century.” On her university website one distinguished historian of science is described thus: “Paula Findlen teaches history of science before it was ‘science’ (which is, after all, a nineteenth-century word).” (http://web.stanford.edu/dept/HPS/findlen.html, accessed 7 Dec 2015). How have we got to the point where it appears to make sense to claim that “science” is a nineteenth-century word? Because Newton, we are told, was not a scientist (which indeed is a nineteenth-century word) but a philosopher. Even if one charitably rephrases Findlen’s statement (or the statement made on her behalf) to read “‘science’ as we currently use the term is a nineteenth-century concept” it would be wrong unless, by a circular argument, one insists that earlier usages of the word can’t possibly have meant by science what we mean by science. The whole point of my book is to show that by the end of the seventeenth century “science” (as Dryden called it) really was science as we understand the term. To unpick the misconception that there was no science in the seventeenth century you have to look at the history of words like “science” and “scientist” (noting, for example, the founding of the French Académie des Sciences in 1666), but also at an historiographical tradition which has insisted that what we think of as science is just a temporary and arbitrary social practice, like metaphysical poetry or Methodism, not an enduring and self-sustaining body of reliable knowledge.
8) What would have happened if I had left out the methodological and historiographical debates? I tried the alternative approach, of writing in layperson’s terms for commonsensical people, first. Just look at how my book Bad Medicine was treated by Steven Shapin, in the pages of the London Review of Books: http://goo.gl/aA67fr! The book was a success in that lots of people read it and liked it, many of them doctors (see www.badmedicine.co.uk); but historians of medicine brushed it off. So this time I have felt obliged to address the core arguments which supposedly justify ignoring progress — the arguments that have bamboozled the profession for the last fifty years — in the hope of being taken a little more seriously, not by sensible people (who can’t understand why I don’t just cut to the chase), but by the professionals who think that the history of science is like cardiac surgery — not something “the laity” (Shapin’s peculiar term) can possibly participate in, understand, or criticise, but something for the professionals alone. In trying to address this new clerisy I have evidently tried the patience of some of my more sensible, level-headed readers. That’s unfortunate and a matter of considerable regret: but if the way in which history of science is taught in the universities is to change, someone must take on the experts on their own ground, and someone must question the notion that the history of science ought not to concern itself with (amongst much else) the history of scientific truth. By all means skip the beginning and concluding chapters if you have no interest in how the history of science (and history more generally) is taught; but please read them carefully if you do.
Notes
[1] There is a paywall: to surmount it google “Why Scientists Shouldn’t Write History” and click on the first link. For a discussion see http://goo.gl/VYNVhX. I am grateful to Philip Ball for acknowledging that my book is very different in character from Weinberg’s, which saves me from having to stress the point.
[2] Patricia Fara thinks that social constructivism is “the idea that what people believe to be true is affected by their cultural context.” If that were the case then we would all be social constructivists and I really would be arguing with a straw person. But of course it isn’t, as I show over and over again in my book. It is, rather, the claim (made by her Cambridge colleague Andrew Cunningham) that science is “a human activity, wholly a human activity, and nothing but a human activity” — in other words that it is socially constituted, not merely socially influenced (the model for such an argument being, of course, Durkheim on religion). The consequence of this, constructivists rightly hold, is epistemological egalitarianism — any particular belief is to be regarded as being just as good as any other.
[3] Take for example William Donahue’s discussion of Galileo and the phases of Venus in Park and Daston, 585: “He argued... that this phenomenon was inconsistent with the Ptolemaic arrangement of the planets...” Galileo and his contemporaries understood perfectly well that Galileo had proved the Ptolemaic arrangements of the planets could not be right — the whole impact of Galileo’s discovery is lost by reducing it to a mere argument. Indeed Donahue does not acknowledge that it had any impact while I show the impact is measurable by counting editions of Sacrobosco.
[4] A colleague of mine unkindly calls this the Polo history: Polo Mints, to quote Wikipedia, “are a brand of mints whose defining feature is the hole in the middle.”
[5] The text is no longer on the Oxford University website, but can still be found, for example, at http://goo.gl/KOY05f (accessed 7 Dec 2015).
For that reason too, I’m delighted to post David’s responses here. I don’t exactly disagree with anything he says; I think the issues are at least partly a matter of interpretation. For example, in my review I commented that Steven Shapin and Simon Schaffer’s influential Leviathan and the Air-Pump (1985) doesn’t to my eye offer the kind of “hard relativist” perspective that David seems to find in it. In my original draft of the book review, I also said the same about David’s comments on Simon Schaffer’s article on prisms:
“I see no reason to believe, for example, that Schaffer really questions Newton’s compound theory of white light in his 1989 essay on prisms and the experimentum crucis, but just that he doubts the persuasiveness of Newton’s own experimental evidence.”
David seemed to say that Simon’s comments even implied he had doubts about the modern theory of optics and additive mixing; I can’t find grounds for reaching that conclusion. In my conversations with Simon, I have never had the slightest impression that he doesn’t regard science as a system of thought that offers a progressively more reliable description of the world. If he thinks it is no truer than witchcraft, he hides it extraordinarily well.
As further evidence of S&S’s relativism, David quotes from Leviathan and the Air-Pump, which, he says, maintains that the success of experimental science depended on its proponents’ “political success ... in insinuating themselves into the activities of other institutions and other interest groups. He who has the most, and the most powerful, allies wins.” When I first read this (in preparing my book Curiosity), it never once occurred to me that S&S meant it as some kind of statement to the effect that we only think Boyle’s law is correct because Boyle was more politically astute than his opponents. I took it to mean that Boyle was able to gain rapid acceptance of his ideas because he was politically well situated (central to the Royal Society, for example) and canny with his rhetoric. It seemed to me that the reception of scientific ideas when they first appear surely is, both then and now, conditioned by social factors. It surely is the case that some such ideas, though they might indeed now be revealed as superior to the alternatives, were more quickly taken up at the time not just (or even) because they were more convincing or better supported by evidence but because of the way their advocates were able to corner the market or rewrite the discourse in their favour. Lavoisier’s “new chemistry” is the obvious example. Indeed, David recognizes that social aspects of scientific debate in his book, which is one of its many strengths. I certainly don’t think Simon would argue that scientific ideas might then stay fixed for hundreds of years simply because their initial proponents gained the upper hand in the cut and thrust of debate.
David says that Steven Shapin does betray an affiliation to extreme relativism, however – and he cites as evidence Shapin’s comment in his (unsurprisingly damning) review of the Weinberg book:
“Science remains almost unique in that respect. It’s modernity’s reality-defining enterprise, a pattern of proper knowledge and of right thinking, in the same way that—though Mr. Weinberg will hate the allusion—Christian religion once defined what the world was like and what proper knowledge should be.”
This is a complicated claim, and I would like to know more about what Shapin meant by it. Perhaps I will ask him. I can see why David might interpret it as a statement to the effect that the scientific theory of the origin of the universe is no more “true” than the account given in Genesis. And I think he is right to point out that Shapin should be alert to the possibility of that interpretation. But I think one can also interpret the remark as saying that we should be as wary of scientism – the idea that the only knowledge that counts as proper knowledge is scientific – as we should be of the doctrinaire Christianity that once pervaded Western thought, which was once the jury before which all ideas were to be scrutinized. Christian theology was certainly regarded at times as a superior arbiter to pre-scientific rationalism in efforts to understand the universe – for example in the 1277 Condemnation that pitched Aristotelian natural history against the Church). But just as Christianity was finally compelled to stay within the proper limits of its authority (in most parts of the civilized Christian world, if not perhaps Kansas), so should we make sure that science does so: it is the best method we have for understanding the physical world, but not the yardstick for all “proper knowledge”. I hope this is what Shapin means, but I confess that I cannot be sure.
The real problem here – and it is one that David rightly complains about – is not so much excessive relativism in the academic study of the history of science, but what he calls a conspiracy of silence within that discipline. It seems to have become taboo to say that scientific knowledge goes through a reliability filter that makes it rather dependable, predictive and amenable to improvement – even if you believe that to be the case. As a historian of science, David must be regularly faced with disapproving frowns and tuts if he wishes to express value judgements about scientific ideas, because this seems to have become bad form and now to be rather rigidly policed in some quarters.
I have experienced this myself, when a publisher’s reviewer of my book Invisible evidently felt it his/her duty to scour it for the slightest taint of presentism – and, when he/she decided it had been detected, to reel out what was obviously a pre-prepared little spiel to that effect. For example, I was sternly told that
“Hooke and Leeuwenhoek did not "in fact" see "single-celled organisms called protozoa". They also did not drive modern cars, neither did they long for a new iphone.”
This is of course just silly (not to say rather incoherent) academic Gotcha-style point-scoring. What I wrote was “It was Leeuwenhoek’s discoveries of invisibly small ‘animals’ – he was in fact seeing large bacteria and single-celled organisms called protozoa – in 1676…” Outrageous, huh?
Then I got some nonsense about "Great Men" histories because I had the temerity to mention that Pasteur and Koch did some important work on germ theory. The reviewer’s terror of making what his/her colleagues would regard as a disciplinary faux pas seems to be preventing him/her from being able to actually tell any history.
The situation in that case became clear enough when the reviewer finally complained that it was hard to judge my argument because what he/she needed was “a clear statement of the author's intent and theoretical position” – followed by “rewriting the whole text in such a way that the author clearly articulates his chosen positions throughout.” To which I’m afraid I replied: “What is my “theoretical position”? It’s in the text, not in some badge that I choose to display at the outset. The persistent misreading of the text to force it into one camp or another [and the cognitive dissonance evident when it doesn’t quite fit] seems to highlight a pretty serious problem with the academic approach, for all that I benefit from it shamelessly.”
So perhaps David will understand (I suspect he does already) that I have considerable sympathy with his predicament. I just wonder if his frustration (like mine) leaked out a little too much. I don’t know if he is right to say that “The [Oxford] faculty, as a group of professional historians, feels it must ward off anyone interested in studying science as a project that succeeds and makes progress, and at the same time encourage anyone who wants to study science as a purely social enterprise” – and if he is, that doesn’t seem terribly healthy. But the job advert he quotes doesn’t seem to me to deny the possibility of progress, but simply to point out that the primary job of the historian is not to sift the past for nuggets of the present.
Which of course brings me to Weinberg. He apparently wants to reshape the history of science, although his response to critics in the NYRB makes me more sympathetic to the sincerity, if not to the value, of his programme. I wonder if we might get a little clearer about the issues here by considering how one might wish to, say, write about medieval and early modern witchcraft. I wonder if what David sees as an unconscionable silence from historians on the veracity and validity of witchcraft is more a matter of historians thinking that, in the 21st century, one should not feel obliged to begin a paper or a book with a statement along the lines of
“I must point out that witchcraft is not a very effective way to understand the world, and if you wish to make a flying device, you will be far better advised to use the modern theory of fluid mechanics.”
On the other hand, if said author were to be quizzed along the lines of “But does witchcraft make broomsticks fly?”, it would be intellectually feeble, indeed derelict, to respond “That’s not the issue I am addressing, and I do not propose to comment on it.” David implies that this happens; I suspect he is right, though I do not know how often. There doesn’t seem to be anything sacrificed by saying instead something like: “Of course, witchcraft will not summon demons and make people fly. Now let me get on with talking about it.”
The Weinberg position, on the other hand, seems to be along the lines of “By all means study witchcraft as history, if you like, but as far as science is concerned we should make it absolutely clear that it was just superstitious nonsense that got in the way of true progress.” To which, of course, the historian might want to say “But Robert Boyle believed that demons exist and could be summoned!” The Weinbergian (I don’t want to put words into his own mouth) might respond, “Well Boyle wasn’t perfect and he believed some pretty daft things – like alchemical transmutation.”
And at that point I say “You really don’t give a toss what Robert Boyle thought, do you? You just want to mark his homework.” But I do give a toss, and not just because Boyle was an interesting thinker, or because I don’t have any illusion that we are smarter today than people were in the seventeenth century. I want to take seriously what Boyle thought and why, because it is a part of how ideas have developed, and because I don’t believe the history of science was a process of gradually shaking off delusions and misapprehensions and refining our rationality. It is much messier than that, now and always. If your starting position in assessing Boyle’s belief in demons and alchemy is that he was sometimes a bit gullible and deluded, then you are simply not going to get much of a grasp of what or how he thought. (Boyle was somewhat gullible when it came to alchemical charlatans, but his belief in transmutation wasn’t a part of that credulity.)
My own position is more along the lines of “It’s interesting that people once believed in witchcraft. I wonder what sustained that belief, and how it interacted with emerging ideas about science?” I am not being disingenuous if I say that I am inevitably a naïve reader of Shapin, Schaffer, Daston, Fara, and indeed David Wootton. But I find this same spirit in all of their books, and that’s what I appreciate in them.
Comments from David Wootton
A number of the reviews of The Invention of Science have expressed puzzlement that my book opens and closes with extensive historiographical, methodological, and philosophical discussions. Why not just leave all that stuff out? The charge is that I am refighting the Science Wars of the 1990s when everyone else has moved on. I under- stand why people would think this, but, with respect, I think they are wrong. Let’s break down the issues as follows:
1) Are relativists still confident that they speak for the history of science profession? Yes they are. See for example Steven Shapin’s breathtaking review of Steven Weinberg in the Wall Street Journal, where Shapin actually presents belief in science as being strictly comparable to belief in Christianity (http://goo.gl/qULelt) [1]. Or see Shapin’s and Schaffer’s introduction to the anniversary edition of Leviathan and the Air Pump (2011). Or see Peter Dear’s “Historiography of Not-So-Recent Science”, History of Science 50 (2012), 197-211 (“we are all post- modernists now”).
2) Are students still taught from relativist textbooks? Yes they are. The key text- books are Shapin’s Scientific Revolution (1996; now translated into seventeen languages); Peter Dear’s Revolutionizing the Sciences (2001, revised in 2009); John Henry’s The Scientific Revolution (1997, with later revisions). This may change – there is Principe’s Very Short Introduction (2011), for example – but it hasn’t changed yet.
3) Has the profession moved on? Rather than moving on, it has decided to pretend the Science Wars never happened, and as a consequence it is stuck in a rut, incapable of generating a new account of what was happening in science in the early modern period. To quote Lorraine Daston’s 2009 essay on the present state of the discipline (http://goo.gl/rMEAiy), what historians have produced is “a swarm of microhistories ... archivally based and narrated in exquisite detail.” These microhistories, as she herself acknowledges, do not enable one to put together a bigger picture. The resulting confusion is embodied, for example, in David Knight’s Voyaging in Strange Seas: the Great Revolution in Science (Yale, 2014).
4) Are the relativists more moderate than I maintain? Philip Ball thinks I and the authors of Leviathan and the Air Pump have more in common than I imagine. I doubt Shapin and Schaffer will think so, and I suggest Philip rereads p. 342 of that book, which maintains that the success of experimental science depended on its proponents’ “political success ... in insinuating themselves into the activities of other institutions and other interest groups. He who has the most, and the most powerful, allies wins.” In this sort of story the evidence counts for nothing – indeed, the strong programme insists that the evidence must count for nothing (and note the introduction of the strong programme’s key principle of symmetry on p. 5)[2].
5) Can you separate methodology and historiography from substantive history? It’s very difficult to do so, because your methodology and the previous history of your discipline shape the questions you ask and the answers you give. Thus relativist historiography has privileged controversy studies (http://goo.gl/uVfxFF), and simply ignored cases where new scientific claims have been accepted without dispute. Indeed if the Duhem-Quine thesis were right there would always be grounds for dispute when new evidence is presented. I don’t see how one can discuss the collapse of Ptolemaic astronomy in the years immediately after 1610 without acknowledging that this is an event which has been invisible to previous historians because they have been unwilling to acknowledge that an empirical fact (the phases of Venus) could be decisive in determining the fate of a well-established theory — in a case like this it is not the evidence that is new, but the questions that are being asked of it, and these are inseparable from issues of methodology and historiography [3].
6) The Economist thinks I have a disagreement with a few “callow” relativists. Odd that these insignificant people hold chairs in Harvard, Cambridge, Oxford, Edinburgh, Cornell. But there is a much bigger point here: a fundamental claim made by my opponents is that historians are committed, in principle, to treating bad and good knowledge identically. The historical profession tends to agree with them (see for example Gordon Wood’s NYRB essay on medicine in the American Revolution, http://goo.gl/ZoFuMu: “The problem is most historians are relativists”).
The consequences are apparent in the Cambridge History of Science, vol. 3, ed. Park and Daston (2006), which contains a twenty-page chapter on “Coffee Houses and Print Shops” (as part of a two-hundred-page section on “Personae and Sites of Natural Knowledge”) and others equally long on “Astrology” and “Magic” (Astrology gets twenty pages while Astronomy gets thirty), but, despite being 850 pages long, contains no extended discussion of Digges, Stevin, Gilbert, or Pascal, nothing on magnets, and only two pages on vacuum experiments [4].
It is also apparent in Oxford University’s recent (April 2015) advertisement for its Chair in the History of Science which stated: “The professor will share the faculty’s vision of the scope of the history of science, which is less focused on the history of scientific truth and more interested in reconstructing the practices of science, and the claims to science-based authority within given societies at given times” [5]. The Oxford Faculty of History does not declare its vision of the scope of the discipline when advertising its chair in, say, military history. But the history of science is different. The faculty, as a group of professional historians, feels it must ward off anyone interested in studying science as a project that succeeds and makes progress, and at the same time encourage anyone who wants to study science as a purely social enterprise. What interests them is not scientific knowledge but the authority claimed by “scientists” — be they alchemists or phrenologists. What’s at stake here is not just the history of science, but also the claim, made over and over again by historians, that the past must be studied solely in its own terms — an approach which may lead to understanding, but cannot lead to explanation. So historians of witchcraft report encounters with devils as if the devils were real — and never ask what’s really going on.
7) What is science? I was dismayed to discover that students in my own university were being taught (by someone with a new PhD in history of science from a prestigious institution) that there was no such thing as science in the seventeenth century. But this, after all, is what Henry’s textbook says, and Dear in his 2012 review essay confidently asserts: “specialist historians seem increasingly agreed that science as we now know it is an endeavour born of the nineteenth century.” On her university website one distinguished historian of science is described thus: “Paula Findlen teaches history of science before it was ‘science’ (which is, after all, a nineteenth-century word).” (http://web.stanford.edu/dept/HPS/findlen.html, accessed 7 Dec 2015). How have we got to the point where it appears to make sense to claim that “science” is a nineteenth-century word? Because Newton, we are told, was not a scientist (which indeed is a nineteenth-century word) but a philosopher. Even if one charitably rephrases Findlen’s statement (or the statement made on her behalf) to read “‘science’ as we currently use the term is a nineteenth-century concept” it would be wrong unless, by a circular argument, one insists that earlier usages of the word can’t possibly have meant by science what we mean by science. The whole point of my book is to show that by the end of the seventeenth century “science” (as Dryden called it) really was science as we understand the term. To unpick the misconception that there was no science in the seventeenth century you have to look at the history of words like “science” and “scientist” (noting, for example, the founding of the French Académie des Sciences in 1666), but also at an historiographical tradition which has insisted that what we think of as science is just a temporary and arbitrary social practice, like metaphysical poetry or Methodism, not an enduring and self-sustaining body of reliable knowledge.
8) What would have happened if I had left out the methodological and historiographical debates? I tried the alternative approach, of writing in layperson’s terms for commonsensical people, first. Just look at how my book Bad Medicine was treated by Steven Shapin, in the pages of the London Review of Books: http://goo.gl/aA67fr! The book was a success in that lots of people read it and liked it, many of them doctors (see www.badmedicine.co.uk); but historians of medicine brushed it off. So this time I have felt obliged to address the core arguments which supposedly justify ignoring progress — the arguments that have bamboozled the profession for the last fifty years — in the hope of being taken a little more seriously, not by sensible people (who can’t understand why I don’t just cut to the chase), but by the professionals who think that the history of science is like cardiac surgery — not something “the laity” (Shapin’s peculiar term) can possibly participate in, understand, or criticise, but something for the professionals alone. In trying to address this new clerisy I have evidently tried the patience of some of my more sensible, level-headed readers. That’s unfortunate and a matter of considerable regret: but if the way in which history of science is taught in the universities is to change, someone must take on the experts on their own ground, and someone must question the notion that the history of science ought not to concern itself with (amongst much else) the history of scientific truth. By all means skip the beginning and concluding chapters if you have no interest in how the history of science (and history more generally) is taught; but please read them carefully if you do.
Notes
[1] There is a paywall: to surmount it google “Why Scientists Shouldn’t Write History” and click on the first link. For a discussion see http://goo.gl/VYNVhX. I am grateful to Philip Ball for acknowledging that my book is very different in character from Weinberg’s, which saves me from having to stress the point.
[2] Patricia Fara thinks that social constructivism is “the idea that what people believe to be true is affected by their cultural context.” If that were the case then we would all be social constructivists and I really would be arguing with a straw person. But of course it isn’t, as I show over and over again in my book. It is, rather, the claim (made by her Cambridge colleague Andrew Cunningham) that science is “a human activity, wholly a human activity, and nothing but a human activity” — in other words that it is socially constituted, not merely socially influenced (the model for such an argument being, of course, Durkheim on religion). The consequence of this, constructivists rightly hold, is epistemological egalitarianism — any particular belief is to be regarded as being just as good as any other.
[3] Take for example William Donahue’s discussion of Galileo and the phases of Venus in Park and Daston, 585: “He argued... that this phenomenon was inconsistent with the Ptolemaic arrangement of the planets...” Galileo and his contemporaries understood perfectly well that Galileo had proved the Ptolemaic arrangement of the planets could not be right — the whole impact of Galileo’s discovery is lost by reducing it to a mere argument. Indeed Donahue does not acknowledge that it had any impact, while I show that the impact is measurable by counting editions of Sacrobosco.
[4] A colleague of mine unkindly calls this the Polo history: Polo Mints, to quote Wikipedia, “are a brand of mints whose defining feature is the hole in the middle.”
[5] The text is no longer on the Oxford University website, but can still be found, for example, at http://goo.gl/KOY05f (accessed 7 Dec 2015).
Thursday, December 03, 2015
Can science be made to work better?
Here is a longer version of the leader that I wrote for Nature this week.
_______________________________________________________________________
Suppose you’re seeking to develop a technique for transferring proteins from a gel to a plastic substrate for easier analysis. Useful, maybe – but will you gain much kudos for it? Will it enhance the reputation of your department? One of the sobering findings of last year’s survey of the 100 most cited papers on the Web of Science (Nature 514, 550; 2014) was how many of them reported such apparently mundane methodological research (this one was number six).
Not all prosaic work reaches such bibliometric heights, but that doesn’t deny its value. Overcoming the hurdles of nanoparticle drug delivery, for example, requires the painstaking characterization of pathways and rates of breakdown and loss in the body: work that is not just unglamorous but probably unpublishable. One can cite comparable demands of detail for getting just about any bright idea to work in practice – but it’s the initial idea, not the hard grind, that garners the praise and citations.
An aversion to routine yet essential legwork seems at face value to be quite the opposite of the conclusions of a new study on how scientists pick their research topics. This analysis of discovery and innovation in biochemistry (A. Rzhetsky et al., Proc. Natl Acad. Sci. USA 112, 14569; 2015) finds that, in this field at least, choices of research problems are becoming more conservative and risk-averse. The results suggest that this trend over the past 30 years is quite the reverse of what is needed to make scientific discovery efficient.
But these problems – avoidance of both risk and drudge – are just opposite sides of the same coin. They reflect the fact that scientific norms, institutions and reward structures increasingly force researchers to aim at a “sweet spot” that will maximize their career prospects: work that is novel enough to be publishable but orthodox enough not to alarm or offend referees. That situation is surely driven in large degree by the importance attached to citation indices, as well as by the insistence of grant agencies that the short-term impact of the work can be defined in advance.
One might quibble with the necessarily crude measures of research strategy and knowledge generation employed in the PNAS study. But its general conclusion – that current norms discourage risk and therefore slow down scientific advance, and that the problem is worsening – rings true. It’s equally concerning that the incentives for boring but essential collection of fine-grained data to solve a specific problem are vanishing in a publish-or-perish culture.
A fashionably despairing cry of “Science is broken!” is not the way forward. The wider virtue of Rzhetsky et al.’s study is that it floats the notion of tuning practices and institutions to accelerate the process of scientific discovery. The researchers conclude, for example, that publication of experimental failures would assist this goal by avoiding wasteful repetition. Journals chasing impact factors might not welcome that, but they are no longer the sole repositories of scientific findings. Rzhetsky et al. also suggest some shifts in institutional structures that might help promote riskier but potentially more groundbreaking research – for example, spreading both risk and credit among teams or organizations, as used to be common at Bell Labs.
The danger is that efforts to streamline discovery simply become codified into another set of guidelines and procedures, creating yet more hoops that grant applicants have to jump through. If there’s one thing science needs less of, it is top-down management. A first step would be to recognize the message that research on complex systems has emphasized over the past decade or so: efficiencies are far more likely to come from the bottom up. The aim is to design systems with basic rules of engagement for participating agents that best enable an optimal state to emerge. Such principles typically confer adaptability, diversity, and robustness. There could be a wider mix of grant sources and sizes, say, less rigid disciplinary boundaries, and an acceptance that citation records are not the only measure of worth.
But perhaps more than anything, the current narrowing of objectives, opportunities and strategies in science reflects an erosion of trust. Obsessive focus on “impact” and regular scrutiny of young (and not so young) researchers’ bibliometric data betray a lack of trust that would have sunk many discoveries and discoverers of the past. Bibliometrics might sometimes be hard to avoid as a first-pass filter for appointments (Nature 527, 279; 2015), but a steady stream of publications is not the only or even the best measure of potential.
Attempts to tackle these widely acknowledged problems are typically little more than a timid rearranging of deckchairs. Partly that’s because they are seen as someone else’s problem: the culprits are never the complainants, but the referees, grant agencies and tenure committees who oppress them. Yet oddly enough, these obstructive folk are, almost without exception, scientists too (or at least, once were).
It’s everyone’s problem. Given the global challenges that science now faces, inefficiencies can exact a huge price. It is time to get serious about oiling the gears.
Friday, October 16, 2015
The ethics of freelance reporting
There’s a very interesting post (if you’re a science writer) on journalistic ethics from Erik Vance here. I confess that I’ve been blissfully ignorant of this PR sideline that many science writers apparently have. It makes for a fairly clear division – either you’re writing PR or you’re not – but it doesn’t speak to my situation, and I can’t be alone in that. Erik worries about stories that come out of “institutionally sponsored trips”. I’m not entirely clear what he means by that, but I’m often in a situation like this:
A lab or department has asked if I might come and give a talk or take part in a seminar or some such. They’ll pay my expenses, including accommodation if necessary. And if I think it’ll be interesting, I’ll try to do it.
Is this then a junket? You see, what often happens is that the institute in question might line up a little programme of visits to researchers there, because I might find their work interesting or perhaps just because they would like to talk to me. And indeed I might well find their work interesting and want to write about it, or perhaps about the broader issues of the field they bring to my attention.
Now the question is: am I compromised by having the trip paid for me? Even more so on those rare occasions that I’m paid an honorarium? It’s for such reasons that Nature would always insist that the journal, not the visited institution, pays the way for its writers. This seems fair enough for a journal, but shouldn’t the same apply to a freelancer then?
I could say that life as a freelancer is already hard enough, given for example the more or less permanent freeze in pay rates, without our having to pay ourselves for any travelling that might produce a story (not least because you don’t always know that in advance). When a journal writer goes to give a talk or makes a lab visit, they are being paid by their employer to do it. As a freelancer, you are sacrificing working time to do that, and so are essentially already losing money by making the trip even if your travel and accommodation are covered.
But that doesn’t really answer the question, does it? It doesn’t mean that the piece you write is uncompromised just because you couldn’t afford to have gone if your expenses weren’t paid.
I don’t know what the answer is here. I do know that as a freelancer you’ll only get to write a piece if you pitch it to an editor who likes it, i.e. if it is a genuinely good story in the first place. In fact, you’ll probably only want to write it anyway if you sense it’s a good story yourself, and you do yourself no favours by pitching weak stories. But will your coverage be influenced by having been put up at a nice (if you’re lucky!) hotel by the institution? Erik is right to warn about unconscious biases, but I can’t easily see why the story would come out any different than if you’d come across the same work in a journal paper – you’d still be getting outside comment on it from objective specialists and so on. Still, I might be missing some important considerations here, and would be glad to have any pointed out to me.
It seems to me that a big part of this comes down to the attitude of the writer. If you start off from the position that you’re a cheerleader for science, you’re likely to be uncritical however you discover the story. If you consider yourself a critic in the proper sense, like a music or theatre critic, you’ll tend to look at the work accordingly. The same, it seems to me, has always applied to the issue of showing the authors a draft of the piece you’ve written about their work. Some journalists consider this an absolute no-no. I’ve never really understood why. If the scientist comes back pointing out technical errors in the piece, as they often do (and almost invariably in the nicest possible way), you get to give your readers a more accurate account. If they start demanding changes that seem unnecessary, interfering or pedantic, for example insisting that Professor Plum’s comments on their work are way off key, you just say sorry guys, this is the way it stays. That’s surely the job of a journalist. I can’t remember a time when feedback from authors on a rough draft was ever less than helpful and improving. So I guess I just don’t see what the problem is here.
But I am very conscious that I’ve never had any real training, as far as I can recall, in ethics in journalism. So I might be out of touch with what the issues are.
Multiverse of Stone
This summer I went to one of the most extraordinary scientific gatherings I’ve ever attended. Where else would you find Martin Rees, Rolf Heuer, Carlos Frenk, Alex Vilenkin and Bernard Carr assembled to talk about the multiverse idea? The meeting was convened by architect and designer Charles Jencks to mark the opening of his remarkable new landscape, the Crawick Multiverse, in Dumfries on the Scottish borders. And the setting was no less striking: it took place in Drumlanrig Castle, a splendid baronial edifice that is the ancestral home of the Duke of Buccleuch, whose generosity and hospitality made it probably the most congenial meeting I’ve ever been to. Representing the humanities were Mary-Jane Rubenstein, whose excellent book Worlds Without End (2014) places the multiverse in historical and theological perspective, Martin Kemp, who talked about spirals in nature (look out for Martin’s forthcoming book Structural Intuitions) and Michael Benson, whose Cosmigraphics (2014) shows how we have depicted and conceptualized the universe over time. I talked about pattern formation in nature.
Despite all this, the piece that I wrote about the event has not found a home, having fallen between too many stools in various potential forums. So I’ll put it here. You will also be able to download a pdf of this article from my website here, once some site reworking has been completed.
______________________________________________________________________
With the Crawick Multiverse, landscape architect and designer Charles Jencks has set the archaeologists of the future a delightful puzzle. They will spin theories of various degrees of fancifulness to explain why this earthwork was built in the rather beautiful but undeniably stark wilds of Dumfries and Galloway. Is there a cosmic significance in the alignment of the stone-flanked avenue? What do these twinned spiralling tumuli denote, these little crescent lagoons, these radial splashes of stone paving? Whence these cryptic inscriptions “Balanced Universe” and “PIC” on slabs and monoliths?
The Crawick Multiverse
If any future historian is on hand to explain, there are two ways in which her story might go. Either she will say that the monument marks the moment when ancient science awoke to the realization that, as every child now knows, ours is not the only universe but is merely one among the multiverse of worlds, all springing perpetually into existence in an expanding matrix of “false vacuum”, each with its unique laws of physics. Or she will explain (with a warning that we should not Whiggishly mock the seemingly odd and absurd ideas of the past) that the Crawick site was built at a time when scientists seriously entertained so peculiar and now obviously misguided a notion.
If only we could tell which way it will go! But right now, that’s anyone’s guess. Whatever the outcome, Jencks, the former theorist of postmodernism who today takes risks simultaneously intellectual, aesthetic, critical and financial in his efforts to represent scientific ideas about the cosmos at a herculean scale, has created an extraordinarily ambitious landscape that manages to blend Goldsworthy-style nature art with cutting-edge cosmology and more than a touch of what might be interpreted as New Age paganism. At the grand opening of the Crawick (pronounced “Croyck”) Multiverse in late June, no one seemed too worried about whether the science would stand up to scrutiny. Instead there were pipe bands, singing schoolchildren, performance art and generous blasts of Caledonian weather.
Jencks is no stranger to this kind of grand statement. His house at Portrack, near Dumfries and a 30-minute drive from Crawick, sits amidst the Garden of Cosmic Speculation, a landscape of undulating turf terraces, stones, water pools and ornate metal sculptures that represents all manner of scientific ideas, from the spacetime-bending antics of black holes and the helical forms of DNA to mathematical fractals and the “symmetry-breaking” shifts that produced structure and order in the early universe. Jencks opens the garden to the public for one day each year to raise funds for Maggie’s Centres, the drop-in centres for cancer patients that Jencks established after the death of his wife Maggie Keswick Jencks from cancer in 1995.
A panorama of Charles Jencks’ Garden of Cosmic Speculation at Portrack House, Dumfries. (Photo: Michael Benson.)
Jencks also designed the lawn that fronts the Scottish National Gallery of Modern Art in Edinburgh, a series of crescent-shaped stepped mounds and pools inspired by chaos theory and “the way nature organizes itself”, in Jencks’ words. By drawing on cutting-edge scientific ideas, Jencks has cultivated strong ties with scientists themselves, and a plan for a landscape at the European particle-physics centre of CERN, near Geneva, sits on the shelf, awaiting funding.
Charles Jencks’ science-inspired land art in the Garden of Cosmic Speculation (top) and the garden of the Scottish National Gallery of Modern Art in Edinburgh (bottom).
The Multiverse project began when the Duke of Buccleuch and Queensberry, whose ancestral home at Drumlanrig Castle stands near to Crawick, asked Jencks to reclaim the site, dramatically surrounded by rolling hills but disfigured by the slag heaps from open-cast coal mining. When work began in 2012, the excavations unearthed thousands of boulders half-buried in the ground, which Jencks has used to create a panorama of standing stones and sculpted tumuli.
“As we discovered more and more rocks, we laid out the four cardinal points, made the north-south axis the primary one, and thereby framed both the far horizons and the daily and monthly movements of the sun”, Jencks says. “One theory of pre-history is that stone circles frame the far hills and key points, and while I wanted to capture today’s cosmology not yesterday’s, I was aware of this long landscape tradition.”
Visitors to the site should recognize the spiral form of our own Milky Way Galaxy, Jencks says – but the layout invites them to delve deeper into cosmogenic origins. The Milky Way, he says, “emerged from our Local Group of galaxies, but where did they come from? From the supercluster of galaxies, and where did they come from? From the largest structures in the universe, the web of filaments? And so on and on.” Ultimately this leads to the questions confronted by theories of the Big Bang in which our own universe is thought to have formed – and to questions about whether this cosmic outburst, or others, might also have spawned other universes, or a multiverse.
How many universes do you need?
A decade or two ago, allusions to the notion that there are many – perhaps infinitely many – universes would have been regarded as dabbling on the fringes of respectable science. Now the multiverse idea is embraced by many leading cosmologists and other physicists. That’s not because we have any evidence for it, but because it seems to offer a simultaneous resolution to several outstanding problems on the wild frontier where fundamental physics – the science of the immeasurably small – blends with cosmology, which attempts to explain the origin and evolution of all the vastness of space.
“In the last twenty years the multiverse has developed from an exotic speculation into a full-blown theory”, says Jencks. “From avant-garde conjecture held by the few to serious hypothesis entertained by the many, leading thinkers now believe the multiverse is a plausible description of an ensemble of universes.”
To explore how the multiverse came in from the cold, Jencks convened a gathering of cosmologists and particle physicists whose eminence would rival the finest of international science conventions. While the opening celebrations braved the elements at Crawick, the scientists were hosted by the duke at Drumlanrig Castle – perhaps the most stunning example of French-inflected Scottish baronial architecture, fashioned from the gorgeous red stone of Dumfries. In one long afternoon while the sun conveyed its rare blessing on the jaw-dropping gardens outside, these luminaries explained to an invited audience why they have come to suppose a multiplicity of universes beyond all reasonable measure: why an understanding of the deepest physical laws is compelling us to make the position of humanity in the cosmos about as insignificant as it could possibly be.
Drumlanrig Castle near Dumfries, where scientists convened to discuss the multiverse.
It was a decidedly surreal gathering, with PowerPoint presentations on complex physics amidst Louis XIV furniture, while massive portraits of the duke’s illustrious ancestors (including Charles II’s unruly illegitimate son the 1st Duke of Monmouth) looked on. When art historian Martin Kemp, opening the proceedings with a survey of spiral patterns, discussed the nature art of Andy Goldsworthy, only to have the artist himself pop up to explain his intentions, one had to wonder if we had already strayed into some parallel universe.
Martin Rees, Astronomer Royal and past President of the Royal Society, suggested that the multiverse theory represents a “fourth Copernican revolution”: the fourth time since Copernicus shoved the earth away from the centre of creation that we have been forced to downgrade our status in the heavens. Yet curiously, this latest perspective also gives our very existence a central role in any explanation of why the basic laws of nature are the way they are.
Here’s the problem. A quest for the much-vaunted Theory of Everything – a set of “simple” laws, or perhaps just a single equation, from which all the other principles of physics can be derived, and which will achieve the much-sought reconciliation of gravity and quantum theory – has landed us in the perplexing situation of having more alternatives to choose from than there are fundamental particles in the known universe. To be precise, the latest version of string theory, which many physicists who delve into these waters insist is the best candidate for a “final theory”, offers 10^500 (1 followed by 500 zeros) distinct solutions: that many possible variations on the laws of physics, with no obvious reason to prefer one over any other. Some are tempted to conclude that this is the fault of string theory, not of the universe, and so prefer to ditch the whole edifice, which without doubt is built on some debatable assumptions and remains far beyond our means to test directly for the foreseeable future.
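For a rough sense of that scale – a back-of-envelope comparison of my own, using the commonly quoted estimate of around 10^80 particles in the observable universe, not a figure presented at the meeting:

\[
N_{\text{string vacua}} \sim 10^{500} \qquad \text{versus} \qquad N_{\text{particles}} \sim 10^{80},
\]

so the landscape of possible laws outnumbers the particles that would have to obey them by a factor of roughly 10^420.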
If that were all there was to it, you might well wonder if indeed we should be wiping the board clean and starting again. But cosmology now suggests that this crazy proliferation of physical laws can be put to good use. The standard picture of the Big Bang – albeit not the one that all physicists embrace – posits that, a fraction of a second after the universe began to expand from its mysterious origin, it underwent a fleeting instant of expansion at an enormous rate, far faster than the speed of light, called inflation. This idea explains – in what might seem a paradox, but is not – both why the universe is so uniform everywhere we look and why it is not perfectly so. Inflation blew up the “fireball” to a cosmic scale before it had a chance to get too clumpy.
That primordial state would, however, have been unavoidably ruffled by the tiny chance variations that quantum physics creates. These fluctuations are now preserved at astronomical scales in slight differences in temperature of the cosmic microwave background radiation, the faint afterglow of the Big Bang itself that several satellite-based telescopes have now mapped out in fine detail. As astrophysicist Carlos Frenk explained at Drumlanrig, the match between the spectrum of temperature variations – their size at different distance scales – predicted by inflationary theory and that measured is so good that, were it not so well attested in so huge an international effort, it would probably arouse suspicions of data-rigging.
The temperature variations of the cosmic microwave background, as mapped by the European Space Agency’s Planck space telescope in 2013. The tiny variations correspond to regions of slightly different density in the very early universe that seeded the formation of clumps of matter – galaxies and stars – today.
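To put a rough number on those “slight differences” – this is a standard figure rather than one quoted at the meeting – the mapped fluctuations amount to temperature differences of the order of a hundred microkelvin against a mean background temperature of about 2.7 K:

\[
\frac{\Delta T}{T} \sim \frac{10^{-4}\ \text{K}}{2.7\ \text{K}} \sim 10^{-5},
\]

about one part in a hundred thousand.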
What has this got to do with multiverses? Well, to put it one way: if you have a theory for how the Big Bang happened as a natural phenomenon, almost by definition you no longer have reason to regard it as a one-off event. The current view is that the Big Bang itself was a kind of condensation of energy-filled empty space – the “true vacuum” – out of an unstable medium called the “false vacuum”, much as mist condenses from the moist air of the Scottish hills. But this false vacuum, for reasons I won’t attempt to explain, should also be subject to a kind of inflation in which it expands at fantastic speed. Then our universe appears as a sort of growing “bubble” in the false vacuum. But others do too: not just 13.8 billion years ago (the age of our universe) but constantly. It’s a scenario called “eternal inflation”, as one of its pioneers, cosmologist Alex Vilenkin, explained at the meeting. In this view, there are many, perhaps infinitely many, universes appearing and growing all the time.
The reason this helps with string theory is that it relieves us of the need to select any one of the 10^500 solutions it yields. There are enough homes for all versions. That’s not just a matter of accommodating homeless solutions to an equation. One of the most puzzling questions of modern cosmology is why the vacuum is not stuffed full of unimaginable amounts of energy. Quantum theory predicts that empty space should be so full of particles popping in and out of existence all the time, just because they can, that it should hold far more energy than the interior of a star. Evidently it doesn’t, and for a long time it was simply assumed that some unknown effect must totally purge the vacuum of all this energy. But the discovery of dark energy in the late 1990s – which manifests itself as an acceleration of the expansion of our universe – forced cosmologists to accept that a tiny amount of that vacuum energy does in fact remain. In this view, that’s precisely what dark energy is. Yet it is so tiny an amount – 10^-122 of what is predicted – that it seems almost a cosmic joke that the cancellation should be so nearly complete but not quite.
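To see where a number like 10^-122 comes from – this is the standard back-of-envelope estimate, not a calculation presented at Drumlanrig – compare the energy scale of the observed dark energy, around 2×10^-3 eV, with the Planck scale of around 10^19 GeV at which quantum theory’s naive prediction is usually cut off:

\[
\frac{\rho_\Lambda^{\text{obs}}}{\rho_{\text{vac}}^{\text{pred}}} \sim \frac{(2\times 10^{-3}\ \text{eV})^4}{(10^{19}\ \text{GeV})^4} \sim \frac{10^{-47}\ \text{GeV}^4}{10^{76}\ \text{GeV}^4} \sim 10^{-123},
\]

within an order of magnitude or so of the figure quoted above; the spread simply reflects how roughly the cutoff is chosen.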
But if there is a multiverse, this puzzle of “fine-tuning” goes away. We just happen to be living in one of the universes in which the laws of nature are, out of all the versions permitted by string theory, set up this way. Doesn’t that seem like extraordinary good fortune? Well no, because without this near cancellation of the vacuum energy, atoms could not exist, and so neither could ordinary matter, stars – or us. In any universe in which these conditions pertain, intelligent beings might be scratching their heads over this piece of apparent providence. In those – far more numerous – where that’s not the case, there is no one to lament it.
The pieces of the puzzle, bringing together the latest ideas in cosmology and fundamental physics, seem suddenly to dovetail rather neatly. Too neatly for some, who say that such arguments are metaphysical sleight of hand – a kind of cheating in which we rescue ourselves from theoretical problems not by solving them but by dressing them up as their own solution. How can we test these assertions, they ask? And isn’t it defeatist to accept that there’s ultimately no fundamental reason why the fundamental constants of nature have the values they do, because in other universes they don’t?
But there’s no denying that, without the multiverse, the “fine-tuning” problem of dark energy alone looks tailor-made for a theologian’s “argument by design”. If you don’t want a God, astrophysicist Bernard Carr has quipped (only half-jokingly), you’d better have a multiverse. It’s not the first time a “plurality of worlds” has sparked theological debate, as philosopher of religion Mary-Jane Rubenstein reminded the Drumlanrig gathering – Giordano Bruno’s interpretation (albeit not simply his assertion) of such a multiplicity was partly what got the Dominican friar burnt at the stake in 1600.
Do these questions drift beyond science into metaphysics? Perhaps – but, Carr asked the meeting, why should we worry about that? At the very least, if true science must be testable, who is to say on what timescale it must happen? (The current realistic possibilities at CERN are certainly more modest, as its Director General Rolf Heuer explained – but even they don’t exclude an exploration of other types of multiverse ideas, such as a search for the mini-black holes predicted by some theories that invoke extra, “invisible” dimensions of space beyond our familiar three.)
Reclaiming the multiverse
How much of all this finds its way into Jencks’ Crawick Multiverse is another matter. In line with his thinking about the hierarchy of “cosmic patterns” through which we trace our place in the cosmos, many of the structures depict our immediate environment. Two corkscrew hillocks represent the Milky Way galaxy and its neighbour Andromeda, while the local “supercluster” of galaxies becomes a gaggle of rock-paved artificial drumlins. The Sun Amphitheatre, which can house 5,000 people (though it’s a brave soul who organizes outdoor performances on a Scottish hillside at any time of year), is designed to depict the crescent shapes of a solar eclipse. The Multiverse itself is a mound up which mudstone slabs trace a spiral path, some of them carved to symbolize the different kinds of universe the theory predicts.
The local universe represented in the Crawick Multiverse.
But why create a Multiverse on a Scottish hillside anyway? Because, Jencks says, “it is our metaphysics, or at least is fast becoming so. And all art aspires to the condition of its present metaphysics. That’s so true today, in the golden age of cosmology, when the boundaries of truth, nature, and culture are being rewritten and people are again wondering in creative ways about the big issues.” He continues: “I wanted to confront the basic question which so many cosmologists raise: why is our universe so well-balanced, and in so many ways? What does the apparent fine-tuning mean, how can we express it, make it comprehensible, palpable?”
“Apart from all this”, he adds, “if you have a 55-acre site, and almost half the available money has to go into decontamination alone, then you’d better have a big idea for 2000 free boulders.”
Charles Jencks introduces his multiverse. (Photo: Michael Benson.)
The sculptures and forms of the Crawick Multiverse reflect Jencks’ own unique and sometimes impressionistic take on the theories. For example, he prefers to replace “anthropic” reasoning – which invokes our own existence as observers to explain apparent contingencies – with the notion that this universe (at least) has a tendency to spawn ever more complexity: his Principle of Increasing Complexity (PIC). He is critical of some of science’s “Pentagon metaphors” – wimps and machos (candidates for the mysterious dark matter that exceeds the amount of ordinary visible matter by a factor of around five), selfish genes and so on. “The universe did not start in a big bang”, Jencks says. “It was smaller than a quark, and noise wasn’t its most significant quality.” He prefers the term “Hot Stretch”.
But his intention isn’t really pedagogical – it’s about giving some meaning to this former site of mining-induced desolation. “I hope to achieve, first, something for the economically depressed coal-mining towns in the area”, Jencks says. “Richard [Buccleuch] had an obligation to make good the desolation, and he feels this responsibility strongly. I wanted to create something that related to this local culture. Like Arte Povera it makes use of what is to hand: virtually everything comes from the site, or three miles away. Second, I was keen on getting an annual festival based on local culture – the pipers in the area, the Riding of the Marches, the performing artists, the schools.”
Visitors to the site seem likely to be offered only the briefest of introductions to the underlying cosmic themes. That’s probably as it should be, not only because the theories are so provisional (they’ll surely look quite different in 20 years’ time, when the earthworks have had a chance to bed themselves into the landscape) but because, just like the medieval cosmos encoded in the Gothic cathedrals, this sort of architecture is primarily symbolic. It will speak to us not like a lecture, but through what Martin Kemp has called “structural intuitions”, an innate familiarity with the patterns of the natural world. Some scientists might look askance at any suggestion that the Crawick Multiverse can be seen as a sacred place. But it’s hard to imagine how even the most secular of them, if they really take the inflationary multiverse seriously, could fail to find within it some of the awe that a peasant from the wheatfields of the Beauce must have experienced on entering the nave of Chartres Cathedral – a representation in stone of the medieval concept of an orderly Platonic universe – and stepping into its cosmic labyrinth.
Despite all this, the piece that I wrote about the event has not found a home, having fallen between too many stools in various potential forums. So I’ll put it here. You will also be able to download a pdf of this article from my website here, once some site reworking has been completed.
______________________________________________________________________
With the Crawick Multiverse, landscape architect and designer Charles Jencks has set the archaeologists of the future a delightful puzzle. They will spin theories of various degrees of fancifulness to explain why this earthwork was built in the rather beautiful but undeniably stark wilds of Dumfries and Galloway. Is there a cosmic significance in the alignment of the stone-flanked avenue? What do these twinned spiralling tumuli denote, these little crescent lagoons, these radial splashes of stone paving? Whence these cryptic inscriptions “Balanced Universe” and “PIC” on slabs and monoliths?

The Crawick Multiverse
If any futurist historian is on hand to explain, there are two ways in which her story might go. Either she will say that the monument marks the moment when ancient science awoke to the realization that, as every child now knows, ours is not the only universe but is merely one among the multiverse of worlds, all springing perpetually into existence in an expanding matrix of “false vacuum”, each with its unique laws of physics. Or she will explain (with a warning that we should not Whiggishly mock the seemingly odd and absurd ideas of the past) that the Crawick site was built at a time when scientists seriously entertained so peculiar and now obviously misguided a notion.
If only we could tell which way it will go! But right now, that’s anyone’s guess. Whatever the outcome, Jencks, the former theorist of postmodernism who today takes risks simultaneously intellectual, aesthetic, critical and financial in his efforts to represent scientific ideas about the cosmos at a herculean scale, has created an extraordinarily ambitious landscape that manages to blend Goldsworthy-style nature art with cutting-edge cosmology and more than a touch of what might be interpreted as New Age paganism. At the grand opening of the Crawick (pronounced “Croyck”) Multiverse in late June, no one seemed too worried if the science will stand up to scrutiny. Instead there were pipe bands, singing schoolchildren, performance art and generous blasts of Hibernian weather.
Jencks is no stranger to this kind of grand statement. His house at Portrack, near Dumfries and a 30-minute drive from Crawick, sits amidst the Garden of Cosmic Speculation, a landscape of undulating turf terraces, stones, water pools and ornate metal sculptures that represents all manner of scientific ideas, from the spacetime-bending antics of black holes and the helical forms of DNA to mathematical fractals and the “symmetry-breaking” shifts that produced structure and order in the early universe. Jencks opens the garden to the public for one day each year to raise funds for Maggie’s Centres, the drop-in centres for cancer patients that Jencks established after the death of his wife Maggie Keswick Jencks from cancer in 1995.

A panorama of Charles Jencks’ Garden of Cosmic Speculation at Portrack House, Dumfries. (Photo: Michael Benson.)
Jencks also designed the lawn that fronts the Scottish National Gallery of Modern Art in Edinburgh, a series of crescent-shaped stepped mounds and pools inspired by chaos theory and “the way nature organizes itself”, in Jencks’ words. By drawing on cutting-edge scientific ideas, Jencks has cultivated strong ties with scientists themselves, and a plan for a landscape at the European particle-physics centre of CERN, near Geneva, sits on the shelf, awaiting funding.


Charles Jencks’ science-inspired land art in the Garden of Cosmic Speculation (top) and the garden of the Scottish National Gallery of Modern Art in Edinburgh (bottom).
The Multiverse project began when the Duke of Buccleuch and Queensberry, whose ancestral home at Drumlanrig Castle stands near to Crawick, asked Jencks to reclaim the site, dramatically surrounded by rolling hills but disfigured by the slag heaps from open-cast coal mining. When work began in 2012, the excavations unearthed thousands of boulders half-buried in the ground, which Jencks has used to create a panorama of standing stones and sculpted tumuli.
“As we discovered more and more rocks, we laid out the four cardinal points, made the north-south axis the primary one, and thereby framed both the far horizons and the daily and monthly movements of the sun”, Jencks says. “One theory of pre-history is that stone circles frame the far hills and key points, and while I wanted to capture today’s cosmology not yesterday’s, I was aware of this long landscape tradition.”
Visitors to the site should recognize the spiral form of our own Milky Way Galaxy, Jencks says – but the layout invites them to delve deeper into cosmogenic origins. The Milky Way, he says, “emerged from our Local Group of galaxies, but where did they come from? From the supercluster of galaxies, and where did they come from? From the largest structures in the universe, the web of filaments? And so on and on.” Ultimately this leads to the questions confronted by theories of the Big Bang in which our own universe is thought to have formed – and to questions about whether this cosmic outburst, or others, might also have spawned other universes, or a multiverse.
How many universes do you need?
A decade or two ago, allusions to the notion that there are many – perhaps infinitely many – universes would have been regarded as dabbling on the fringes of respectable science. Now the multiverse idea is embraced by many leading cosmologists and other physicists. That’s not because we have any evidence for it, but because it seems to offer a simultaneous resolution to several outstanding problems on the wild frontier where fundamental physics – the science of the immeasurably small – blends with cosmology, which attempts to explain the origin and evolution of all the vastness of space.
“In the last twenty years the multiverse has developed from an exotic speculation into a full-blown theory”, says Jencks. “From avant-garde conjecture held by the few to serious hypothesis entertained by the many, leading thinkers now believe the multiverse is a plausible description of an ensemble of universes.”
To explore how the multiverse came in from the cold, Jencks convened a gathering of cosmologists and particle physicists whose eminence would rival the finest of international science conventions. While the opening celebrations braved the elements at Crawick, the scientists were hosted by the duke at Drumlanrig Castle – perhaps the most stunning example of French-inflected Scottish baronial architecture, fashioned from the gorgeous red stone of Dumfries. In one long afternoon while the sun conveyed its rare blessing on the jaw-dropping gardens outside, these luminaries explained to an invited audience why they have come to suppose a multiplicity of universes beyond all reasonable measure: why an understanding of the deepest physical laws is compelling us to make the position of humanity in the cosmos about as insignificant as it could possibly be.

Drumlanrig Castle near Dumfries, where scientists convened to discuss the multiverse.
It was a decidedly surreal gathering, with PowerPoint presentations on complex physics amidst Louis XIV furniture, while massive portraits of the duke’s illustrious ancestors (including Charles II’s unruly illegitimate son, the 1st Duke of Monmouth) looked on. And when art historian Martin Kemp, opening the proceedings with a survey of spiral patterns, discussed the nature art of Andy Goldsworthy, the artist himself popped up to explain his intentions – at which point one had to wonder whether we had already strayed into some parallel universe.
Martin Rees, Astronomer Royal and past President of the Royal Society, suggested that the multiverse theory represents a “fourth Copernican revolution”: the fourth time since Copernicus shoved the earth away from the centre of creation that we have been forced to downgrade our status in the heavens. Yet curiously, this latest perspective also gives our very existence a central role in any explanation of why the basic laws of nature are the way they are.
Here’s the problem. A quest for the much-vaunted Theory of Everything – a set of “simple” laws, or perhaps just a single equation, from which all the other principles of physics can be derived, and which will achieve the much-sought reconciliation of gravity and quantum theory – has landed us in the perplexing situation of having more alternatives to choose from than there are fundamental particles in the known universe. To be precise, the latest version of string theory, which many physicists who delve into these waters insist is the best candidate for a “final theory”, offers 10^500 (1 followed by 500 zeros) distinct solutions: that many possible variations on the laws of physics, with no obvious reason to prefer one over any other. Some are tempted to conclude that this is the fault of string theory, not of the universe, and so prefer to ditch the whole edifice, which without doubt is built on some debatable assumptions and remains far beyond our means to test directly for the foreseeable future.
If that were all there was to it, you might well wonder if indeed we should be wiping the board clean and starting again. But cosmology now suggests that this crazy proliferation of physical laws can be put to good use. The standard picture of the Big Bang – albeit not the one that all physicists embrace – posits that, a fraction of a second after the universe began to expand from its mysterious origin, it underwent a fleeting instant of expansion at an enormous rate, far faster than the speed of light, called inflation. This idea explains – in what might seem a paradox, but is not – both why the universe is so uniform everywhere we look and why it is not perfectly so. Inflation blew up the “fireball” to a cosmic scale before it had a chance to get too clumpy.
That primordial state would, however, have been unavoidably ruffled by the tiny chance variations that quantum physics creates. These fluctuations are now preserved at astronomical scales as slight differences in temperature of the cosmic microwave background radiation, the faint afterglow of the Big Bang itself, which several satellite-based telescopes have now mapped out in fine detail. As astrophysicist Carlos Frenk explained at Drumlanrig, the match between the spectrum of temperature variations – their size at different distance scales – predicted by inflationary theory and the spectrum actually measured is so good that, were it not attested by so huge an international effort, it would probably arouse suspicions of data-rigging.

The temperature variations of the cosmic microwave background, as mapped by the European Space Agency’s Planck space telescope in 2013. The tiny variations correspond to regions of slightly different density in the very early universe that seeded the formation of clumps of matter – galaxies and stars – today.
What has this got to do with multiverses? Well, to put it one way: if you have a theory for how the Big Bang happened as a natural phenomenon, almost by definition you no longer have reason to regard it as a one-off event. The current view is that the Big Bang itself was a kind of condensation of energy-filled empty space – the “true vacuum” – out of an unstable medium called the “false vacuum”, much as mist condenses from the moist air of the Scottish hills. But this false vacuum, for reasons I won’t attempt to explain, should also be subject to a kind of inflation in which it expands at fantastic speed. Then our universe appears as a sort of growing “bubble” in the false vacuum. But others do too: not just 13.8 billion years ago (the age of our universe) but constantly. It’s a scenario called “eternal inflation”, as one of its pioneers, cosmologist Alex Vilenkin, explained at the meeting. In this view, there are many, perhaps infinitely many, universes appearing and growing all the time.
The reason this helps with string theory is that it relieves us of the need to select any one of the 10^500 solutions it yields. There are enough homes for all versions. That’s not just a matter of accommodating homeless solutions to an equation. One of the most puzzling questions of modern cosmology is why the vacuum is not stuffed full of unimaginable amounts of energy. Quantum theory predicts that empty space should be so full of particles popping in and out of existence all the time, just because they can, that it should hold far more energy than the interior of a star. Evidently it doesn’t, and for a long time it was simply assumed that some unknown effect must totally purge the vacuum of all this energy. But the discovery of dark energy in the late 1990s – which manifests itself as an acceleration of the expansion of our universe – forced cosmologists to accept that a tiny amount of that vacuum energy does in fact remain. In this view, that’s precisely what dark energy is. Yet it is so tiny an amount – 10^-122 of what is predicted – that it seems almost a cosmic joke that the cancellation should be so nearly complete but not quite.
But if there is a multiverse, this puzzle of “fine-tuning” goes away. We just happen to be living in one of the universes in which the laws of nature are, out of all the versions permitted by string theory, set up this way. Doesn’t that seem an extraordinary piece of good fortune? Well no, because without this near cancellation of the vacuum energy, atoms could not exist, and so neither could ordinary matter, stars – or us. In any universe in which these conditions pertain, intelligent beings might be scratching their heads over this piece of apparent providence. In those – far more numerous – universes where that’s not the case, there is no one to lament it.
The pieces of the puzzle, bringing together the latest ideas in cosmology and fundamental physics, seem suddenly to dovetail rather neatly. Too neatly for some, who say that such arguments are metaphysical sleight of hand – a kind of cheating in which we rescue ourselves from theoretical problems not by solving them but by dressing them up as their own solution. How, they ask, can we test these assertions? And isn’t it defeatist to accept that there’s ultimately no fundamental reason why the fundamental constants of nature have the values they do, because in other universes they don’t?
But there’s no denying that, without the multiverse, the “fine-tuning” problem of dark energy alone looks tailor-made for a theologian’s “argument by design”. If you don’t want a God, astrophysicist Bernard Carr has quipped (only half-jokingly), you’d better have a multiverse. It’s not the first time a “plurality of worlds” has sparked theological debate, as philosopher of religion Mary-Jane Rubenstein reminded the Drumlanrig gathering: the Dominican friar Giordano Bruno’s interpretation of such a multiplicity (albeit not simply his assertion of it) was partly what got him burnt at the stake in 1600.
Do these questions drift beyond science into metaphysics? Perhaps – but why, Carr asked the meeting, should we worry about that? At the very least, if true science must be testable, who is to say on what timescale the testing must happen? (The current realistic possibilities at CERN are certainly more modest, as its Director General Rolf Heuer explained – but even they don’t exclude exploring some multiverse ideas, such as a search for the mini black holes predicted by some theories that invoke extra, “invisible” dimensions of space beyond our familiar three.)
Reclaiming the multiverse
How much of all this finds its way into Jencks’ Crawick Multiverse is another matter. In line with his thinking about the hierarchy of “cosmic patterns” through which we trace our place in the cosmos, many of the structures depict our immediate environment. Two corkscrew hillocks represent the Milky Way galaxy and its neighbour Andromeda, while the local “supercluster” of galaxies becomes a gaggle of rock-paved artificial drumlins. The Sun Amphitheatre, which can house 5,000 people (though it’s a brave soul who organizes outdoor performances on a Scottish hillside at any time of year), is designed to depict the crescent shapes of a solar eclipse. The Multiverse itself is a mound up which mudstone slabs trace a spiral path, some of them carved to symbolize the different kinds of universe the theory predicts.

The local universe represented in the Crawick Multiverse.
But why create a Multiverse on a Scottish hillside anyway? Because, Jencks says, “it is our metaphysics, or at least is fast becoming so. And all art aspires to the condition of its present metaphysics. That’s so true today, in the golden age of cosmology, when the boundaries of truth, nature, and culture are being rewritten and people are again wondering in creative ways about the big issues.” He continues: “I wanted to confront the basic question which so many cosmologists raise: why is our universe so well-balanced, and in so many ways? What does the apparent fine-tuning mean, how can we express it, make it comprehensible, palpable?”
“Apart from all this”, he adds, “if you have a 55-acre site, and almost half the available money has to go into decontamination alone, then you’d better have a big idea for 2000 free boulders.”

Charles Jencks introduces his multiverse. (Photo: Michael Benson.)
The sculptures and forms of the Crawick Multiverse reflect Jencks’ own unique and sometimes impressionistic take on the theories. For example, he prefers to replace “anthropic” reasoning – which invokes our own existence as observers of the universe to explain its apparent contingencies – with the notion that this universe (at least) has a tendency to spawn ever more complexity: his Principle of Increasing Complexity (PIC). He is critical of some of science’s “Pentagon metaphors” – wimps and machos (candidates for the mysterious dark matter that exceeds the amount of ordinary visible matter by a factor of around five), selfish genes and so on. “The universe did not start in a big bang”, Jencks says. “It was smaller than a quark, and noise wasn’t its most significant quality.” He prefers the term “Hot Stretch”.
But his intention isn’t really pedagogical – it’s about giving some meaning to this former site of mining-induced desolation. “I hope to achieve, first, something for the economically depressed coal-mining towns in the area”, Jencks says. “Richard [Buccleuch] had an obligation to make good the desolation, and he feels this responsibility strongly. I wanted to create something that related to this local culture. Like Arte Povera it makes use of what is to hand: virtually everything comes from the site, or three miles away. Second, I was keen on getting an annual festival based on local culture – the pipers in the area, the Riding of the Marches, the performing artists, the schools.”
Visitors to the site seem likely to be offered only the briefest of introductions to the underlying cosmic themes. That’s probably as it should be, not only because the theories are so provisional (they’ll surely look quite different in 20 years’ time, when the earthworks have had a chance to bed themselves into the landscape) but because, just like the medieval cosmos encoded in the Gothic cathedrals, this sort of architecture is primarily symbolic. It will speak to us not like a lecture, but through what Martin Kemp has called “structural intuitions”, an innate familiarity with the patterns of the natural world. Some scientists might look askance at any suggestion that the Crawick Multiverse can be seen as a sacred place. But it’s hard to imagine how even the most secular of them, if they really take the inflationary multiverse seriously, could fail to find within it some of the awe that a peasant from the wheatfields of the Beauce must have experienced on entering the nave of Chartres Cathedral – a representation in stone of the medieval concept of an orderly Platonic universe – and stepping into its cosmic labyrinth.
