This article on Last Word On Nothing by Cassandra Willyard prompted a fascinating debate – at least if you’re a science writer or have an interest in that business. Some have criticized it as irredeemably philistine for a science writer – honestly, not to know that Hubble refers to a telescope! (Well, many things bear Hubble’s name, so I really don’t see that as so deplorable.) This is shallow criticism – what surely matters is how well a writer does the job she does, not what gaps might exist in her knowledge that never come to light unless she admits to them. The only corollary of that is: know your limits.
Indeed, it makes me think it would be fun to know what areas of science hold no charms for other science writers. No doubt everyone’s blind spots would horrify some others. I long struggled to work up any enthusiasm for human origins. What? How could I not be interested in where we came from? Well, it seemed to me we kind of know where we came from, and roughly when, give or take a million years. We evolved from more primitive hominids. The rest is just detail, right?
Oddly, it is only now that the detail has become so messy – what with Homo floresiensis and Homo naledi and so forth – that I’ve become engaged. Perhaps there’s nothing quite so appealing as blithe complacency undermined. I can’t say I yet care enough to fret about where each branch of the hominid tree should divide, but it’s fun to see all these revelations tumble out, not least because of the drama of some of the new discoveries.
My pulse doesn’t race for much of particle and fundamental physics either. I suspect this is for more partisan, and more dishonourable, reasons: particle physics has somehow managed to nab all the glamour and public attention, to the point that most people think this is what all of physics is, whereas my own former field of condensed matter physics, which has a larger community, never gets a look-in. Meanwhile, particle physicists take the great ideas from CMP (like symmetry breaking) and then claim they invented them. You can see how bitter and twisted I am. So I was rather indifferent about the Higgs – an indifference I know some condensed matter physicists shared.
Some other fields I want to stick up for merely because they’re the underdogs – cell biology and biophysics, say, in the face of genetic hegemony.
So if a science writer admits to being unmoved by space science, it really doesn’t seem occasion to get all affronted. I edited an awful lot of astronomy papers at Nature that made my eyes glaze over, often because they seemed to be (like some of those fossil and protein structure papers) a catalogue of arbitrary specifics. (Though don’t worry, I do love a good protein structure.)
Where I’m more unsure about Cassandra’s article is in the discussion of “the human element”. I suppose this is because it sends a chill down my spine. If the only way for science communication to connect with a broad public is by telling human stories, then I’m done for. I’m just not that interested in doing that (as you might have noticed).
That’s not to say that one shouldn’t make the most of a human element when it’s there. If there’s a way of telling a science story through personalities, it’s generally worth taking. “I might not be interested in gravitational waves, but I am interested in science as a process”, Cassandra writes. “Humanize the process, and you’ll hook me every time.”
Fair enough. But what if there is no human element to speak of? Every science writer will tell you that for every researcher who dreamed from childhood of cracking the problem they have finally conquered, there are ten or perhaps a hundred who came to a problem just because it was a natural extension of what they worked on for their PhD – or because it was a hot topic at the time. And for every colourful maverick or quirky underdog, there are lots of scientists who are perfectly lovely people but really have nothing that distinguishes them from the crowd. It’s always good to ask what drew a researcher to the topic, but often the answers aren’t terribly edifying. And there are only so many times you’re going to be able to tell a story about gravitational waves as a tale of grit and persistence of a few visionaries in the face of scepticism about whether the method would work.
I quickly grew to hate that brand of science writing popular in the early 1990s in which “Jed Raven, a sandy-haired Texan with a charm that would melt glaciers, strode into the lab and boomed ‘Let’s go to work, people!’” Chances are, in retrospect, that Jed Raven was harassing his female postdocs. But honestly, I couldn’t give a toss about how Jed grew up collecting beetles or learning to herd steers or whatever they call them in Texas.
The idea that a science story can be told only if you find the human angle is deadly, but probably quite widespread. Unless you happen to strike lucky, it is likely to make whole areas of science hard to write about at all: health, field anthropology and astronomy will probably do well, inorganic chemistry not so much.
But Cassandra is right to imply that there is sometimes a presumption in science writing (including my own) that this stuff is inherently so interesting that you don’t need a narrative attached – you don’t even need to relate it beyond its own terms. It’s easy to be far too complacent about that. As Tim Radford wisely once said, above every hack’s desk should hang the sign: “No one has to read this crap.”
So what’s the alternative to “the human angle”? I’ll paraphrase Cassandra for the way I see it:
“I might not be interested in X, but I am interested in elegant, beautiful writing. Write well, and you’ll hook me every time.”
Wednesday, November 01, 2017
Tuesday, October 31, 2017
What the Reformation really did for science
There’s something thrilling about how, five centuries ago, the rebel monk Martin Luther defied his accusers. At an imperial assembly (a ‘Diet’) in the German city of Worms in 1521, his safety and possibly his life were on the line. But this – his supporters avowed – was how he concluded his defence before the representatives of the pope Leo X:
"I cannot and will not retract anything, since it is neither safe nor right to go against conscience. I cannot do otherwise. Here I stand, may God help me."
You have to admit he showed more guts than Galileo did a century later when the Catholic church insisted that he recant his support for the heliocentric cosmos of Copernicus. Galileo, elderly and cowed by the veiled threat of torture, did what the cardinals ordered. The legend that he muttered “Still it [the earth] moves” as he rose from kneeling is probably apocryphal.
Yet is there any link between these two challenges to the authority of Rome? Protestantism was launched five hundred years ago this month when Luther, an Augustinian cleric, allegedly nailed his 95 “theses” to the church door in Wittenberg. Was it this theological revolution that turned the intellectual tide, ushering in the so-called Scientific Revolution that kicked off with Galileo?
Asking that question is, even now, a good way to spark an argument between historians. They don’t, in general, threaten one another with excommunication, the rack and the pyre – but the debate can still be as heated as arguments between Catholics and Protestants.
Yet it’s probably not only futile but also beside the point to ask who is right. The debate highlights how, at the dawn of early modern science, what people thought about the natural world was inflected by what they thought about tradition, knowledge and religion. It makes no sense to regard the Scientific Revolution as an invention of a few bright sparks, independent of other social and cultural forces. And while narratives with heroes and villains might make for good stories, they are usually bad history.
Disenchanting the world
The idea that science was boosted by Protestantism was fueled by a 1938 book by the sociologist Robert Merton, who argued that an English strand of the religious movement called Puritanism helped foster science in England in the seventeenth century, exemplified by the work of Isaac Newton and his peers at the Royal Society. “From the 1960s to the 80s, historians of science endlessly and inconclusively debated the Merton thesis”, says historian of science David Wootton of the University of York. “Those debates have fallen quiet, but the assumption is still widespread that Protestant religion and the new science were somehow inextricably intertwined.”
Merton’s idea tied in with a widespread perception of those early Protestants as progressive. That view in turn stemmed from early twentieth-century sociologist Max Weber’s argument that Western capitalism arose from the “Protestant work ethic”. In particular, says historian of science and religion Sachiko Kusukawa of the University of Cambridge, Weber proposed “what is now called the ‘disenchantment’ thesis – the idea that Protestants got rid of ‘superstition’”.
But this picture of Protestants as open forward-thinkers and Catholics as conservative, anti-science reactionaries is an old myth that has long been rejected by experts, says Kusukawa. She says this black-and-white view of the Catholic church was shaped by two 19th-century Americans with an agenda: educator Andrew Dickson White and chemist John William Draper. The Draper-White thesis, which presented science and religion as historical enemies, was consciously constructed by distorting history, and historians have been debunking it ever since.
In one view, then, religion was pretty irrelevant. “The Scientific Revolution would have gone ahead with or without the Reformation”, Wootton asserts. Historian and writer James Hannam agrees, saying that “evidence that the Reformation had any material effect on the rise of science is almost impossible to isolate from other effects.”
But historian Peter Harrison of the University of Queensland counters that “the Protestant Reformation was an important factor in the Scientific Revolution”. The Puritan religious values of some English Protestants, he says, “gave legitimacy to the scientific endeavour when it needed it.”
“No matter how much historical evidence and argumentation is brought in to oppose any such claims”, says historian of science John Henry of Edinburgh University, “there always remains an unshakable feeling that, after all, there really is something to the thesis that Protestantism stimulated scientific development.”
Questioning authority
The Draper-White “conflict thesis” between science and religion still has advocates today, especially among scientists. Evolutionary biologist Jerry Coyne has proposed that “if after the fall of Rome atheism [and not Christianity] had pervaded the Western world, science would have developed earlier and be far more advanced than it is now.”
Not only is this sheer speculation; it also demands a highly selective view of the interactions between science and religion in history. It’s true that Christian worship was, for many people in the Renaissance, surrounded by what even priests of that time considered superstition. The communion host was believed to have magical healing powers, and the magic incantation “Hocus pocus” is suspected to be a corruption of the ecclesiastical Latin “Hoc est corpus meum”: This is my body.
But Catholic and Protestant theologians alike lamented this muddying of Christian doctrine by folk beliefs. Plenty of them saw no real conflict between their religious convictions and the study of the physical world. Some of the best astronomers in Italy in the early seventeenth century were Catholic Jesuits, such as the German Christopher Clavius and the Italian Orazio Grassi. What’s more, the church raised little objection to Nicolaus Copernicus’s book De revolutionibus when it was published in 1543, even though it challenged the earth-centred picture of the cosmos described by Ptolemy of Alexandria in the second century AD. Copernicus himself was a Catholic canon in Frombork (now in Poland), and he dedicated the book to the pope, Paul III.
Questioning the traditional knowledge about the world taught at the universities – the natural philosophy and medicine of Aristotle, Hippocrates, Ptolemy, Galen and other ancient Greek and Roman writers – began well before the Reformation got underway. And that challenge was initiated largely in Italy, the seat of the Roman church, by late fifteenth-century scholars like Marsilio Ficino and Pico della Mirandola.
These men questioned whether something was true just because it was written in an old book. They and others started to argue that the most reliable way to get knowledge was from direct experience: to look for yourself. That view was supported by the sixteenth-century Swiss physician and alchemist Paracelsus, whom some called even in his lifetime the “Luther of medicine”. Again it was in Italy that this recourse to experience – and ultimately to something resembling the idea of experiment – was often to be found. The physician Andreas Vesalius conducted human dissections (about which officials in Vesalius’s Padua were quite permissive), which led him to dispute Galen’s anatomy in his seminal 1543 book De humani corporis fabrica. In Naples in the 1550s, the polymath Giambattista della Porta began to experiment with lenses and optics, and he pretty much described the telescope well before Galileo, on hearing of the instrument’s invention in Holland, made one to survey the heavens. Della Porta was persuaded in his old age to join the select group of young Italian natural philosophers called the Academy of Lynxes, of which Galileo also became a member.
It’s not hard, then, to build up a narrative of the emergence of science that makes barely any connection with the religious upheavals of the Reformation: leading from the Renaissance humanism of Ficino, through Vesalius to Galileo and early ‘scientific societies’, and culminating in the Scientific Revolution and the Royal Society in London, with luminaries like Robert Boyle, Robert Hooke and Isaac Newton, whose discoveries are still used in science today.
But – this is history after all – it wasn’t that simple. “It would be remarkable if the tumultuous religious upheavals of the sixteenth century, and the subsequent schism between Catholics and Protestants, did not leave an indelible mark on an emerging modern science”, says Harrison. “So the real question is not whether these events influenced the rise of modern science, but how.”
Against reason
The Draper-White thesis relied on a caricature of history, particularly in regard to Galileo. The way the church treated him was surely appalling, but today historians recognize that a less provocative person might well have got away with publishing his heliocentric views. It didn’t help, for example, that the simpleton defending the old Ptolemaic universe in the book that caused all the trouble, Galileo’s Dialogue on the Two Chief World Systems (1632), was a thinly veiled portrait of (among others) the pope Urban VIII.
Not only did some Catholics study and support what we’d now call science – as various clerics had done throughout the Middle Ages – but there’s no reason to think that the Protestants were intrinsically more progressive or “scientific”. Martin Luther himself had a rather low opinion of Copernicus, whose ideas on the cosmos he heard about from other scholars in Wittenberg before De revolutionibus was published. Copernicus’s manuscript was patiently coaxed out of him by Georg Joachim Rheticus, a mathematician who was appointed at the University of Wittenberg by Luther’s right-hand man Philip Melanchthon. Rheticus brought the book back from Frombork for publication in Nuremberg. Yet Luther called Copernicus a fool who sought “to reverse the entire science of astronomy”. What, Luther scoffed, about the Biblical story in which Joshua commanded the sun – and not the earth – to stand still?
For him, religious faith trumped everything. If anyone dared suggest that articles of Christian faith defied reason, Luther would blast them with stuff like this: “The Virgin birth was unreasonable; so was the Resurrection; so were the Gospels, the sacraments, the pontifical prerogatives, and the promise of life everlasting.” Reason, he argued, “is the devil’s harlot”. It was hubris and blasphemy to suppose that one could decode God’s handiwork. Men should not understand; they should only believe.
Martin Luther wasn’t after all seeking to reform natural philosophy, but Christian theology. He had seen how the Roman church was corrupt: practising nepotism (especially in the infamous reign of the Borgia popes in the late fifteenth century), bewitching believers by intoning in a Latin that they didn’t understand, and making salvation contingent on the capricious authority of priests. Luther watched in dismay as the church raised funds by selling “indulgences”: documents guaranteeing the holder (or their relatives) time off the mild discomfort of Purgatory before being admitted to Heaven. Luther became convinced that salvation could be granted not by priests but by God alone, and that it was a private affair between God and believers that didn’t need the intervention of the clergy.
One particularly controversial issue (though it doesn’t seem terribly important to Catholic/Protestant tribal conflict today) was transubstantiation: the transformation of bread and wine into the body and blood of Christ in the ritual of communion. Luther rejected the scholastic account of how this happened, while still insisting that Christ was really present in the sacrament. His objection to Aristotle’s natural philosophy – usually dogmatically asserted at the universities – was not so much because he thought it was scientifically wrong but because it was used (some might say misused) to defend the Catholic view of transubstantiation.
After the fall
As far as the effects of Protestantism on science are concerned, Harrison warns that “any simple story is likely to be wrong.” For one thing, there was never a single Reformation. Protestantism took root in Luther’s Germany, then a mosaic of small kingdoms and city-states, where eventually the religious and political tensions boiled over into the devastating Thirty Years War of 1618-1648. But a separate religious revolt, sharing many of Luther’s convictions, took hold in the Swiss cantons from the 1520s, led by the reformers Ulrich Zwingli in Zurich and later John Calvin in Geneva. England’s break from the Roman church in the 1530s had quite different origins: Henry VIII, having denounced Luther in the 1520s, was piqued by being denied a papal annulment of his marriage to Catherine of Aragon, and he passed laws that led to the establishment of the Anglican church.
All of these movements had their own doctrines and politics, so it’s far too simplistic to portray Protestants as progressive and Catholics as repressive. Both sides saw radical new ideas in philosophy they didn’t like. But Catholics were more successful in suppressing them, says Henry, because they’d been around longer and had better-oiled machinery of censorship. “No doubt Luther and Calvin would have liked to have a similar set-up to the Inquisition and the Index [of banned books], but they just didn’t”, Henry says. “So a natural philosopher living in a Protestant country could get away with things that a philosopher living in a Catholic country could not.”
Galileo’s last book, the Discourses on Two New Sciences (1638), for example, was smuggled out of Italy to the Elzevirs, a Dutch printing family (the origin of the Elsevier name in publishing) in Protestant Leiden, who were free to publish it in the face of the Inquisition. “I’ve no doubt any number of Italian printers would have published it if they thought they’d get away with it”, says Henry. By the same token, the philosopher René Descartes, himself a good Catholic but unsettled by Galileo’s fate, stayed in the Netherlands rather than his native France when publishing his mechanistic theory of matter, which he knew could get him into trouble with the Inquisition because of what it might seem to imply for transubstantiation.
But if you think this supports the “progressive” reputation of Protestantism, consider the case of Spanish physician Michael Servetus, who discovered the pulmonary circulation of the blood from the right to left ventricle via the lungs. He was imprisoned in Catholic France for his supposedly heretical religious views, but he managed to escape and fled to Geneva, on his way to Italy. There the Calvinists decided he was a heretic too – Servetus had previously argued bitterly with Calvin on points of doctrine – and they burnt him at the stake.
Despite such outrages, Henry thinks that Luther and his followers did stimulate a wider questioning of authority – including that of the ancient natural philosophers. “Luther spoke of a priesthood of all believers, and encouraged every man to read the Bible for himself”, he says (for which reason Luther made a printed vernacular translation in German: see Box). “This does seem to have stimulated Protestant intellectuals to reject the authority of the ancient Greeks approved of by the Catholic church. So they began to read what was called ‘God’s other book’ – the book of nature – for themselves”.
“Protestants do seem to have contributed more to observational and empirical science,” Henry adds. Johannes Kepler, sometimes called the “Luther of astronomy”, was one such; his mentor Tycho Brahe was another. (Both men, however, served as court astronomers for the unusually tolerant Holy Roman Emperor Rudolf II in Prague.) Harrison agrees that the Reformation could have “promoted a questioning of traditional authorities in a way that opened up possibilities for new forms of knowledge or new institutions for the pursuit of knowledge”.
That questioning mixed science with religion, though. For Protestants, the problem with Aristotle wasn’t merely his outright, demonstrable errors about how the world works, but that, as a pre-Christian, he had failed to factor in the consequences of the Fall of Man. This left humankind with diminished moral, cognitive and sensory capacities. “It is impossible that nature could be understood by human reason after the fall of Adam”, Luther wrote.
Yet after such views were filtered through seventeenth-century Anglicanism, they left Robert Hooke concluding that what we need are scientific instruments such as the microscope to make up for our defects. Systematic science, in the view of Francis Bacon, whose ideas were central to the approach of the Royal Society, could be a corrective, letting us recover the understanding and mastery of the world enjoyed by Adam. It was unwise to place too much trust in our naïve senses: careful observation and reason, as well as questioning and scepticism, were needed to get past the “common sense” view that the sun circled the earth. “Genesis narratives of creation and Fall motivated scientific activity, which came to be understood as a redemptive process that would both restore nature and extend human dominion over it”, says Harrison.
Bacon’s view of a scientific utopia, sketched out in New Atlantis (1627), portrayed a society ruled by a brotherhood of scientist-priests who wrought fantastical inventions. This decidedly Protestant vision was nurtured in the court of Frederick V, Elector Palatine of the Rhine and head of the Protestant Union of German states, whose marriage to the daughter of James I of England cemented the alliance between England and the German Protestants. Frederick was offered the Bohemian crown by Protestant rebels in 1619, and when he was defeated by the Catholic Habsburgs the following year, some Protestant scholars fled to England. Among them was Samuel Hartlib, who published his own utopian blueprint in 1641. He befriended John Wilkins and other founders of the Royal Society, and like Bacon he imagined a scientific brotherhood dedicated to the pursuit of knowledge. Hartlib called it an Invisible College, the term that Robert Boyle later used for the incipient Royal Society. For the Anglican Boyle, scientific investigation was a religious duty: we had an obligation to understand the world God had made.
What’s God got to do with it?
Boyle’s view was shared by some of his contemporaries – like John Ray, sometimes called the father of English botany, who argued that every creature is evidence of God’s design. “The over-riding emphasis among Lutherans was the importance of God’s ‘Providence’ – foresight and planning – in creation”, says Kusukawa.
Yet much the same view can be found among Catholics too. “In the book of nature things are written in only one way”, wrote Galileo – and that way was “in the language of mathematics.” Some of the cardinals who condemned him would have gladly agreed.
So did the theological disagreements really matter much for science? Any differences between the two sides’ outlook on natural philosophy were “actually relatively trivial”, says Hannam. “If radical religious thinkers in both directions, as well as middle-of-the-road conformers like Galileo, are all united in being very important natural philosophers, it is hard to see how their particular religious beliefs have much relevance.” What they shared was more important than how they differed: namely, a belief in a universe made by a consistent God, governed by laws that let it run as smoothly as clockwork.
It’s not, then, science per se that’s at issue here, but authority. Galileo’s assertion that the Bible is not meant to be a book of natural philosophy was relatively uncontroversial to all but a few; today’s fundamentalism that denies evolution and the age of the earth is a peculiarly modern delusion. No one – not Copernicus, Galileo, Newton or Boyle – denied what Luther and the popes believed, which is that the ultimate authority lies with God. The arguments were about how best to represent and honour Him on earth, and not so much about the kind of earth He had made.
However you answer it, asking if the Reformation played a part in the birth of modern science shows that the interactions of science and religion in the past have been far more complex than a mutual antagonism. The Reformation and what followed from it make a mockery of the idea that the Christian religion is a fixed, monolithic and unquestioning entity in contrast to science’s perpetual doubt and questioning. There were broad-minded proto-scientists, as well as reactionaries, amongst both Protestants and Catholics. Perhaps it doesn’t much matter what belief system you have, so much as what you do with it.
Box: Information Revolutions
There’s plenty to debate about whether Martin Luther was more “modern” than his papal accusers, but he was certainly quick to grasp the potential of the printing press for spreading his message of religious reform. His German translation of the New Testament, printed in 1522, sold out within a month. His supporters printed pamphlets and broadsheets announcing Luther’s message of salvation through faith alone, and criticizing the corruption of Rome.
Johannes Gutenberg, a metalworker by trade in the German city of Mainz, may have thought up the idea of a press with movable type as early as the 1430s, but it wasn’t until the early 1450s that he had a working machine. Naturally, one of the first books he printed was the Bible – it was still very pricey, but much less so than the hand-copied editions that were the sole previous source. Thanks to court disputes about ownership of the press, Gutenberg never made a fortune from his invention. But others later did, and print publication was thriving by the time of the Reformation.
Historian Elizabeth Eisenstein argued in her 1979 book The Printing Press as an Agent of Change that, by allowing information to be spread widely throughout European culture, the invention of printing transformed society, enabling the Reformation, the Renaissance and the Scientific Revolution. It not only disseminated but standardized knowledge, Eisenstein said, and so allowed the possibility of scientific consensus.
David Wootton agrees that printing was an important factor in the emergence of science in the sixteenth and seventeenth centuries. “The printing press brought about an information revolution”, he says. “Instead of commenting on a few canonical texts, intellectuals learnt to navigate whole libraries of information. In the process they invented the modern idea of the fact: reliable information that could be checked and tested.”
If so, what might be the effect of the modern revolution in digital information, often compared to Gutenberg’s “disruptive” technology? Is it now destabilizing facts by making it so easy to communicate misinformation and “fake news”? Does it, in allegedly democratic projects like Wikipedia, challenge “old authorities”? Or is it creating a new hegemony, with Google and Facebook in place of the Encyclopedia Britannica and the scientific and technical literature – or, in an earlier age, of Aristotle and the church?
"I cannot and will not retract anything, since it is neither safe nor right to go against conscience. I cannot do otherwise. Here I stand, may God help me."
You have to admit he showed more guts than Galileo did a century later when the Catholic church insisted that he recant on his support for the heliocentric cosmos of Copernicus. Galileo, elderly and cowed by the veiled threat of torture, did what the cardinals ordered. The legend has it that he muttered “Still it [the earth] moves” as he rose from kneeling is probably apocryphal.
Yet is there any link between these two challenges to the authority of Rome? Protestantism was launched five hundred years ago this month when Luther, an Augustinian cleric, allegedly nailed his 95 “theses” to the church door in Wittenberg. Was it this theological revolution that turned the intellectual tide, ushering in the so-called Scientific Revolution that kicked off with Galileo?
Asking that question is, even now, a good way to spark an argument between historians. They don’t, in general, threaten one another with excommunication, the rack and the pyre – but the debate can still be as heated as arguments between Catholics and Protestants.
Yet it’s probably not only futile but also beside the point to ask who is right. The debate highlights how, at the dawn of early modern science, what people thought about the natural world was inflected by what they thought about tradition, knowledge and religion. It makes no sense to regard the Scientific Revolution as an invention of a few bright sparks, independent of other social and cultural forces. And while narratives with heroes and villains might make for good stories, they are usually bad history.
Disenchanting the world
The idea that science was boosted by Protestantism was fueled by a 1938 book by historian Robert Merton, who argued argued that an English strand of the religious movement called Puritanism helped foster science in England in the seventeenth century, such as the work of Isaac Newton and his peers at the Royal Society. “From the 1960s to the 80s, historians of science endlessly and inconclusively debated the Merton thesis”, says historian of science David Wootton of the University of York. “Those debates have fallen quiet, but the assumption is still widespread that Protestant religion and the new science were somehow inextricably intertwined.”
Merton’s idea tied in with a widespread perception of those early Protestants as progressive. That view in turn stemmed from early twentieth-century sociologist Max Weber’s argument that Western capitalism arose from the “Protestant work ethic”. In particular, says historian of science and religion Sachiko Kusukawa of the University of Cambridge, Weber proposed “what is now called the ‘disenchantment’ thesis – the idea that Protestants got rid of ‘superstition’”.
But this picture of Protestants as open forward-thinkers and Catholics as conservative, anti-science reactionaries is an old myth that has long been rejected by experts, says Kusukawa. She says this black-and-white view of the Catholic church was shaped by two 19th-century Americans with an agenda: educator Andrew Dickson White and chemist John William Draper. The Draper-White thesis, which presented science and religion as historical enemies, was consciously constructed by distorting history, and historians have been debunking it ever since.
In one view, then, religion was pretty irrelevant. “The Scientific Revolution would have gone ahead with or without the Reformation”, Wootton asserts. Historian and writer James Hannam agrees, saying that “evidence that the Reformation had any material effect on the rise of science is almost impossible to isolate from other effects.”
But historian Peter Harrison of the University of Queensland counters that “the Protestant Reformation was an important factor in the Scientific Revolution”. The Puritan religious values of some English Protestants, he says, “gave legitimacy to the scientific endeavour when it needed it.”
“No matter how much historical evidence and argumentation is brought in to oppose any such claims”, says historian of science John Henry of Edinburgh University, “there always remains an unshakable feeling that, after all, there really is something to the thesis that Protestantism stimulated scientific development.”
Questioning authority
The Draper-White “conflict thesis” between science and religion still has advocates today, especially among scientists. Evolutionary biologist Jerry Coyne has proposed that “if after the fall of Rome atheism [and not Christianity] had pervaded the Western world, science would have developed earlier and be far more advanced than it is now.”
Not only is this sheer speculation though; it also demands a highly selective view of the interactions between science and religion in history. It’s true that Christian worship was, for many people in the Renaissance, surrounded by what even priests of that time considered superstition. The communion host was believed to have magical healing powers, and the magic incantation “Hocus pocus” is suspected to be a corruption of the ecclesiastical Latin “Hoc est corpus meum”: This is my body.
But Catholic and Protestant theologians alike lamented this muddying of Christian doctrine by folk beliefs. Plenty of them saw no real conflict between their religious convictions and the study of the physical world. Some of the best astronomers in Italy in the early seventeenth century were Catholic Jesuits, such as the German Christopher Clavius and the Italian Orazio Grassi. What’s more, the church raised little objection to Nicolaus Copernicus’s book De revolutionibus, which challenged the earth-centred picture of the cosmos described by Ptolemy of Alexandria in the 2nd century AD, when it was published in 1543. Copernicus himself was a Catholic canon in Frombork (now in Poland), and he dedicated the book to the pope, Paul III.
Questioning the traditional knowledge about the world taught at the universities – the natural philosophy and of Aristotle, Hippocrates, Ptolemy, Galen and other ancient Greek and Roman writers – began well before the Reformation got underway. And that challenge was initiated largely in Italy, the seat of the Roman church, by late fifteenth-century scholars like Marsilio Ficino and Pico della Mirandola.
These men questioned whether something was true just because it was written in an old book. They and others started to argue that the most reliable way to get knowledge was from direct experience: to look for yourself. That view was supported by the sixteenth-century Swiss physician and alchemist Paracelsus, who even in his lifetime some called the “Luther of medicine”. Again it was in Italy that this recourse to experience – and ultimately to something resembling the idea of experiment – was often to be found. The physician Andreas Vesalius conducted human dissections (about which officials in Vesalius’s Padua were quite permissive) which led him to dispute Galen’s anatomy in his seminal 1543 book De humani corporis fabrica. In Naples in the 1550s, the polymath Giambattista della Porta began to experiment with lenses and optics, and he pretty much described the telescope well before Galileo, hearing of this instrument invented in Holland, made one to survey the heavens. Della Porta was persuaded in his old age to join the select group of young Italian natural philosophers called the Academy of Lynxes, of which Galileo also became a member.
It’s not hard, then, to build up a narrative of the emergence of science that makes barely any connection with the religious upheavals of the Reformation: leading from the Renaissance humanism of Ficino, through Vesalius to Galileo and early ‘scientific societies’, and culminating in the Scientific Revolution and the Royal Society in London, with luminaries like Robert Boyle, Robert Hooke and Isaac Newton whose discoveries are still used in science today.
But – this is history after all – it wasn’t that simple. “It would be remarkable if the tumultuous religious upheavals of the sixteenth century, and the subsequent schism between Catholics and Protestants, did not leave an indelible mark on an emerging modern science”, says Harrison. “So the real question is not whether these events influenced the rise of modern science, but how.”
Against reason
The Draper-White thesis relied on a caricature of history, particularly in regard to Galileo. The way the church treated him was surely appalling, but today historians recognize that a less provocative person might well have got away with publishing his heliocentric views. It didn’t help, for example, that the simpleton defending the old Ptolemaic universe in the book that caused all the trouble, Galileo’s Dialogue on the Two Chief World Systems (1632), was a thinly veiled portrait of (among others) the pope Urban VIII.
Not only did some Catholics study and support what we’d now call science – as various clerics had done throughout the Middle Ages – but there’s no reason to think that the Protestants were intrinsically more progressive or “scientific”. Martin Luther himself had a rather low opinion of Copernicus, whose ideas on the cosmos he heard about from other scholars in Wittenberg before De revolutionibus was published. Copernicus’s manuscript was patiently coaxed out of him by Georg Joachim Rheticus, a mathematician who was appointed at the University of Wittenberg by Luther’s righthand man Philip Melanchthon. Rheticus brought the book back from Frombork for publication in Nuremberg. Yet Luther called Copernicus a fool who sought “to reverse the entire science of astronomy”. What, Luther scoffed, about the Biblical story in which Joshua commanded the sun – and not the earth – to stand still?
For him, religious faith trumped everything. If anyone dared suggest that articles of Christian faith defied reason, Luther would blast them with stuff like this: “The Virgin birth was unreasonable; so was the Resurrection; so were the Gospels, the sacraments, the pontifical prerogatives, and the promise of life everlasting.” Reason, he argued, “is the devil’s harlot”. It was hubris and blasphemy to suppose that one could decode God’s handiwork. Men should not understand; they should only believe.
Martin Luther wasn’t after all seeking to reform natural philosophy, but Christian theology. He had seen how the Roman church was corrupt: practicing nepotism (especially in the infamous reign of the Borgia popes in the late fifteenth century), bewitching believers by intoning in a Latin that they didn’t understand, and making salvation contingent on the capricious authority of priests. Luther watched in dismay as the church raised funds by selling “indulgences”: documents guaranteeing the holder (or their relatives) time off the mild discomfort of Purgatory before being admitted to Heaven. Luther became convinced that salvation could be granted not by priests but by God alone, and that it was a private affair between God and believers that didn’t need the intervention of the clergy.
One particularly controversial issue (though it doesn’t seem terribly important to Catholic/Protestant tribal conflict today) was transubstantiation: the transformation of bread and wine into the body and blood of Christ in the ritual of communion. Luther maintained that this was largely a symbolic transformation, not a literal one. His objection to Aristotle’s natural philosophy – usually dogmatically asserted at the universities – was not so much because he thought it was scientifically wrong but because it was used (some might say misused) to defend the Catholic view of transubstantiation.
After the fall
As far as the effects of Protestantism on science are concerned, Harrison warns that “any simple story is likely to be wrong.” For one thing, there was never a single Reformation. Protestantism took root in Luther’s Germany, then a mosaic of small kingdoms and city-states, where eventually the religious and political tensions boiled over into the devastating Thirty Years War of 1618-1648. But a separate religious revolt, sharing many of Luther’s convictions, happened in the Swiss cantons in the 1530s led by the reformers Ulrich Zwingli in Zurich and Jean Calvin in Geneva. England’s break from the Roman church in the same decade had quite different origins: Henry VIII, having denounced Luther in the 1520s, was piqued by being denied a papal divorce from Catherine of Aragon, and he passed laws that led to the establishment of the Anglican church.
All of these movements had their own doctrines and politics, so it’s far too simplistic to portray Protestants as progressive and Catholics as repressive. Both sides saw radical new ideas in philosophy they didn’t like. But Catholics were more successful in suppressing them, says Henry, because they’d been around longer and had a more well-oiled machinery of censorship. “No doubt Luther and Calvin would have liked to have a similar set-up to the Inquisition and the Index [of banned books], but they just didn’t”, Henry says. “So a natural philosopher living in a Protestant country could get away with things that a philosopher living in a Catholic country could not.”
Galileo’s Dialogue, for example, was smuggled to the Elzevirs, a Dutch printing family (the origin of Elsevier Publishing) in Protestant Amsterdam, who were free to publish it in the face of the Inquisition. “I’ve no doubt any number of Italian printers would have published it if they thought they’d get away with it”, says Henry. By the same token, the philosopher René Descartes, himself a good Catholic but unsettled by Galileo’s fate, moved from France to the Netherlands before publishing his ideas on atomism, which he knew would get him into trouble with the Inquisition because of what it might seem to imply for transubstantiation.
But if you think this supports the “progressive” reputation of Protestantism, consider the case of Spanish physician Michael Servetus, who discovered the pulmonary circulation of the blood from the right to left ventricle via the lungs. He was imprisoned in Catholic France for his supposedly heretical religious views, but he managed to escape and fled to Geneva, on his way to Italy. There the Calvinists decided he was a heretic too – Servetus had previously argued bitterly with Calvin on points of doctrine – and they burnt him at the stake.
Despite such outrages, Henry thinks that Luther and his followers did stimulate a wider questioning of authority – including that of the ancient natural philosophers. “Luther spoke of a priesthood of all believers, and encouraged every man to read the Bible for himself”, he says (for which reason Luther made a printed vernacular translation in German: see Box). “This does seem to have stimulated Protestant intellectuals to reject the authority of the ancient Greeks approved of by the Catholic church. So they began to read what was called ‘God’s other book’ – the book of nature – for themselves”.
“Protestants do seem to have contributed more to observational and empirical science,” Henry adds. Johannes Kepler, sometimes called the “Luther of astronomy”, was one such; his mentor Tycho Brahe was another. (Both men, however, served as court astronomers for the unusually tolerant Holy Roman Emperor Rudolf II in Prague.) Harrison agrees that the Reformation could have “promoted a questioning of traditional authorities in a way that opened up possibilities for new forms of knowledge or new institutions for the pursuit of knowledge”.
That questioning mixed science with religion, though. For Protestants, the problem with Aristotle wasn’t merely his outright, demonstrable errors about how the world works, but that, as a pre-Christian, he had failed to factor in the consequences of the Fall of Man. This left humankind with diminished moral, cognitive and sensory capacities. “It is impossible that nature could be understood by human reason after the fall of Adam”, Luther wrote.
Yet after such views were filtered through seventeenth-century Anglicanism, they left Robert Hooke concluding that what we need are scientific instruments such as the microscope to make up for our defects. Systematic science, in the view of Francis Bacon, whose ideas were central to the approach of the Royal Society, could be a corrective, letting us recover the understanding and mastery of the world enjoyed by Adam. It was unwise to place too much trust in our naïve senses: careful observation and reason, as well as questioning and skepticism, were needed to get past the “common sense” view that the sun circled the earth. “Genesis narratives of creation and Fall motivated scientific activity, which came to be understood as a redemptive process that would both restore nature and extend human dominion over it”, says Harrison.
Bacon’s view of a scientific utopia, sketched out in New Atlantis (1627), portrayed a society ruled by a brotherhood of scientist-priests who wrought fantastical inventions. This was decidedly Protestant vision was nurtured in the court of Frederick V, Elector Palatine of the Rhine and head of the Protestant Union of German states, whose marriage to the daughter of James I of England cemented the alliance between England and the German Protestants. Frederick was offered the Bohemian crown by Protestant rebels in 1619, and when he was defeated by the Catholic Hapsburgs of Spain the following year, some Protestant scholars fled to England. Among them was Samuel Hartlib, who published his own utopian blueprint in 1641. He befriended John Wilkins and other founders of the Royal Society, and like Bacon he imagined a scientific brotherhood dedicated to the pursuit of knowledge. Hartlib called it an Invisible College, the term that Robert Boyle later used for the incipient Royal Society. For the Anglican Boyle, scientific investigation was a religious duty: we had an obligation to understand the world God had made.
What’s God got to do with it?
Boyle’s view was shared by some of his contemporaries – like John Ray, sometimes called the father of English botany, who argued that every creature is evidence of God’s design. “The over-riding emphasis among Lutherans was the importance of God’s ‘Providence’ – foresight and planning – in creation”, says Kusukawa.
Yet much the same view can be found among Catholics too. “In the book of nature things are written in only one way”, wrote Galileo – and that way was “in the language of mathematics.” Some of the cardinals who condemned him would have gladly agreed.
So did the theological disagreements really matter much for science? Any differences between the two sides’ outlook on natural philosophy were “actually relatively trivial”, says Hannam. “If radical religious thinkers in both directions, as well as middle-of-the-road conformers like Galileo, are all united in being very important natural philosophers, it is hard to see how their particular religious beliefs have much relevance.” What they shared was more important than how they differed: namely, a belief in a universe created by a consistent God who created laws that let it run as smooth as clockwork.
It’s not, then, science per se that’s at issue here, but authority. Galileo’s assertion that the Bible is not meant to be a book of natural philosophy was relatively uncontroversial to all but a few; today’s fundamentalism that denies evolution and the age of the earth is a peculiarly modern delusion. No one – not Copernicus, Galileo, Newton or Boyle – denied what Luther and the popes believed, which is that the ultimate authority lies with God. The arguments were about how best to represent and honour Him on earth, and not so much about the kind of earth He had made.
However you answer it, asking if the Reformation played a part in the birth of modern science shows that the interactions of science and religion in the past have been far more complex than a mutual antagonism. The Reformation and what followed from it makes a mockery of the idea that the Christian religion is a fixed, monolithic and unquestioning entity in contrast to science’s perpetual doubt and questioning. There were broad-minded proto-scientists, as well as reactionaries, amongst both Protestants and Catholics. Perhaps it doesn’t much matter what belief system you have, so much as what you do with it.
Box: Information Revolutions
There’s plenty to debate about whether Martin Luther was more “modern” than his papal accusers, but he sure caught on quickly to the possibility of the printing press for spreading his message of religious reform. His German translation of the New Testament, printed in 1522, sold out within a month. His supporters printed pamphlets and broadsheets announcing Luther’s message of salvation through faith alone, and criticizing the corruption of Rome.
Johannes Gutenberg, a metalworker by trade in the German city of Mainz, may have thought up the idea of a press with movable type as early as the 1430s, but it wasn’t until the early 1450s that he had a working machine. Naturally, one of the first books he printed was the Bible – it was still very pricey, but much less so than the hand-copied editions that were the sole previous source. Thanks to court disputes about ownership of the press, Gutenberg never made a fortune from his invention. But others later did, and print publication was thriving by the time of the Reformation.
Historian Elizabeth Eisenstein argued in her 1979 book The Printing Press as an Agent of Change that, by allowing information to be spread widely throughout European culture, the invention of printing transformed society, enabling the Reformation, the Renaissance and the Scientific Revolution. It not only disseminated but standardized knowledge, Eisenstein said, and so allowed the possibility of scientific consensus.
David Wootton agrees that printing was an important factor in the emergence of science in the sixteenth and seventeenth centuries. “The printing press brought about an information revolution”, he says. “Instead of commenting on a few canonical texts, intellectuals learnt to navigate whole libraries of information. In the process they invented the modern idea of the fact: reliable information that could be checked and tested.”
If so, what might be effect of the modern revolution in digital information, often compared to Gutenberg’s “disruptive” technology? Is it now destabilizing facts by making it so easy to communicate misinformation and “fake news”? Does it, in allegedly democratic projects like Wikipedia, challenge “old authorities”? Or is it creating a new hegemony, with Google and Facebook in place of the Encyclopedia Britannica and the scientific and technical literature – or, in an earlier age, of Aristotle and the church?
Thursday, September 14, 2017
Bright Earth in China
This is the introduction to a forthcoming Chinese edition of my book Bright Earth.
________________________________________________________
I have seen the Great Wall, the Forbidden City, Hangzhou’s wondrous West Lake, the gardens of Suzhou and the ancient waterworks of Dujiangyan in Sichuan. But somehow my travels in China have never yet brought me to Xi’an to see the tomb of the First Emperor Qin Shi Huangdi and his Terracotta Army. It is most certainly on my list.
But I know that none of us now can ever see the ranks of clay soldiers in their full glory, because the paints that once adorned them have long since flaked off the surface. As I say in Bright Earth of the temples and statues of ancient Greece, they leave us with the impression that the ancient world was more drab than was really the case. These statues were once brightly coloured, as we know from archaeological work on the excavations at Xi’an – for a few fragments of the pigments still adhere to the terracotta.
Some of these pigments are familiar from elsewhere in the ancient world. Red cinnabar, for example – the mineral form of mercury sulfide – is found throughout Asia and the Middle East during the period of the Qin and Han dynasties. Cinnabar was plentiful in China: Shaanxi alone contains a fifth of the country’s reserves, and it was mined for use not just in pigments but in medicines too. Chinese legend tells of one Huang An, who prolonged his life for at least 10,000 years by eating cinnabar, and Qin Shi Huangdi was said to have consumed wine and honey laden with the mineral in the hope of a similar longevity. (Some historians have speculated that it might instead have hastened his death, for it is never a good idea, of course, to ingest mercury.) According to the Han historian Sima Qian, the First Emperor’s tomb contained a scale model of his empire with rivers made of mercury – possibly from the ancient mines in Xunyang county in southern Shaanxi.
But some of the pigments on the Terracotta Army are unique to China. This is hardly surprising, since it is widely acknowledged now that chemistry in ancient China – alchemy, as it was then – was a sophisticated craft, used to make a variety of medicines and other substances for daily life. This was true also of ancient Egypt, where chemistry produced glass, cosmetics, ointments and colours for artists. One of the most celebrated colours of the Egyptians is now simply called Egyptian blue, and as Bright Earth explains, it is probably an offshoot of glass-making. It is a blue silicate material, its tint conferred by the element copper. China in the Qin and Han periods, and earlier during the Warring States period of around 479-221 BC, did not use Egyptian blue, but had its own version, now known as Han blue or (because after all it predates the Han) simply Chinese blue. Whereas Egyptian blue is a calcium copper silicate, Chinese blue replaces the calcium with the element barium.
The ancient Chinese chemists also discovered that, during the production of this blue pigment, they could create a purple version, which has the same chemical elements but combined in somewhat different ratios. That was a real innovation, because purple pigments have been hard to make throughout the history of the “invention of colour” – and in the West there was no good, stable purple pigment until the nineteenth century. Even more impressively, Chinese purple contains two copper atoms linked by a chemical bond, making it – of course, the makers had no knowledge of this – the earliest known synthetic substance with such a so-called “metal-metal bond”, a unit of great significance to modern chemists.
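To make those comparisons concrete, it may help to set the standard modern formulas of the three pigments side by side (the subscript numbers simply count the atoms of each element in the compound):

Egyptian blue: CaCuSi4O10 (calcium copper silicate)
Chinese (Han) blue: BaCuSi4O10 (barium copper silicate)
Chinese (Han) purple: BaCuSi2O6 (the same elements as Chinese blue, in different proportions)

The two blues differ only in the swap of barium for calcium, while the purple alters the proportions of silicon and oxygen relative to copper – and it is in this altered crystal structure that pairs of copper atoms sit close enough together to form the metal-metal bond.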
Although I’ve not seen the Terracotta Army, several years ago I visited Professor Heinz Berke of the University of Zurich in Switzerland, who has worked on analyzing their remaining scraps of pigment. Heinz was kind enough to give me a sample of the Chinese blue pigment that he had made in his laboratory; I have it in front of me now as I write these words. “The invention of Chinese Blue and Chinese Purple”, Heinz has written, “is an admirable technical-chemical feat [and an] excellent example of the positive influence of science and technology on society.”
My sample of modern Chinese blue, made by Heinz Berke (Zurich).
You can perhaps see, then, why I am so delighted by the publication of a Chinese edition of Bright Earth – for it combines three of my passions: chemistry, colour and China. I always regretted that I was not able to say more in the book about art outside the West, but perhaps one day I shall have the resolve to attempt it. The invention of colour in China has a rather different narrative, not least because the tradition of landscape painting – shanshuihua – places less emphasis on colour and more on form, composition and the art of brushwork. Yet that tradition has captivated me since my youth, and it played a big part in inducing me to begin exploring China in 1992. This artistic tradition, of course, in no way lessened the significance of colour in Chinese culture; it was, after all, an aspect of the correspondences attached to the system of the Five Elements (wu xing). And one can hardly visit China without becoming aware of the vibrancy of colour in its traditional culture, not least in the glorious dyes used for silk. I hope and trust, therefore, that Bright Earth will find plenty of resonance among Chinese readers.
Li Gongnian (c.1120), Winter Evening Landscape, and detail.
________________________________________________________
I have seen the Great Wall, the Forbidden City, Hangzhou’s wondrous West Lake, the gardens of Suzhou and the ancient waterworks of Dujiangyan in Sichuan. But somehow my travels in China have never yet brought me to Xi’an to see the tomb of the First Emperor Qin Shi Huangdi and his Terracotta Army. It is most certainly on my list.
But I know that none of us now can ever see the ranks of clay soldiers in their full glory, because the paints that once adorned them have long since flaked off the surface. As I say in Bright Earth of the temples and statues of ancient Greece, they leave us with the impression that the ancient world was more drab than was really the case. These statues were once brightly coloured, as we know from archaeological work on the excavations at Xi’an – for a few fragments of the pigments still adhere to the terracotta.

Some of these pigments are familiar from elsewhere in the ancient world. Red cinnabar, for example – the mineral form of mercury sulfide – is found throughout Asia and the Middle East during the period of the Qin and Han dynasties. Cinnabar was plentiful in China: Sha’anxi alone contains a fifth of the country’s reserves, and it was mined for use not just in pigments but in medicines too. Chinese legend tells of one Huang An, who prolonged his life for at least 10,000 years by eating cinnabar, and Qin Shi Huangdi was said to have consumed wine and honey laden with the mineral, thinking it would prolong his life. (Some historians have speculated that it might instead have hastened his death, for it is never a good idea, of course, to ingest mercury.) According to the Han historian Sima Qian, the First Emperor’s tomb contained a scale model of his empire with rivers made of mercury – possibly from the ancient mines in Xunyang county in southern Sha’anxi.
But some of the pigments on the Terracotta Army are unique to China. This is hardly surprising, since it is widely acknowledged now that chemistry in ancient China – alchemy, as it was then – was a sophisticated craft, used to make a variety of medicines and other substances for daily life. This was true also of ancient Egypt, where chemistry produced glass, cosmetics, ointments and colours for artists. One of the most celebrated colours of the Egyptians is now simply called Egyptian blue, and as Bright Earth explains, it is probably an offshoot of glass-making. It is a blue silicate material, its tint conferred by the element copper. China in the Qin and Han periods, and earlier during the Warring States period of around 479-221 BC, did not use Egyptian blue, but had its own version, now known as Han blue or (because after all it predates the Han) simply Chinese blue. Whereas Egyptian blue has the chemical name calcium copper silicate, Chinese blue substitutes barium for the calcium.
The ancient Chinese chemists also discovered that, during the production of this blue pigment, they could create a purple version, which has the same chemical elements but combined in somewhat different ratios. That was a real innovation, because purple pigments have been hard to make throughout the history of the “invention of colour” – in the West there was no good, stable purple pigment until the nineteenth century. Even more impressively, Chinese purple contains pairs of copper atoms linked by a chemical bond, making it – of course, the makers had no knowledge of this – the earliest known synthetic substance with such a so-called “metal-metal bond”, a unit of great significance to modern chemists.
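(To put the chemistry in shorthand – these are the standard formulas, which the essay itself doesn’t spell out: Egyptian blue is CaCuSi4O10, Chinese blue is BaCuSi4O10, and Chinese purple is BaCuSi2O6, the last of these containing the copper–copper pairs just mentioned.)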
Although I’ve not seen the Terracotta Army, several years ago I visited Professor Heinz Berke of the University of Zurich in Switzerland, who has worked on analyzing their remaining scraps of pigment. Heinz was kind enough to give me a sample of the Chinese blue pigment that he had made in his laboratory; I have it in front of me now as I write these words. “The invention of Chinese Blue and Chinese Purple”, Heinz has written, “is an admirable technical-chemical feat [and an] excellent example of the positive influence of science and technology on society.”

My sample of modern Chinese blue, made by Heinz Berke (Zurich).
You can perhaps see, then, why I am so delighted by the publication of a Chinese edition of Bright Earth – for it combines three of my passions: chemistry, colour and China. I always regretted that I was not able to say more in the book about art outside the West, but perhaps one day I shall have the resolve to attempt it. The invention of colour in China has a rather different narrative, not least because the tradition of landscape painting – shanshuihua – places less emphasis on colour and more on form, composition and the art of brushwork. Yet that tradition has captivated me since my youth, and it played a big part in inducing me to begin exploring China in 1992. This artistic tradition, of course, in no way lessened the significance of colour in Chinese culture; it was, after all, an aspect of the correspondences attached to the system of the Five Elements (wu xing). And one can hardly visit China without becoming aware of the vibrancy of colour in its traditional culture, not least in the glorious dyes used for silk. I hope and trust, therefore, that Bright Earth will find plenty of resonance among Chinese readers.

Li Gongnian (c.1120), Winter Evening Landscape, and detail.
Wednesday, September 13, 2017
On being patriotic
It’s interesting now to go back (such a long time ago!) and read the White Paper released by the UK government in February on exiting the EU. It beggared belief then; it still does now.
Here is Theresa May, in her barely literate foreword, on the national sentiment: “after all the division and discord, the country is coming together.” It is hard to know which is worse: that she might genuinely think this is so, or that she knows it is not. Either way, she and her supporters have been assiduous in their efforts to prevent it.
An illustration of that is Norman Lamont’s article in the latest issue of Prospect. That it is moderately and elegantly worded makes it no less contemptible.
Lamont wants to perpetuate the picture of a “cosmopolitan elite” sneering at the cheap patriotism of the masses: “my cosmopolitanism is superior to your parochialism”. “Many intellectuals”, he says, “sneer at patriotism.”
So there’s your choice (once again): get behind Brexit and be a patriot, or oppose it and be unpatriotic. Loyalty to country (and thereby to “democracy”), or loyalty to the EU: it’s one or the other.
This is frankly repulsive. It is the choice of the Daily Mail. Lamont quotes Siegfried Sassoon: “We write our lines out of our bones and out of the soil our forefathers cultivated.” Sassoon was of course writing in another time, before the nationalistic notion of a “blood and soil” fatherland had the connotations it does today. Lamont is writing now, and that he can unproblematically invoke a quote like this gives us an idea of the kind of sensibility we’re dealing with.
What Lamont illustrates, though, is well documented human behaviour: you must prove your in-group loyalty not merely by statement of it but by active and perhaps destructive rejection of the Other. Robert Sapolsky talks about it in his excellent new book Behave. In the extreme cases, gangland members establish their fealty by executing a rival, and child soldiers are trained to show their allegiance to the movement by their willingness to kill family members. And they do.
It is Us or Them, and nothing else. The notion that feelings of kinship can be multiple and overlapping seems alien to Lamont. The European identity, he asserts, is “extremely shallow.” He can speak for himself, but he has no business intimating that he speaks for us all. I share a sense of identity with Londoners (defying the ugly political climate that washes all around us in southern England). I share it with my local community, and with Englishness (yes! – more of this below), and Britishness, and also with Europeans, and with all of humanity. These feelings of kinship have different complexions in each case, but I don’t feel conflicted by them. The customs, languages, histories of Europeans are varied, and of course there are big differences between nations, just as there are between national regions. But there is also a great deal of shared culture and history, and when I am in (say) the Czech Republic I feel I am still in some sense within a familiar “homeland” that I can’t claim to feel in Tokyo. How small and withered Lamont’s sense of belonging must be if it is invoked only in the Cotswolds and not in Paris or Prague.
It’s a peculiarity of the Brexit vote that it has deepened my love of England and Britain. Deepened, that is, those aspects of it that I value all the more for seeing them vanish: the tolerance, good humour, open-mindedness, invention. More than once my partner and I have discussed emigrating to escape the poison and political apathy that have overtaken these isles. And each time I’ve resisted the idea because there is so much here that I love.
Shortly after the referendum, I found a copy of Defoe’s A Tour Thro’ the Whole Island of Great Britain abandoned on a wall on my way home, and picked it up – and felt immense sadness that the Union, still finding its feet in Defoe’s day, might soon be shattered. If that looks a bit less likely now, it is because the people of Scotland have more sense than those of England in letting economic interests be a part of decisions about trans-national partnership. All the same, Defoe’s book sharpened the pangs of feeling that my home nation has lost something it may never recover in my lifetime.
Lamont implies that we pro-Europeans (so he’s anti-European then?) don’t accuse Scottish nationalists of being xenophobic in the way we do English nationalists. Could this be because Scotland doesn’t seem to have a problem with the rest of Europe, perhaps? There is plenty of ahistorical sentimentality in Scottish nationalism, as well as some kneejerk (as opposed to totally understandable) anti-English sentiment. But can anyone blame Scotland for resenting the fact that, having voted overwhelmingly to stay in the EU, it is being dragged out of it by the English?
Lamont talks about a “fusion” of national identity into the “larger whole” of the EU. This is utter nonsense. Does anyone believe that France, Germany, Spain, Italy, have lost one whit of their national identity? Nor have they ceded their sovereign power. Which brings us to that notorious line in the White Paper: “Whilst Parliament has remained sovereign throughout our membership of the EU, it has not always felt like that.” Here’s the factual content of that sentence: “Parliament has remained sovereign throughout our membership of the EU.” The rest is intangible and subjective sentiment. It hasn’t felt like it to whom? In what way? But “feeling”, without any tangible benefit, is all Lamont can offer for a position that looks daily more catastrophic.
His yoking of Brexit to the health of democracy is particularly shameful. “To weaken the nation state is to weaken democracy,” he says. But if “Parliament has remained sovereign throughout our membership of the EU”, how has our nation state been weakened? Well, maybe it hasn’t. But maybe, to Lamont, it just felt like that. What does weaken a nation state is the collapse of its currency, the loss of trade deals, and the blind pursuit of an extremely divisive national policy, complete with a determination to label the half of the nation who didn’t want it traitors and “saboteurs”. What weakens democracy is to invoke ancient clauses that allow ministers to bypass parliament, and to attempt to impose on parliament decisions that the courts have, now on three occasions, ruled as incompatible with the laws of the nation, some of them put in place in the wake of the Civil War in order to ensure good and stable governance. What weakens democracy is a parliament so craven that it will not even stand up for its own rights (and obligations) on such occasions, making it necessary for a brave private citizen to do so. What weakens a nation is a government that refuses to condemn press attacks on the very legitimacy of the judiciary. What weakens a nation is to portray doubts about the wisdom of a course of action as a lack of patriotism.
If this is the best defence of Brexit that Lamont can offer, we are truly shafted.
If it is patriotic to take pride in the values that our neighbours used to praise us for, and to want to see one’s country economically healthy and presenting a confident face to the world, willing to engage in international institutions, then I am a patriot. But for Lamont, as for German nationalists in 1933, it seems that patriotism demands a rejection of internationalism: we can only “be ourselves” by rejecting Others. So if patriotism now demands that one endorse the pursuit – in the most shambolic manner imaginable – of a course that is isolationist, financially and economically damaging (potentially ruinous) and, to most outside observers beyond the far-right fringes, a barely comprehensible act of self-harm, all on the basis of sentimentality about what “feels” right, then you will have to count me out.
Wednesday, August 16, 2017
Teleportation redux
I’ve had an illuminating discussion with Mateus Araujo after he blogged about my piece for Nature on quantum teleportation. Mateus pointed out my error in suggesting that the protocol requires (rather than merely permits) the teleported state to be unknown, for which I’m very grateful. I’ve amended the story to put this right.
This doesn’t mean Mateus is now happy about all of the article – and given our (friendly and useful) exchange, I didn’t think he would be! He was also concerned that I was perpetuating the notion of some kind of magical, superluminal transmission of information between the original and target particles in teleportation. I was puzzled by that, because part of the point of the piece was to call out that very misconception. But then it transpired that Mateus’s concern stemmed from this paragraph:
“A common view is that quantum teleportation is a new way of transmitting information: a kind of high-speed quantum Wi-Fi. What’s amazing about it is that the quantum ‘information’ is ‘sent’ instantaneously — faster than light — because that is how two entangled particles communicate.”
I explained to Mateus that I had imagined this statement would be obviously intended ironically: that “amazing” refers to the breathless visions of the “common view”. It seems Mateus still feels I’m claiming that “something unobservable is going on faster than light, and that there is some kind of conspiracy by Nature to cover that up.” I’m not sure if Mateus thinks I still believe this, or simply that my piece still implies it regardless. So to be clear: I don’t think anything of the sort is happening at all. It seems to me that the causal language in which something here (Alice’s measurement) influences something there (the state of Bob’s particle) is precisely the wrong one to describe quantum entanglement. (I agree with David Mermin’s comment that entanglement presents us with correlations for which there is no “explanation”.)
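For anyone who would like to see all this in the mathematics rather than in metaphors, here is a minimal numpy sketch of the standard protocol – my own toy illustration, nothing from the Nature piece itself. It makes both points at once: with Alice’s two classical bits, Bob recovers the state exactly in every branch; without them, his qubit is just the maximally mixed state, whatever was teleported – so nothing observable arrives anywhere faster than light.

import numpy as np

# Pauli operators and the identity
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# A random "unknown" single-qubit state |psi> = a|0> + b|1>
rng = np.random.default_rng(0)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

# The entangled pair shared by Alice and Bob: (|00> + |11>)/sqrt(2)
pair = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

# Three-qubit state: qubit 1 holds psi (Alice); qubits 2 and 3 are the pair
state = np.kron(psi, pair)

# Alice's four Bell-basis outcomes, each with the Pauli correction
# that Bob applies once her two classical bits reach him
outcomes = {
    "Phi+": (np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2), I2),
    "Phi-": (np.array([1, 0, 0, -1], dtype=complex) / np.sqrt(2), Z),
    "Psi+": (np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2), X),
    "Psi-": (np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2), Z @ X),
}

rho_bob = np.zeros((2, 2), dtype=complex)  # Bob's state before the phone call
for name, (b, correction) in outcomes.items():
    proj = np.kron(b.conj(), I2)      # <Bell| on qubits 1 and 2, identity on 3
    branch = proj @ state             # Bob's (unnormalized) state in this branch
    p = np.vdot(branch, branch).real  # Born-rule probability: always 1/4
    bob = branch / np.sqrt(p)
    rho_bob += p * np.outer(bob, bob.conj())
    fidelity = abs(np.vdot(psi, correction @ bob)) ** 2
    print(f"{name}: p = {p:.2f}, fidelity after correction = {fidelity:.6f}")

# Averaged over the outcomes Alice has not yet communicated, Bob holds I/2:
# the maximally mixed state, which carries no information about psi at all
print(np.round(rho_bob, 6))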
The misreading does raise the issue of whether rhetorical devices like irony should be used in science writing, where in general one tries to be as clear as possible (often to an audience whose first language is not English). It’s a question I do ponder over, and I’ll be writing about it soon for Chemistry World. I’ll say here only that I have been surprised (and not a little alarmed) before to discover that some scientists (not Mateus, however) seem unaware that such devices even exist. A little slice of Two Cultures Pie, perhaps?
Mateus originally stated that I also invoked in my article the “discredited notion of wavefunction collapse”. I made no reference to that at all, but he argues that it was implicit in the discussion: the idea of Alice’s actions having any kind of instantaneous consequence – perhaps better to say, instantaneous implication – for Bob’s particle requires wavefunction collapse. It was not my intention to do so, and perhaps by disavowing any sort of “instantaneous information transfer” in a physical sense I clear that up. But at any rate, to say that the notion of wavefunction collapse is now “discredited” (Mateus has suggested that no serious scientist now talks in those terms) is simply wrong. Indeed, the book chapter on which my article on the “measurement problem” (which Mateus liked) was based was taken to task by another expert who decided that (my actual text to the contrary) it claimed to do away with collapse altogether.
The idea that decoherence completely “replaces” wavefunction collapse is not mainstream at all. Even some of those who think that it can – such as Roland Omnès, who says collapse can now be seen as “a convenience, not a necessity” – are careful to point out that decoherence doesn’t clear up everything. For it doesn’t explain the uniqueness of measurement outcomes. That, says Omnès, still needs to be added in what is essentially an axiomatic manner: unique facts exist.
So on the one hand I think Mateus’s comment here is an example of a more general tendency I’ve noticed among folks who think deeply about quantum foundations: to suggest not only that their preferred interpretation is the only one that makes sense, but that it is the only one taken seriously at all.
On the other hand, this suggestion that “wavefunction collapse” is an obsolete idea raises the interesting question of what we mean by it in the first place. Some, of course – like Roger Penrose – think that this collapse is an actual physical process, just as decoherence is. And it is, moreover, a process that breaks the unitary behaviour rigorously obeyed by the Schrödinger equation (and which poses such dilemmas in connecting the theory to experience). This is not a mainstream view either, but it is a respectable one, pursued by some leading scientists – for it has the advantage, perhaps uniquely among quantum “interpretations”, of being empirically testable in principle.
For the likes of Bohr, wavefunction collapse was indeed a vague and problematic notion (or so it seems to me, though I’m wary of saying anything about what Bohr meant). But most physicists who talk about it don’t see it as some physical process; rather, it is a part of the mathematical formulation of quantum theory. It is, moreover, a necessary part: it amounts to applying the Born rule in order to make probabilistic predictions about the outcomes of experiments, and so it is needed to make any kind of connection at all between the theory and empirical experience. In other words, it’s a mathematical process. Some don’t think it is very helpful to talk about that process in the “physicalist” language of collapse; others have no problem in doing so. Everettians claim that the Many Worlds view does away with collapse altogether, but then they face the problem of explaining why we need the Born rule to actually use quantum mechanics to make predictions. (Some, like Sean Carroll here, have claimed to derive the Born rule within the Many Worlds framework using decision-theoretic arguments, but this requires one to invoke the notion of a rational observer with a well-defined conscious experience that is continuous in time, which seems to be precisely what the Everettian view renders incoherent, unless it is simply imposed by fiat.)
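To spell out what that mathematical process amounts to, here is the textbook projection postulate as a little numpy sketch of my own – nothing in it is specific to any interpretation: an outcome is drawn with the Born-rule probability, and the state is then updated accordingly.

import numpy as np

def born_measure(psi, projectors, rng=None):
    # Born rule: outcome i occurs with probability <psi|P_i|psi>;
    # the state is then updated ("collapses") to P_i|psi>, renormalized
    if rng is None:
        rng = np.random.default_rng()
    probs = np.array([np.vdot(psi, P @ psi).real for P in projectors])
    i = rng.choice(len(projectors), p=probs)
    post = projectors[i] @ psi
    return i, post / np.linalg.norm(post)

# Example: measure |+> = (|0> + |1>)/sqrt(2) in the computational basis
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
P0 = np.diag([1, 0]).astype(complex)
P1 = np.diag([0, 1]).astype(complex)
outcome, post = born_measure(plus, [P0, P1])
print(outcome, np.round(post, 3))  # 0 or 1, each with probability 1/2

Whether that final update describes a physical event or merely a bookkeeping step is exactly what the interpretations disagree about.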
So wavefunction collapse hasn’t gone away – though how nice it would be if we could make it do so.
Monday, March 27, 2017
Do you believe in miracles?
Tristan Casabianca has kindly drawn my attention to an article he published last year which discussed the case for the authenticity of the Turin Shroud – by which I mean the claim that it is not just an artifact made during the period traditionally ascribed to the life of Jesus Christ but that it was the cloth used to wrap his body between the Crucifixion and the Resurrection. It’s a thoughtful and provocative article, but I don’t agree with very much of it.
I won’t go into the evidence for and against the age and provenance of the Shroud here (see here instead). Suffice to say, it is still hotly contested, with several researchers arguing that the radiocarbon dating performed in 1988, which placed the Shroud in the 13th-14th century, was flawed in some respect or another. I’ve not seen convincing evidence to doubt that very careful study, but I do wish it could be repeated. I also think, however, that, based on the evidence we have to date, it is very hard to understand how the image of a bearded man was formed on the linen. It doesn’t seem to be painted on. It’s a deeply intriguing, tantalizing question. In the interests of full disclosure, I don’t believe that Jesus of Nazareth was the resurrected son of God, and I find it extremely unlikely that this artifact which turned up in 14th-century France had anything to do with him. But that is just my opinion.
Casabianca’s article is concerned not so much with weighing up the arguments as with establishing the framework within which we should think about them. In particular, he takes issue with my comment in a 2008 column in Nature Materials that “the two attributes central to the shroud’s alleged religious significance – that it wrapped the body of Jesus, and is of supernatural origin – are precisely those neither science nor history can ever prove.” Casabianca in effect asks: really? Ever?
And in this much he is right: saying such and such can never happen is, when viewed philosophically, a contentious claim. It amounts to ruling out possibilities that we can’t be sure of. To take an extreme example: we might say that time travel contravenes the laws of physics as we currently know them, but can we really state as a philosophical absolute that there will never come a time when it becomes possible to travel back in time and witness at first hand the events that took place in Palestine around 33 AD? It sounds absurd to suggest such a thing (outside of Michael Moorcock’s splendid Behold The Man), but I’m not sure that a philosopher would accept such a ban as a rigorous principle, any more than we could deny the possibility that any other feature of (or indeed all of) our current understanding of the universe is utterly mistaken. I’m not sure that it is terribly meaningful to leave such possibilities open, though – in general when we say something is impossible, we mean it seems impossible according to our current understanding of the universe, and what more could we expect of such a statement than that?
But Casabianca is more specific. He says that of course we do come to accept some historical truths, even about the distant past. We accept that tomb KV62 discovered by Howard Carter is the tomb of Tutankhamen. So why should we consider it a theoretical impossibility that we could prove the Shroud to be the burial shroud of Jesus of Nazareth (even setting aside for the moment his theological status)?
Again, philosophically I don’t see how one could exclude that theoretical possibility. But could it ever happen, given what we have to go on? There is a possibility that Jesus of Nazareth was a real person – this seems rather likely to me, though I have no deep knowledge of the matter. How might we link this object to him? We could perhaps establish that the previous dating study was wrong, and find good reason to believe the Shroud was in fact made within, say, the two centuries bracketing the time Jesus is supposed to have lived. We might find pretty compelling evidence that it came from the Middle East, perhaps being able to localize it fairly well to Palestine, and also that it was probably used in a burial ritual. To be clear, none of this has been by any means proved right now, and some evidence argues against it – but in principle it seems plausible that it could happen.
What then? Casabianca offers no line of argument that could link this artifact to the person of Christ. Might we find his name inscribed on it somewhere? No, we will not. Might we be able to link the style of weaving to one specific to Nazareth at that time? If that were possible, surely it would have been done already. It seems to me that you have to think about what might be demonstrated historically in the light of the capacity of the artifact in question to hold the information required for that demonstration. I see no reason to think that the Shroud contains the kind of evidence needed to make such a definitive identification of provenance, any more than a random pot excavated in Birmingham can be linked to a specific Iron Age Brummie beer maker named Noddy. Whether one can exclude that as a “theoretical philosophical possibility” seems pretty irrelevant.
Casabianca goes on to point out that several historians do claim that there is good evidence for concluding that the Turin Shroud is the authentic burial wrapping of Jesus. And indeed they do. But it seems a very curious argument to say that it is valid to make this historical claim simply because some people do so. Simply, such claims are made; whether there are, or can be, adequate grounds for making them is another matter entirely.
Casabianca certainly goes too far, though, when he proposes that “to explain the image on the Turin Shroud, the Resurrection hypothesis is the most likely of all the hypotheses, even when compared with natural hypotheses.” There are several problems with this suggestion.
Casabianca suggests that it follows from “a historiographical approach (the ‘Minimal Facts Approach’)”, which I take to be some kind of Occam’s razor position. Even if you buy the usefulness of Occam’s razor for determining the preferred solution to a body of facts (and there is no philosophical or empirical justification for it), the idea becomes meaningless here. There is no calculus that allows you to make a quantitative comparison between a natural explanation of events that stays within the laws of physics and a supernatural explanation that does not. Is the explanation “God did it” economical because you can say it in three very short words? Or (as I think) does the idea that the laws of physics can be arbitrarily suspended by some unknown entity in fact incur an overhead of hypotheticals compared to which the demands of string theory look like a trifling concession? However you look at it, to afford supernatural explanations so casually doesn’t look like careful reasoning to me.
That’s all the more so given that there are so many unknowns and uncertainties about the Shroud image in the first place. Reports are contradictory and confused, technical issues are challenged, and quite frankly it has been pretty much impossible to perform careful, well checked science on this material at all, since the Roman church has made access to the samples so restricted. Put simply, we can’t be sure what facts we are proposing to explain.
Coming back to Casabianca’s contention, could science ever prove that the Shroud is of supernatural origin? Of course, scientists will rightly say that this is a semantic contradiction, since if new knowledge shows that what we have previously considered “supernatural” actually happens, it then just becomes part of the “natural”. But the real issue here is whether there could ever be incontrovertible evidence that such things as God, resurrections and divinely ordained virgin births may happen. Casabianca mentions the example of the stars spontaneously forming the sentence “God exists” in the sky. I for one am happy to say that, were that to happen, I would be given pause. My hierarchy of explanations would then be something like: It is a hoax or weird illusion; I have lost my mind; it is aliens; it is the Supreme Being saying hello. I have no problem of principle with working my way through that progression. Yes, I’m open to persuasion that God exists and that Christ rose from the dead and left his imprint in a cloth through supernatural means. Which rational person could not be?
But to accept such things on the basis of fuzzy and often rather poor science conducted on a jealously guarded scrap of old linen doesn’t seem terribly logical to me. To believe that a supreme being would have set us a puzzle of this kind, so hazily written and laced with red herrings, false trails and contradictions, to test our faith seems positively perverse. You would almost need to believe that He had set out not to challenge science but to traduce it. Such a God can’t be logically excluded from existence, but He does not interest me.
Tuesday, January 24, 2017
Killing the cat?

This graphic from New Scientist, and conversations last night at the Science Museum, got me thinking. Using Schrödinger’s cat as a way to illustrate the differences between interpretations of quantum theory is a nice idea. But it suffers from the flaw that challenges the entire thought experiment. In order to be able to talk about the scenario in quantum terms, we need to be able to express it in quantum terms. But we can’t, because “live cat” and “dead cat” are not well-defined quantum states.
What, you can’t tell a live cat from a dead cat? Nonsense! Well yes, it is; but that’s not what we’re asking here. What quantum property is it, exactly, that characterizes the superposition state, and that will enable you, unambiguously and in a single shot, to distinguish the two classical states? Live and dead are not quantum variables, and I’m not at all sure that they can be correlated even in principle with quantum variables that can be placed in superposition states.
Schrödinger’s point was not, in any case, that these are two different states of a macroscopic object, but that they are logically exclusive states. The paradox lies not in “two states at once”, but in “two contradictory states at once”. He was pointing not to “weird behaviour” predicted by quantum theory, but to logical paradoxes.
And this is why the Many Worlds Interpretation doesn’t resolve the problem. Yes, it looks as though it does: both outcomes are true! As New Scientist puts it here, “The universe splits. Your cat is dead, but in a parallel world it remains alive.” (Or, as Rowan Hooper points out, vice versa.) But wait: your cat? Who is you? Whose cat is it in the other world?
Brian Greene, in The Hidden Reality, tells us: that is you too! They are both you. Oh, so that sentence reads “Your cat is dead, but your cat remains alive.” Greene isn’t troubled by the fact that this is not how “you” works. But nevertheless, this is not how “you” works.
David Deutsch and Max Tegmark say, ah language! What should we trust more, language or maths? Contingent sounds, or timeless equations? But here language is articulating something that underpins maths, which is logic. Schrödinger realized that, but his point seems to be forgotten (by some). I don’t have time to go into it here (my forthcoming book will), but individual identity is a logical construct. You can’t wish it away with fantasies about “other yous”. I am trying to resist the topical urge to suggest that the Many Worlds interpretation offers us “alternative facts”, but that is terribly hard to do. So folks, the second option here is far more problematic than it looks.
What about the first? Let me say first of all that in neither the Copenhagen nor the Many Worlds interpretation is the cat “simultaneously alive and dead”. Not only is there no way of expressing that in quantum mechanics (at least, no one has articulated one), but in any event the proper statement of the situation is that “We can say nothing about the state of the cat, other than that live and dead are both possible outcomes of an observation”. That might sound like a pedantic distinction, but it will not be possible to make sense of quantum mechanics without it.
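To see why this distinction isn’t mere pedantry, here is a toy numpy sketch of my own – and note that it leans on precisely the idealization questioned above, pretending that “alive” and “dead” are two basis states of a qubit:

import numpy as np

alive = np.array([1, 0], dtype=complex)  # stand-in basis state
dead = np.array([0, 1], dtype=complex)   # stand-in basis state

# The coherent superposition the formalism actually delivers...
cat = (alive + dead) / np.sqrt(2)
rho_superposition = np.outer(cat, cat.conj())

# ...versus a classical 50/50 mixture ("one or the other, we don't know which")
rho_mixture = 0.5 * np.outer(alive, alive.conj()) + 0.5 * np.outer(dead, dead.conj())

print(np.round(rho_superposition.real, 2))  # off-diagonal 0.5 terms: coherence
print(np.round(rho_mixture.real, 2))        # diagonal only

A single look in the alive/dead basis gives each outcome with probability 1/2 for both density matrices, so one observation cannot tell them apart; the off-diagonal terms show up only in other measurement bases, across many repeated preparations. And neither matrix licenses the phrase “simultaneously alive and dead”.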
Now, I would hesitate to call the Copenhagen interpretation the “standard” interpretation, since there is no consensus, nor even a majority view, about which is the correct interpretation of quantum mechanics, at least among those who think about foundational issues. What’s more, the “Copenhagen interpretation” is not a single thing: Heisenberg expressed it differently to Bohr, and Wheeler had his own view too, as did others. However, I think Bohr would have said something like this: after observation, we have now acquired information that has changed our view of the cat’s condition (assuming it can be expressed in quantum terms at all) from an indeterminate to a determinate one. Some Copenhagenists, such as Pascual Jordan, spoke of this in causative terms: our observations produce the results. In that view, it seems acceptable to say that “Your measurement killed the cat” (although since we cannot say that it was previously alive, we might need to say more strictly “Your measurement elicited a dead cat”). But I’m not at all sure that Bohr would have seen causation at work in the measurement, as if “wavefunction reduction” is a physical effect that kills the cat. (That’s really the third, “objective collapse” option, which is given the least problematic representation here.) I think Bohr might have said something along these lines: “Observation allows us to speak about the classical state of the cat. And look, it is a dead one!”
So, which way will you vote? Bear in mind, however, that there are other options available, not all of them mutually exclusive. And that you won’t be able to prove that you’re right, of course.
Thursday, December 15, 2016
More alternative heroes
It was fun to write this piece for Nautilus on who would have made some of the great discoveries in science if their actual discoverers had not lived. And very nice to see it is provoking discussion, as I’d hoped – there is nothing definitive in my suggestions. Here are two more case histories, for which there was not room in the final article.
____________________________________________________________________________
Fullerenes – Wolfgang Krätschmer and Donald Huffman
In 1985, British spectroscopist Harry Kroto visited physical chemists Richard Smalley and Robert Curl at Rice University in Houston, Texas, to see if their machine for making clusters of atoms could produce some of the exotic carbon molecules Kroto thought might be formed in space. Their experiments led to the discovery of hollow, spherical molecules called C60 or buckminsterfullerene, and of a whole family of related hollow-shell carbon molecules called fullerenes. They were awarded the 1996 Nobel prize in chemistry for the work.
Fullerenes had been seen before 1985; they just hadn’t been recognized as such. They can in fact be formed in ordinary candle flames, but the most systematic experiments were conducted in 1982-3 by experimental physicist Wolfgang Krätschmer at the Max Planck Institute for Nuclear Physics in Heidelberg, Germany. Krätschmer had teamed up with physicist Donald Huffman of the University of Arizona, for they both were, like Kroto, interested in the constituents of interstellar space.
Huffman studied dust grains scattered through the cosmos from which stars may form. He and Krätschmer began collaborating in the 1970s while Huffman was on sabbatical in Stuttgart, and initially they looked at tiny particles of silicate minerals. But Huffman believed that some of the absorption of starlight by grains in the interstellar medium could be due to tiny particles of something like soot in the mix: basically, flakes of graphite-like carbon.
In 1982 he visited Krätschmer to carry out experiments in which they heated graphite rods in a vacuum and measured the light absorbed by the sooty debris. They made and saw C60, which absorbs ultraviolet light at a particular wavelength. But they didn’t realize what it was, and decided their apparatus was just making unintelligible carbon “junk”.
It wasn’t until the duo saw the paper by Kroto and colleagues in 1985 that the penny dropped. But if it hadn’t been for that, the interest of astronomers in interstellar dust would probably have drawn scrutiny back to those experiments in Heidelberg anyway, and the truth would have emerged. As it was, the graphite-vaporizing equipment of Krätschmer and Huffman offered a way to mass-produce fullerenes more cheaply and simply than the Rice cluster machine. Once this was understood in 1990, fullerene research exploded worldwide.
Continental drift – Roberto Mantovani, or…
There are discoveries for which the time seems right, and others for which it’s just the opposite. For one reason or another they are rejected by the prevailing scientific opinion, offering us the retrospective, appealingly tragic tale of the lone maverick who was spurned, only to be vindicated much later, perhaps posthumously. That’s pretty much how it was for Alfred Wegener’s theory of continental drift. In 1912, Wegener, a German meteorologist (so what did he know about geology?), proposed that the Earth’s surface was not fixed, but that the continental land masses wander over time into different configurations, and were in the distant past disposed far from where they stand today. To doubt the evident solidity of the planetary surface seemed absurd, and it wasn’t until the discovery of seafloor spreading – the formation of fresh ocean crust by volcanic activity – in the 1960s that continental drift became the paradigm for geology.
In such circumstances, it seems rather unlikely that anyone else would have come up with Wegener’s unorthodox idea in his own era. But they did. Not just one individual but several others imagined something like a theory of plate tectonics in the early twentieth century.
The most immediate sign of continental drift on the world map is the suspiciously close fit of the east coast of South America with the west coast of Africa. But that line of argument, advanced by American geologist Frank Bursley Taylor in 1908, seems almost too simplistic. Taylor got other things right too, such as the way the collision of continents pushes up mountain ranges. But his claim that the movements were caused by the close approach of the moon when it was suddenly captured by the Earth in the Cretaceous period was rather too baroque for his contemporaries.
In 1911, an amateur American geologist named Howard Baker also proposed that the continents are fragments of a primordial supercontinent that was torn apart. His mechanism was even more bizarre than Taylor’s: the moon was once a part of the Earth that got ripped off by its rapid spinning, and the continents moved to fill the gap.
In comparison, the theory of Italian geologist (and violinist) Roberto Mantovani, first published in 1889 and developed over the next three decades, was rather easier to swallow. He too argued that the continents were originally a single landmass that was pulled apart thanks to an expansion of the Earth driven by volcanic activity. Wegener acknowledged some “astonishingly close” correspondences between Mantovani’s reconstruction and his own.
All of these ideas contain tantalizing truths: breakup of an ancient supercontinent (now called Pangea), opening of ocean basins, mountain building and volcanism as the driving force. (Even Baker’s idea that the moon was once a part of the Earth is now widely believed, albeit for totally different reasons.) But like a reconstruction of Pangea from today’s map, the parts didn’t fit without gaps, and no one, including Wegener, could find a plausible mechanism for the continental movements. If we didn’t have Wegener, then Mantovani, or even Taylor or Baker, could step into the same foundational narrative of the neglected savant. All intuited some element of the truth, and their stories show that there’s often an element of arbitrariness in what counts as a discovery and who gets the credit.
Saturday, November 26, 2016
The Return by Hisham Matar: why it's a special book

These were my comments on Hisham Matar’s book The Return for the Baillie Gifford Prize award event on 15 November. The prize, for which I was a judge, was awarded to Hisham’s close friend Philippe Sands for his extraordinary book East West Street.
___________________________________________________________________________
When we produced our shortlist, and indeed our longlist, I felt pleased with and proud of it. But as my acquaintance with the shortlisted books has deepened, and perhaps particularly in the light of the political climate into which they emerge, I have felt something more than that. I’ve become passionate about them.
But it was passion that I felt about Hisham Matar’s book from the first reading. It tells of his quest to find out what happened to his father Jaballa in Libya during the Qaddafi dictatorship, after Jaballa was imprisoned in the notorious Abu Salim jail for his principled opposition to the regime. The Return of the title is Hisham’s return to Libya in 2012, 33 years after his family was exiled, when the Qaddafi regime had been overthrown. That was during what we now know to be a tragically brief period of grace before a descent into social and economic chaos created by the power vacuum.
Yes, the subject sounds difficult and bleak, but please believe me that this book is not that, not only that. It is wise and funny, it is perceptive to absurdity, to beauty and to friendship, as well as to terror and cruelty. Several times it was said in our judging meetings that Hisham’s book has a novelistic quality.
If this story were not factual, I would expect to see The Return on the Man Booker shortlist, and novelists could learn a great deal from Hisham’s impeccable handling of every scene, each of which unfolds at just the rate and in just the order it should, with precisely the words it needs and no more.
But calling the book novelistic could sound like a double-edged comment, as if to imply that perhaps the truth is sometimes held hostage to a nice turn of phrase. That is absolutely not the case. It feels hard to do justice to the brilliant construction of the book, the masterful handling of plot, suspense and intrigue, without seeming to reduce the magnitude of the subject to the dimensions of a thriller. But these aspects are really a mark of the achievement here, because even as they make the book a totally engrossing read, not once do they obscure the moral and artistic integrity of what Hisham has created.
Of course, he is an acclaimed novelist himself, but here he shows that there are qualities in literature far more significant than the apparent division between fact and fiction.
But it is factual. That is a sad and terrible thing, but it also makes The Return a sort of gift, an honouring of the history and suffering of individuals and a country.
There is something in it that brings to my mind Primo Levi’s testament If This is a Man. Like that book, this one can’t use art to expunge the awful, inhuman events that motivated it. But, in its quiet dignity, it shows us why we persist, and in the end, I think, why we prevail, in spite of them.
Sunday, October 16, 2016
Did the Qin emperor need Western help? I don't think so.

Did the First Emperor of China import sculptors from classical Greece to help build the Terracotta Army? That’s the intriguing hypothesis explored in an entertaining BBC documentary called The Greatest Tomb on Earth, presented by Dan Snow, Alice Roberts and Albert Lin. (See also here.)
If it was true, it would revolutionize our view of the early history of China. It’s widely assumed that there was no significant, direct contact between China and the West until the time of Marco Polo (although you would not have guessed from this programme that diffusion of artifacts along trade routes happened much earlier, certainly in Roman times around the first century AD).
But I didn’t buy the story for a moment. It turned out to be a classic example of building up a case by an accumulation of weak, speculative evidence and then implying that somehow they add up to more than the sum of the parts. Look at each piece of evidence alone, and there’s virtually nothing there. But repeat often enough that they fit together into a convincing story and people might start to believe you.
Archaeologist Albert Lin adduced evidence of the ancient road that connected the Qin capital near present-day Xi’an – close to the site of the mausoleum of the Qin emperor Qin Shi Huangdi – to the West, perhaps via Alexander’s empire in India. Well, at least, it was claimed that “there was probably a road reaching [from the tomb] at least to Lintao” on the borders of the Qin Empire. But what Lin actually found was a short section of undated track – it looked maybe a kilometre or so long – heading northwest through farmland within the confines of the tomb complex in Shaanxi. Lintao is almost 400 km away. Later in the programme Dan Snow claimed that on this basis “We have evidence of an ancient road network that could have brought Westerners to China”. No, they really don’t. (And why do we need to find an ancient physical road anyway, given that it does seem clear that trade was happening all the way from the Mediterranean region to China at least in Roman times?)
Another strand of evidence was the notion that large-scale, lifelike figurines suddenly appeared in the Qin tomb, looking somewhat like those of classical Greece, when nothing like this had been seen before in China. How else could this artistic leap have been made, if not with the assistance of Greek sculptors imported by the emperor? That, at least, was the case argued by Lukas Nickel of the University of Vienna, based solely on asserted coincidences of artistic styles. We were offered no indication of how the Qin emperor – who, until he became ruler of “all” of China extending more or less to present-day Sichuan, was king of the state of Qin in the Wei valley – somehow knew that there were barbarians nigh on 2000 miles further west across the Tibetan plateau who had advanced sculptural skills.
There were some puzzles, to be sure. To make some of their bronze castings, the Qin metalworkers seemed to have used something like the so-called “lost-wax technique”, using reinforcing rods, of which examples are known in ancient Egypt. “It’s clear this process is too complex to stumble on by accident”, said Snow. But obviously it was stumbled on by accident – how else was it ever invented anywhere? Given the known metallurgical skills of the ancient Chinese – bronze casting began in the Shang era, a millennium and a half before the Qin dynasty, and some of the Shang artifacts are exquisite – how can we know what they had achieved by the third century BC? Besides, I was left unsure what was so exciting about seeing a lost-wax method in the Qin artifacts, given that we already know this technique was known in China by the 6th century BC. Still, Snow concluded that “We now have strong evidence of Western metalworkers in China in the third century BC”. No, we don’t.
Then a skull from the mausoleum site, apparently of a sacrificed concubine of the emperor, was said to look unlike a typically East Asian skull. Like, perhaps, the more Caucasoid skull types of the minority races in what is today Xinjiang? That’s consistent with the data – the skull is certainly not Western in its proportions, said Alice. It could come from further afield too, on the basis of this data – but there’s absolutely no reason to suppose it did. Still, we were left with the hint that the emperor might have employed workers brought in from far outside the border of his empire. There was no support for that idea.
We were also introduced to an apparently recent paper reporting evidence of DNA of Western lineage in people from Xinjiang. Quite apart from the fact that this says nothing about the import of Western artistic techniques in China during the Qin dynasty, it was very odd to see it offered as a new discovery. The notion that there were people of Western, Caucasoid origin in Xinjiang long, long ago has been discussed for decades, ever since the discovery in the early twentieth century of mummified bodies of distinctly non-Chinese – indeed, virtually Celtic – appearance, with blond to red hair and “Europoid” body shapes in the Tarim basin of Xinjiang. The existence of a proto-European or Indo-European culture in this region from around 1800 BC has been particularly promoted since the 1990s by American sinologist Victor Mair. DNA testing from the early 2000s confirmed that the mummies seem to have had at least a partly European origin.
What is particularly odd about the neglect of the Tarim mummies in the context of this programme is that Mair and others have even suggested that this Indo-European culture may have brought Western metallurgical technology from west to east long before the Qin era, by the usual processes of cultural diffusion. They think that the bronze technology of the Shang era might have been stimulated this way. Others say that ironworking might have been transmitted via this culture around the tenth century BC, when it first appears in Xinjiang (see V. C. Pigott, The Archaeometallurgy of the Asian Old World, 1999).
I enjoyed the programme a lot. It identifies some interesting questions. But the idea of West-East cultural influence in the ancient world is not at all as new as was implied, and to my eye the evidence for direct import of Western “expertise” by Qin Shi Huangdi to make his army for the afterlife is extremely flimsy at this point. It would make a great story, but right now a story is all it is.
Incidentally, several folks on Twitter spoke about the popular idea that the Qin emperor’s mausoleum contains lakes of mercury. You can read more about that particular issue here.
Thursday, October 06, 2016
Making paint work: Vik Muniz's Metachromes

This is the catalogue essay to accompany the exhibition Metachromes by Brazilian artist Vik Muniz at Ben Brown Fine Arts in London, 6 October to 12 November.
____________________________________________________________
Why did so many artists abandon painting over the course of the twentieth century? There is no point looking for a single answer, but among the ones we might consider is that painters lost their trust in paint. It’s something rarely talked about, this relationship of painters to paint – or at least, it is rarely talked about except by painters themselves, to whom it is paramount. Paint represents the graft and the craft of painting, and for that very reason it is all too often neglected by art critics and historians, who have tended to regard it merely as a somewhat messy means to a sublime end. But many leading artists since Matisse have been making art not with paint but about paint, and in the process displaying their uneasy relationship with it.
No one put this better than Frank Stella: “I tried to keep the paint as good as it is in the can.” Two things leap out here, as British artist David Batchelor suggests in his book Chromophobia. First, for Stella paint comes in cans, not in tubes (it is an industrial mass product). Second, it looks good in the can. Indeed, perhaps it looks better in the can than it will once you start trying to apply it. The challenge of a blank canvas is familiar: it demands that the painter find something to fill up that blankness, something that will have been worth the effort. Blankness means it’s up to you. But paint in a can is a challenge of a different order. Here it is, already sensual, beautiful and pure – qualities that the artist might hope to retain in the finished work, but the paint sitting in the can says ‘you think you can do better than this?’ Probably not.
Paint had become too perfect. Anyone who has tried to make paint the way a Renaissance master (or more probably, his apprentices) would have done will know that it emerges as unpromising stuff: sticky, gritty, oily. It was the artist’s task to wrestle beauty from this raw earth, which must have seemed a noble and mysterious thing. In the Middle Ages there was barely time even to take note of the paint: blended from pigments and egg yolk, it dried in minutes, so you had better get to work and not sit there admiring it in the dish. But industrialization changed all that. Pigment was machine-ground with the power of horses or steam until the powder was fine and smooth. It was mixed with oils and additives in great vats like those one can still see in the factories of artists’ suppliers such as Winsor and Newton: an almost obscene orgy of viscous colour. Cheaper pigments and new binding media led to the production of colour by the can, made not for daubing onto canvas but for brushing in flat swathes over walls and ceilings. These were no longer the rust-reds and dirty yellows of Victorian décor, but deep pinks, azure, viridian, the colours of sunsets and forests and named for them too.
That makes it sound as though artists were spoilt for choice, and in a sense they were: the range of colours expanded enormously, and most of this rainbow was cheap. But not all the colours were reliable: they might fade or discolour within weeks or years. Instability of paint is a problem as old as painting. But in the past painters knew their materials: they knew what colours they could mix and which they should not, which are prone to ageing and which withstand time. From the early nineteenth century, however, painters became ever less familiar with what was in their materials. These were substances made in chemicals factories, and not even the paint vendors understood them. Even if the technical experts (then called colourmen) guaranteed them for five years, how would they look in fifty? At first, paint manufacturers had little idea about such matters either, and they did not seem to care very much. Disastrous errors of judgement were made at least until the 1960s, as anyone who has seen what became of Mark Rothko’s Harvard murals will attest: valued at $100,000 when completed in 1962, they were in too embarrassing a state to remain on display by 1979.
But it was not just this lack of technical understanding that led painters to distrust paint. Every medium had its message, and the message of oil paint was now deemed a bourgeois one, to be disowned by any self-respecting artistic radical. It “smacked of garrets and starving artists”, according to British artist John Hoyland. Any sign of a brushstroke spoke of painterly traditionalism, and was to be avoided at all costs. The impassive matt finish of acrylics was the thing: it gave the artist a neutral colour field to play with, unencumbered (so they liked to think) by history. For some, this embracing of new paint media arose out of economic necessity: commercial paints bound in synthetic resins were cheaper, especially if you planned (as many did) to work on a colossal scale. For others, new media offered new styles: Jackson Pollock needed a “liquid, flowing kind of paint”, Stella seized on metallic radiator paints to step beyond the prismatic rainbow. But paints made from plastics also spoke of modernity. Nitrocellulose enamel spray paints are used on cars and toasters, so why not, as Richard Hamilton decided, use them for paintings of cars and toasters? “It’s meant to be a car”, he said, “so I thought it was appropriate to use car colour.”
The idea, then, was that the artist would no longer try to hide the materials in the manner of a nineteenth-century French academician like Ingres, but was constantly referring to them, reminding the viewer that the picture is made from stuff. That’s true even of the flat anonymity of the household paints used by an artist like Patrick Caulfield, which at first seem to be concealing their identity as ‘paint’ at all: they’re saying ‘this is only a surface coated with colour, you know’ – or as Caulfield puts it, “I’m not Rembrandt.” The paint is not pretending to be anything else.
Part of the pleasure of Vik Muniz’s works is that they often do pretend to be something else, but so transparently that you notice and relish the medium even more. “Oh, those are diamonds! That’s chocolate, that’s trash, those are flowers.” His Metachrome series is particularly rich in allusion, because the works confront this issue of the material of painting in ways that highlight several of the problems of paint, which have vexed and in the end sometimes inspired painters. They leave the medium – pastel sticks – literally embedded in the work, and not as accidental remnants but as constructive elements. On the one hand this creates a Brechtian sense of ‘here’s how it was done’, a denial of illusion. These become not just images of something, but works about creating art. It reminds us of the disarming honesty of French painter and sculptor Jean Dubuffet’s remark: “There is no such thing as colour, only coloured materials.” By reconstructing the paintings of famous artists with the pigment-saturated tools still visible, Muniz demystifies the original objects. And art itself then becomes more humble, but also more valuable: not an idea, nor a commodity, nor an icon, but a product of human craft and ingenuity.
This wouldn’t count for so much if the aesthetic element weren’t respected either. The solidity and chromatic depth of these pieces of coloured material are richly satisfying. We can enjoy them as substance, source and subject. They seem to invite us to pluck them up, and to start making marks ourselves.
Points of colour
One thing brought to mind by Metachromes is the specks and lumps of colour and crystal seen in cross-sectional micrographs that art conservators use routinely to study the multilayered techniques of the Old Masters. Those images reveal that the colour may be surprisingly dispersed: sometimes the grains are as sparse as raisins in a fruit bun, so that it seems a wonder the paint layer does not look patchy or translucent. (The effect may be cumulative: in Titian or van Eyck the bold colours come from the painstaking application of layer after layer.) But what these micrographs also reveal is colour unmixed: greens broken down into blues and yellows, flesh tones a harlequin jumble of hues mixed with white and black. They remind us that the rich hues of the Old Masters are an optical illusion conjured from a very limited palette: they had only a few greens to play with, their blues were sparser still. The illusion is sustained by scale: the flecks are so small that the eye can’t distinguish them unaided, and they blend into uniformity. When this optical mixture produces so great a perceptual shift as yellow and blue to green, the effect seems like alchemy. We get accustomed to this method of making green in the nursery, but still it seems odd to be confronted by such stark evidence that our eyes are deceiving us, that there is only yellow and blue at the root of it all.

Colour mixing is genuinely perplexing. Isaac Newton explained it in 1665, but the explanation made no sense to artists. Yellow, he said, comes from mixing red and green. Add blue and you get white. This was clearly not the way paints behaved.
Newton’s experiment is commonly misunderstood. He did not show for the first time that sunlight could be split into the rainbow spectrum; that had been known since time immemorial, for all you needed was a block of clear glass or crystal. Some suspected, however, that this colouration of sunlight might be the result of a transformation performed by the prism itself. Newton showed that if one of the coloured rays of the spectrum – red, say – is passed through a second prism, it emerges unchanged: these are, he said, “uncompounded colours”, irreducible to anything else. And if the entire spectrum is squeezed back through a focusing lens, it reconstitutes the original white ray. Colour, then, comes from plucking this rainbow, extracting some rays and reflecting others. Black objects swallow them all, white objects reject them.
According to Newton, the colours we see are in the light that conveys them. Goethe (whose antipathy to Newton I have never really fathomed) spoke for many when he said that thanks to Newton “the theory of colour has been forced to enter a realm where it does not belong, to appear before the judgement seat of the mathematician.” Dyers, he says, were the first to perceive the inadequacy of Newton’s theory, because “phenomena forcefully confront the true practitioner, the producer of goods, every day. He experiences the application of his ideas as profit or loss.” And so he knew much better than to waste valuable dyes by attempting to make yellow from red and green.
Goethe’s largely misconceived theory of colour continues to exert a strange appeal, but what he did not appreciate is that there are two kinds of colour mixing. One works for dyes and pigments, and it involves the removal of prismatic wedges from sunlight: take away the reds and violets, and you are left with green. This is subtractive mixing. The other works for light rays, and it involves a gradual reconstitution of the spectrum from its component rays. The retina tickled with red and green light, for example, reports back to the brain in just the same way as it does when struck by pure yellow light. This is additive mixing. The Scottish physicist James Clerk Maxwell explained this in 1855, and his discoveries quickly filtered down in popularized forms to painters, some of whom dreamed of finding within them a prescription for filling their canvases with sunlight.
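The difference between the two kinds of mixing is easy to state in idealized numbers. Here is a minimal sketch in Python, treating colours as RGB triples in the range 0 to 1 – a cartoon of the physics, since real pigments have messy reflectance spectra:

```python
# A minimal sketch of the two kinds of colour mixing, with colours
# idealized as RGB triples in [0, 1]. Real pigments and lights have
# broad, messy spectra, so this is an illustration, not colour science.

def additive(a, b):
    """Mix two light sources: their intensities simply add
    (clipped to the displayable range)."""
    return tuple(min(1.0, x + y) for x, y in zip(a, b))

def subtractive(a, b):
    """Mix two pigments: each acts as a filter on the incident light,
    so the reflectances multiply and light is removed at every step."""
    return tuple(x * y for x, y in zip(a, b))

red    = (1.0, 0.0, 0.0)
green  = (0.0, 1.0, 0.0)
yellow = (1.0, 1.0, 0.0)   # a yellow pigment reflects red and green light
cyan   = (0.0, 1.0, 1.0)   # a 'blue' pigment that reflects green and blue

print(additive(red, green))       # (1.0, 1.0, 0.0): red + green light -> yellow
print(subtractive(yellow, cyan))  # (0.0, 1.0, 0.0): yellow + blue paint -> green
```

Red and green light add to make yellow; yellow and blue pigments filter the light down to green – exactly the asymmetry that confounded the dyers.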
Additive mixing can be achieved in various ways. In television screens, trios of light-emitting pixels in the additive primaries – crudely speaking, red, green and blue-violet – are juxtaposed at too small a scale for the eye to resolve from a normal viewing distance, and their light is blended on the retina. Maxwell showed that he could achieve the same thing with pigments by painting segments of disks and spinning them at great speed, something that the polymathic Englishman Thomas Young had also done at the beginning of the nineteenth century. The Impressionists took away the message that all visual experience is constructed from spectral colours, so that these were the only ones they should use – no more ochres or umbers. They tried to banish black from the palette, and while white remained indispensable, their whitewashed walls and snow were broken up into bright primaries. When Claude Monet needed browns and blacks to depict the smoky train station at Saint-Lazare, he mixed them from primaries, although you would never guess it.
Paul Signac and Georges Seurat went further. They hoped to do away with subtractive mixing entirely, seeing that it almost inevitably degraded the brightness of the colours. Instead, their pointillist style, with small dots of pure pigments placed alongside each other like meadow flowers scattered among grass, was intended to let the colours mix optically, on the retina rather than on the canvas, when viewed from the right distance. These Neo-Impressionists believed that this would give their works greater luminosity.
Curiously, Newton himself had described something akin to this. He said that a mixture of dry pigment containing yellow orpiment, bright purple, light green and blue looked at a distance of several paces like brilliant white. The scattering of coloured grains in Muniz’s pictures hint at the same thing. There are places where the colour seems uncertain, treacherous, on the verge of shifting: it depends on where you’re standing, and might change as you step back.
But for the Neo-Impressionists, pointillist optical mixing did not really work. Partly this was because they had only a hazy grasp of the new colour theory, corrupted by an old notion about the yellow-orange colour of sunlight. Partly they were inconsistent in applying the technique, varying the size of their tache strokes. Signac admitted his disappointment with the experiment: “red and green dots make an aggregate which is grey and colourless” – although this pearly sheen is now an aspect of what we enjoy in the atmosphere of their peinture optique. As it drifted from its scientific origins, pointillism became just another mannerism, a style rather than an experiment. It’s a style that Muniz plays with here, the strokes and marks of the artist sometimes playfully but effectively substituted by the pastel sticks themselves. They are Titian’s grains of colour writ large, confessing their illusionism – but still performing it anyway, if we let them.
Work with dirt
“Sometimes we want to know how things are made. Sometimes we don’t,” says Muniz. Metachromes forces us to confront the fact that painting is made from what Philip Guston called ‘coloured dirt’. It is not smoothed flat or artfully raised into ridges by brush or knife: the dirt is simply there, heaped and glued onto the surface, and provoking us to wonder what this stuff actually is.

Many art historians and critics don’t care much for that. They will talk of ‘cobalt’ as a shorthand for a strong blue, as though they think this is the hue of that silvery metal itself – and never mind the fact that the blue cobalt-based pigment of van Dyck had very little to do with the cobalt blue of van Gogh – the latter a nineteenth-century innovation that the artist called a “divine colour”.
Paint disguises this grainy minerality. Seeing it restored in Muniz’s images, you can’t help but wonder about the chemistry. Colours like this are rare in the earth, for even gorgeous gems such as sapphire and ruby turn pale and disappointing when finely ground. When geology alone supplied the palette, reds were rusty and yellows were tawny, both of them ochres. Malachite, a copper ore, gave a pleasant bluish green, but not the vibrant green that light-harvesting chlorophyll brings to grass and the leaves of flowers. Of purple there was almost nothing – a rare manganese mineral, if you were lucky; or dyestuffs that blanched in the light. For orange you took your life in your hands with realgar, the highly toxic sulfide of arsenic.
Blue alone is well served by nature: it could be extracted, at immense cost and labour, from lapis lazuli, a mineral mined in Afghanistan, and brought across the seas – or as the name has it, ultra marina. Mere grinding wasn’t enough to turn lapis into this midnight blue: the blue component (called lazurite) had to be extracted from the impurities, which otherwise made the powder greyish. This was done by mixing the powder with wax into a dough and kneading it repeatedly in water to flush out the blue. The best ultramarine cost more than its weight in gold, and was reserved only for the most venerated of subjects; skies made do with cheaper blues, unless you were Giotto. When Johannes Itten, the mercurial colour theorist of the Bauhaus, insists that blue connotes meekness and profundity in medieval images of the Virgin, he forgets that to the medieval artist symbolism embraced the material too: the ultramarine robes of the mother of Christ honour her through their vast expense.

Wetness transforms
Today ultramarine is produced by the tonne – not from lapis lazuli, which is still very costly, but as an industrial chemical made in a furnace from soda, sand, alumina and sulfur. This was a triumph of nineteenth-century chemistry. Some of the leading chemists of that age were assigned the task of finding good synthetic substitutes for ultramarine, and they succeeded with cobalt blue; but it did not match the real thing. In 1824 the Society for the Encouragement of National Industry in France offered a prize of 6000 francs to anyone who could devise a way of making ultramarine synthetically. Its elemental constituents had been deduced in 1806, but the curious thing is that, unlike most pigments, ultramarine does not derive its colour from the presence of a particular metal in its crystal lattice (iron, copper, lead, mercury, chromium, cobalt and zinc are among the usual sources). Here, sulfur is the unlikely origin of the rich blue, and to understand it properly you need quantum chemistry.
The prize drew plenty of charlatans, but within four years it was claimed by the colour-maker Jean-Baptiste Guimet from Toulouse – a claim that was challenged, with justification but without success, by a German chemist at Tübingen. ‘French’ ultramarine offered Giotto’s glories for a fraction of the cost, although at first artists could not bring themselves to believe that it could be as good as the natural material.
The ultramarine you will buy in the tube today is made with this synthetic product, and it is probably better than the gritty grindings Titian used. But you should see the pigment before it becomes paint. It seems to emit a glow just beyond the visible range; it has a depth and velvety lustre that the liquid binder can only diminish. It is a colour to gaze on for long moments. Here is Frank Stella’s dilemma redoubled: if you think the paint looks good in the can, you should see the pigment before it becomes paint.
That was what bothered Yves Klein in the 1950s. “What clarity and lustre, what ancient brilliance”, he said of raw pigments. But the touch of a binding medium is fatal to this texture: “The affective magic of the colour had vanished. Each grain of powder seemed to have been extinguished individually by the glue or whatever material was supposed to fix it to the other grains as well as to the support.” What goes wrong? The way light bounces off the pigment particles is modified by the medium, even if it is perfectly transparent, because light entering it cannot but be refracted, the rays bent as they are in water. This effect of the medium depends on its refractive index – a scientific measure, if you like, of its ray-bending power. So the same pigments may look different in different media: vermilion mixed with egg yolk is a rich orange-scarlet, but when Renaissance painters began mixing it with oils the result was less impressive, and soon they turned to other reds, to the crimsons and magentas of red lakes.
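The scale of the effect can be guessed at with a back-of-envelope calculation. A grain scatters light at its surface in proportion to the mismatch between its refractive index and that of its surroundings, and the Fresnel formula for reflectance at normal incidence gives a rough measure. A minimal sketch, with representative rather than measured index values:

```python
# A rough sketch of why a binder dulls pigment: the light scattered back
# at each grain boundary depends on the refractive-index contrast between
# grain and surroundings. The Fresnel reflectance at normal incidence for
# an interface between media of indices n1 and n2 is
#     R = ((n1 - n2) / (n1 + n2))**2
# The index values below are representative figures, not measurements,
# and particle-size effects are ignored entirely.

def fresnel_reflectance(n_grain, n_medium):
    return ((n_grain - n_medium) / (n_grain + n_medium)) ** 2

n_air = 1.0
n_oil = 1.48           # roughly the index of a drying oil or resin binder
n_ultramarine = 1.5    # ultramarine's index is close to that of the oil

for medium, n_m in [("air", n_air), ("binder", n_oil)]:
    R = fresnel_reflectance(n_ultramarine, n_m)
    print(f"ultramarine grain in {medium}: R = {R:.5f}")
```

In air each grain face scatters a few per cent of the incident light and the heap of powder glows; in a binder whose index nearly matches the pigment’s, almost nothing scatters at the grain surface. That, roughly, is Klein’s complaint in numbers.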
By the 1950s, painters already had several alternative binders to oil at their disposal – nitrocellulose, acrylics, alkyds, most of them petrochemical-based resins. Was there a binder, Klein wondered, that would fix the pigment particles in place without destroying their lustre? This was a chemical matter, for which the artist needed technical assistance. He got it from his architect friend Bernadette Allain, and most importantly from a Parisian manufacturer of paint, Édouard Adam. In 1955 they found that a resin produced by the chemicals company Rhône-Poulenc, called Rhodopas M60A, thinned with ethanol and ethyl acetate, had the desired effect. In Klein’s words, “it allowed total freedom to the specks of pigment such as they are found in powder form, perhaps combined with each other but nevertheless autonomous.”
Klein used this binder with the pigment that best deserved it: ultramarine. He premiered his brilliant blue sculpture-canvases in Milan in 1957 with an exhibition called ‘Proclamation of the Blue Epoch’. This blue became his trademark: coating blocks, impregnating sponges, covering twigs and body casts. It was International Klein Blue, patented to preserve its integrity in 1960, two years before the artist’s untimely death.
Another solution was to simply refuse to degrade the pure pigment with any kind of binder. In his Ex Voto for the Shrine of St Rita (1961), Klein encases ultramarine powder along with a synthetic rose pigment and gold leaf in clear plastic boxes. In Metachromes, Muniz offers a homage to Klein’s pigment triptych, this celebration of raw, synthetic “coloured dirt”.

Like Klein’s works, Metachromes poses the question: when does the material “leave the can” and become a work of art? It’s not a question that needs an answer. It’s there to remind us that “what is made” should require us to consider too “how it is made” and “what it is made of” – that we are not merely Homo sapiens but Homo faber, and that it is because we are both that we survive.

