I have a piece in the July issue of Prospect on DNA nanotechnology. This is the pre-edited version.
_______________________________________________________________
“It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material”. The arch remark that concluded James Watson and Francis Crick’s paper in Nature on the structure of DNA, published 60 years ago, anticipated the entire basis of modern genetics. The structure they postulated is both iconic and beautiful: two strands of conjoined molecular building blocks, entwined in a double helix. The twin strands are zipped together by chemical bonds that rely on a perfect, ‘complementary’ match between their sequences of building blocks, and these sequences encode genetic information that is passed on when the molecule is replicated.
Yet although Watson and Crick were undoubtedly right to depict DNA as a kind of replicating molecular database, the beautiful elegance of their vision of genetics – and in a sense, of the whole of biology and evolution – as the read-out of a set of digital instructions on DNA, basically using chemistry for computation, is now looking too simplistic, even misleading. It is no longer clear that DNA is the ultimate focus of all molecular processes controlling the development and evolution of organisms: it is an essential but incomplete database, more an aide-memoire than a blueprint. Watson and Crick’s picture of a molecule that can be programmed to zip up only with the right partner is finding expression in its purest and most satisfying form not in biology – which is always messier than we imagine – but in the field of nanotechnology, which is concerned with engineering matter at the scale of nanometres (millionths of a millimetre) – the dimensions of molecules.
A key motivation for nanotechnology is the miniaturization of transistors and other devices in microelectronic circuitry: they are now so small that conventional methods of carving and shaping materials are stretched to the limits of their finesse. To replace such ‘top-down’ fabrication with ‘bottom-up’, nanotechnologists need to be able to control exactly how atoms and molecules stick together, and perhaps to dictate their movements.
DNA might be the answer. Chemists have now created molecular machines from bespoke pieces of DNA that can move and walk along surfaces. They have made molecular-sized cubes and meshes, and have figured out how to persuade DNA strands to fold up into almost any shape imaginable, including Chinese characters and maps of the world smaller than a single virus. They are devising DNA computers that solve problems mechanically, not unlike an abacus, by the patching together of little ‘sticky’ tiles. They are using DNA tagging to hitch other molecules and tiny particles into unions that would otherwise be extremely difficult to arrange, enabling the chemical synthesis of new materials and devices. In short, they are finding DNA to be the ideal nanotechnological construction material, limitlessly malleable and capable of being programmed to assemble itself into structures with a precision and complexity otherwise unattainable.
Although this research places DNA in roles quite unlike those it occupies in living cells, it all comes from direct application of Watson and Crick’s insight. A single strand of DNA is composed of four types of molecule, strung together like beads on a thread. Each building block contains a unit called a base which dangles from the backbone. There are four kinds of base, whose chemical names are shortened to the labels A, T, C and G. The bases can stick to one another, but in the double helix they tolerate only one kind of partner: A pairs with T, and C with G. This means that the sequence of bases on one strand exactly complements that on the other, and the pairs of bases provide the zip that holds the strands together.
So a DNA strand will only pair up securely with another if their base sequences are complementary. If there are mismatches of bases along the double helix, the resulting bulges or distortions make the double strand prone to falling apart. This pickiness about base-pairing means that a DNA strand can find the right partner from a mixture containing many different sequences.
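The logic is simple enough to set out in a few lines of code. Here is a toy Python sketch of my own – the sequences and function names are invented for illustration, not drawn from any of the research described below:

```python
# Watson-Crick pairing: A binds T, C binds G. One strand therefore fixes
# the sequence of its partner, and mismatches can be counted directly.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand):
    """Return the strand that would zip up perfectly with `strand`
    (read in the opposite direction, as in a double helix)."""
    return "".join(PAIR[base] for base in reversed(strand))

def mismatches(strand_a, strand_b):
    """Count base-pairing errors between two equal-length strands,
    aligned head-to-tail as they would lie in a duplex."""
    return sum(PAIR[a] != b for a, b in zip(strand_a, reversed(strand_b)))

probe = "ATCGGC"
print(complement(probe))            # GCCGAT
print(mismatches(probe, "GCCGAT"))  # 0 – a perfect partner
print(mismatches(probe, "GCCGAA"))  # 1 – a bulge-prone duplex
```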
Chemical methods for making artificial DNA, first developed in the 1970s, have now reached the point at which strands containing millions of A, T, G and C bases can be assembled in any sequence you want. These techniques, developed for genetic engineering and biotechnology, are now used by nanotechnologists to create DNA strands designed to assemble themselves into exotic shapes.
The potential of the approach was demonstrated in the early 1990s by chemist Nadrian Seeman of New York University and his collaborators. They created DNA strands designed such that, when they are mixed together, they twist around one another not in a single helical coil but to make the struts of tiny, cube-shaped cages. No one had any particular use for a DNA cube; Seeman was demonstrating a proof of principle, showing that a molecular shape that would be extremely hard to fashion using conventional chemistry could be engineered by figuring out how to program the components to lace themselves together spontaneously.
Although regarded for some years as little more than a clever curiosity, Seeman’s work was visionary. It showed a way to tackle nanotechnology’s challenge of building very small objects from the bottom up, starting with individual atoms and molecules. If DNA is the construction fabric, the base sequence can provide the assembly instructions: unlike most molecules, DNA will do what it is told.
DNA origami – the folding of DNA strands into designed shapes, of which more below – could, for instance, provide scaffolding on which electronic components are arranged. One might tag the components with strands that pair up with a particular location on a DNA scaffold with the complementary sequence: in effect, an instruction saying “stick here”. Researchers have worked out how to program DNA strands to weave themselves into webs and grids, like a chicken-wire mesh, on which other molecules or objects can be precisely attached. Last February a team at Marshall University in West Virginia showed that giant molecules called carbon nanotubes – nanometre-scale tubes of carbon which conduct electricity and have been proposed as ultrasmall electronic devices – can be arranged in evenly spaced, parallel pairs along a ribbon made by DNA origami. The carbon nanotubes were wrapped with single-stranded DNA, which lashed them onto the ribbon at the designated sites.
The same approach of DNA tagging could be used to assemble complicated polymers (long chains of linked-up molecules) and other complex molecules piece by piece, much as, in living cells, DNA’s cousin RNA uses genetically encoded information to direct the formation of protein molecules, tagging and assembling the amino-acid building blocks in the right order.
The astonishing versatility of DNA origami was revealed in 2006 when Paul Rothemund at the California Institute of Technology in Pasadena unveiled a new scheme for determining the way a DNA strand folds. His approach was to make a single strand programmed to fold back and forth on itself in hairpin-like turns that create a two-dimensional shape, pinning the folds in place with ‘staples’ made from short DNA strands with appropriate complementary sequences.
Rothemund developed a computer algorithm that could work out the sequence and stapling needed to define any folding pattern, and showed experimental examples ranging from smiley faces and stars to a map of the world about a hundred nanometres across (a scale of 1:200,000,000,000,000). These complex shapes could take several days to fold up properly, allowing all kinks and mistakes to be ironed out, but researchers in Germany reported last December that each shape has an optimal folding temperature (typically around 50-60 °C) at which folding takes just a few minutes – a speed-up that could be vital for applications.
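Rothemund’s real design software has to worry about crossovers, helical twist and much else besides, but the core bookkeeping – each staple is simply the complement of the scaffold regions it is meant to pin together – can be caricatured in a few lines. A rough Python sketch of my own, with made-up sequences and a deliberately crude staple length:

```python
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def revcomp(seq):
    """Reverse complement: the strand that binds `seq` in a double helix."""
    return "".join(PAIR[b] for b in reversed(seq))

def staple(scaffold, start1, start2, length=8):
    """Design a toy staple that binds two scaffold regions (each `length`
    bases, starting at `start1` and `start2`), so that hybridizing to both
    forces those regions to lie side by side in the folded shape."""
    return revcomp(scaffold[start1:start1 + length] +
                   scaffold[start2:start2 + length])

scaffold = "ATGCGTACGTTAGCCGATCGATTACGGCTAAGCTTAGGCTTACG"
print(staple(scaffold, 0, 24))   # pins bases 0-7 against bases 24-31
```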
Last March, Hao Yan of Arizona State University took the complexity of DNA origami to a new level. He showed how the design principles pioneered by Seeman and Rothemund can be tweaked to make curved shapes in two and three dimensions, such as hollow spheres just tens of nanometres wide. Meanwhile, in 2009 a Danish team saw a way to put DNA cubes to use: they made larger versions than Seeman’s, about 30 nanometres across, with lids that could be opened and closed with a ‘gene key’ – a potential way to store drug molecules until an appropriate genetic signal releases them for action.
As aficionados of Lego and Meccano know, once you have a construction kit the temptation is irresistible to give it moving parts: to add motors. Molecular-scale motors are well-known in biology: they make muscles contract and allow bacteria to swim. These biological motors are made of protein, but researchers have figured out how to produce controlled movement in artificial DNA assemblies too. One approach, championed by Bernie Yurke of Bell Laboratories in New Jersey and Andrew Turberfield at the University of Oxford, is to make a DNA ‘pincer’ that closes when ‘fueled’ with a complementary strand that sticks to the arms and pulls them together. A second ‘fuel strand’ strips away the first and opens the arms wide again. Using similar principles, Turberfield and Seeman have made two-legged ‘DNA walkers’ that stride step by step along DNA tracks, while a ‘DNA robot’ devised by Turberfield and colleagues can negotiate a particular path through a network of such tracks, directed by fuel strands that prompt a right- or left-hand turn at branching points.
They and others are working out how to implement the principles of DNA self-assembly to do computing. For example, pieces of folded-up DNA representing binary 1’s and 0’s can be programmed to stick together like arrays of tiles to encode information, and can be shuffled around to carry out calculations – a sort of mechanical, abacus-like computer at the molecular scale.
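To give a flavour of what ‘computing by sticking tiles together’ can mean, here is a toy sketch based on the textbook XOR rule used in algorithmic self-assembly experiments – my own simplified illustration, not necessarily the scheme these researchers use. Each new tile binds into the notch between two tiles of the previous row and displays the XOR of their bits, so the array carries out a calculation simply by growing:

```python
def next_row(row):
    """Each new tile sits between two neighbouring tiles of the row below
    and encodes the XOR of their bits – the tile-assembly analogue of a
    logic gate acting as the array grows."""
    return [a ^ b for a, b in zip(row, row[1:])]

row = [1, 0, 0, 0, 0, 0, 0, 1]   # a seed row of '1' and '0' tiles
for _ in range(5):
    print("".join(str(bit) for bit in row))
    row = next_row(row)
```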
Underlying all this work is Watson and Crick’s comment about DNA replication. There is the tantalizing – some might say scary – possibility that DNA structures and machines could be programmed not only to self-assemble but to copy themselves. It’s not outrageous to imagine at least some products of DNA nanotechnology acquiring this life-like ability to reproduce, and perhaps to mutate into better forms. Right now such speculations recede rapidly into science fiction – but then, no one guessed 60 years ago where the secrets of DNA self-assembly would take us today.
Thursday, June 20, 2013
Friday, June 14, 2013
Nasty, brutish and short
John Gray once said to me, wary that we might find consensus on the topic we were discussing, that “there can be too much agreement”. That is the perpetual fear of the contrarian, I suppose. But I find myself ever more in agreement with him nonetheless. His splendid demolition of the mythical Enlightenment in this week’s New Statesman (not online, it seems) is a case in point, which prompts me to stick up here my notes from a panel discussion (“Nasty, Brutish and Short”) at the How The Light Gets In festival at Hay a couple of weeks ago. They asked for a critique of the Draper-White view; I was happy to oblige.
___________________________________________________________
I’ve been trying to parse the title of this discussion ever since I saw it. The blurb says “The Enlightenment taught us to believe in the optimistic values of humanism, truth and progress” – but of course the title, which sounds a much more pessimistic note, comes from Thomas Hobbes’ Leviathan, and yet Hobbes too is very much a part of the early Enlightenment. You might recall that it was Hobbes’ description of life under what he called the State of Nature: the way people live if left to their own devices, without any overarching authority to temper their instincts to exploit one another.
That scenario established the motivation for Hobbes’ attempt to deduce the most reliable way to produce a stable society. And what marks out Hobbes’ book as a key product of the Enlightenment is that he tried to develop his argument not, as previous political philosophies going back to Plato had done, according to preconceptions and prejudices, but according to strict, quasi-mathematical logic. Hobbes’ Commonwealth is a Newtonian one – or rather, to avoid being anachronistic, a Galilean one, because he attempted to generalize his reasoning from Galileo’s law of motion. This was to be a Commonwealth governed by reason. And let me remind you that what this reason led Hobbes to conclude is that the best form of government is a dictatorship.
Now of course, this sort of exercise depends crucially on what you assume about human nature from the outset. If, like Hobbes, you see people as basically selfish and acquisitive, you’re likely to end up concluding that those instincts have to be curbed by drastic measures. If you believe, like John Locke, that humankind’s violent instincts are already curbed by an intrinsic faculty of reason, then it becomes possible to imagine some kind of more liberal, communal form of self-government – although of course Locke then argued that state authority is needed to safeguard the private property that individuals accrue from their efforts.
Perhaps the most perceptive view was that of Rousseau, who argued in effect that there is no need for some inbuilt form of inhibition to prevent people acting anti-socially, because they will see that it is in their best interests to cooperate. That’s why agreeing to abide by a rule of law administered by a government is not, as in Hobbes’ case, an abdication of personal freedom, but something that people will choose freely: it is the citizen’s part of the social contract, while the government is bound by this contract to act with justice and restraint. This is, in effect, precisely the kind of emergence of cooperation that is found in modern game theory.
My point here is that reasoning about governance during the Enlightenment could lead to all kinds of conclusions, depending on your assumptions. That’s just one illustration of the fact that the Enlightenment doesn’t have anything clear to say about what people are like or how communities and nations should be run. In this way and in many others, the Enlightenment has no message for us – it was too diverse, but more importantly, it was much too immersed in the preoccupations of its times, just like any other period of history. This is one reason why I get so frustrated about the way the Enlightenment is used today as a kind of shorthand for a particular vision of humanity and society. What is most annoying of all is that that vision so often has very little connection with the Enlightenment itself, but is a modern construct. Most often, when people today talk about Enlightenment values, they are probably arguing in favour of a secular, tolerant liberal democracy in which scientific reason is afforded a special status in decision-making. I happen to be one of those people who rather likes the idea of a state of that kind, and perhaps it is for this reason that I wish others would stop trying to yoke it to the false idol of some kind of imaginary Enlightenment.
To state the bleedin’ obvious, there were no secular liberal democracies in the modern sense in eighteenth century Europe. And the heroes of the Enlightenment had no intention of introducing them. Take Voltaire, one of the icons of the Enlightenment. Voltaire had some attractive ideas about religious tolerance and separation of church and state. But he was representative of such thinkers in opposing any idea that reason should become a universal basis for thought. It was grand for the ruling classes, but far too dangerous to advocate for the lower orders, who needed to be kept in ignorance for the sake of the social order. Here’s what he said about that: “the rabble… are not worthy of being enlightened and are apt for every yoke”.
What about religion? Let’s first of all dispose of the idea that the Enlightenment was strongly secular. Atheism was very rare, and condemned by almost all philosophers as a danger to social stability. Rousseau calls for religious tolerance, but not for atheists, who should be banished from the state because their lack of fear of divine punishment means that they can’t be trusted to obey the laws. And even people who affirm the religious dogmas of the state but then act as if they don’t believe them should be put to death.
Voltaire has been said to be a deist, which means that he believed in a God whose existence can be deduced by reason rather than revelation, and who made the world according to rational principles. According to deists, God created the world but then left it alone – he wasn’t constantly intervening to produce miracles. It’s sometimes implied that Enlightenment deism was the first step towards secularism. But contrary to common assertions, there wasn’t any widespread deist movement in Europe at that time. And again, even ideas like this had to be confined to the better classes: the message of the church should be kept simple for the lower orders, so that they didn’t get confused. Voltaire said that complex ideas such as deism are suited only “among the well-bred, among those who wish to think.”
Enough Enlightenment-bashing, perhaps. But why, then, do we have this myth of what these people thought? Partly that comes from the source of most of our historical myths, which is Victorian scholarship. The simple idea that the Enlightenment was some great Age of Reason is now rejected by most historians, but the popular conception is still caught up with a polemical view developed in particular by two nineteenth-century Americans, John William Draper and Andrew Dickson White. Draper was a scientist who decided that scientific principles could be applied to history, and his 1862 book History of the Intellectual Development of Europe was a classic example of Whiggish history in which humankind makes a long journey out of ignorance and superstition, through an Age of Faith, into a modern Age of Reason. But where we really enter the battleground is with Draper’s 1874 book History of the Conflict between Religion and Science, in which we get the stereotypical picture of science having to struggle against the blinkered dogmatism of faith – or rather, because Draper’s main target was actually Catholicism, against the views of Rome, because Protestantism was largely exonerated. White, who founded Cornell University, gave much the same story in his 1896 book A History of the Warfare of Science with Theology in Christendom. It’s books like this that gave us the simplistic views on the persecution of Galileo that get endlessly recycled today, as well as myths such as the martyrdom of Giordano Bruno for his belief in the Copernican system. (Bruno was burnt at the stake, but not for that reason.)
The so-called “conflict thesis” of Draper and White has been discredited now, but it still forms a part of the popular view of the Enlightenment as the precursor to secular modernity and to the triumph of science and reason over religious dogma.
But why, if these things are so lacking in historical support, do intelligent people still invoke the Enlightenment trope today whenever they fear that irrational forces are threatening to undermine science? Well, I guess we all know that our critical standards tend to plummet when we encounter ideas that confirm our preconceptions. But it’s more than this. It is one thing to argue for how we would prefer things to be, but far more effective to suggest that things were once like that, and that this wonderful state of affairs is now being undermined by ignorant and barbaric hordes. It’s the powerful image of the Golden Age, and the rhetoric of a call to arms to defend all that is precious to us. What seems so regrettable and ironic is that the casualty here is truth, specifically the historical truth, which of course is always messy and complex and hard to put into service to defend particular ideas.
Should we be optimistic or pessimistic about human nature? Well – big news! – we should be both, and that’s what history really shows us. And if we want to find ways of encouraging the best of our natures and minimizing the worst, we need to start with the here and now, and not by appeal to some imagined set of values that we have chosen to impose on history.
The first glaze
Here’s my latest piece for BBC Future. And this seems an opportune place to advertise Jo Marchant’s latest book The Shadow King (Da Capo), on Tutankhamun’s mummy. It looks set to be covering some fascinating material, and is published at the end of June.
_____________________________________________________________
In the absence of anything like real science to guide them, most useful technologies in the ancient world were probably discovered by chance. But that doesn’t seem to bode well for understanding how, when and where these often transformative discoveries took place. Can we ever hope to know how, say, the Stone Age became the Bronze Age became the Iron Age?
Modern archaeologists are an optimistic and inventive lot, however. They figure that, even if the details are buried in the sands of time, we can make some good guesses by trying to reconstruct what the ancients were capable of, using the techniques and materials of the time. Researchers have, for example, built copies of ancient iron- and glass-making furnaces to figure out whether descriptions and recipes from those times really work.
One of the latest efforts in this field of experimental archaeology now proposes that the production of glazed stones and ceramics – an innovation that profoundly affected trade across the globe – could have been made possible by the natural saltiness of cow dung: a property that makes it the vital ingredient in a recipe assembled by serendipity.
The earliest glazes, dating from the late fifth millennium BC and found in the Near East, Egypt and the Indus Valley, were used for coating natural stones made from minerals such as quartz and soapstone (talc). As the technology advanced, the stones were often exquisitely carved before being coated with a blue copper-based glaze to make objects now known as faience. By the second millennium BC Egyptian faience was being traded throughout Europe.
Because these copper glazes appear during the so-called Chalcolithic period – the ‘Copper Age’ that preceded the Bronze Age – it has long been thought that they were discovered as an offshoot of the smelting of copper ores such as malachite to make the metal. The glazes are forms of copper silicate, made as copper combines with the silicate minerals in the high temperature of a kiln. These compounds can range from green (like malachite itself, a kind of copper carbonate) through turquoise to rich blue, depending on how much salt (more specifically, how much chloride) is incorporated into the mix: the more of it, the greener the glaze.
Sometimes these copper glazes are crystalline, with regularly ordered arrays of atoms. But they can also be glassy, meaning that the atoms are rather disordered. In fact, it seems likely that copper smelting stimulated not only glazing but the production of glass itself, as well as the pigment known as Egyptian blue, which is a ground-up copper silicate glass. In other words, a whole cluster of valuable technologies might share a common root in the making of copper metal.
The basic idea, put forward by Egyptologists such as the Englishman William Flinders Petrie in the early twentieth century, was that other materials might have found their way by accident into copper-smelting kilns and been transformed in the heat. Glass, for instance, is little more than melted sand (mostly fine-grained quartz). To melt pure sand requires temperatures higher than ancient kilns could achieve, but the melting point is lowered if there is some alkaline substance present. This could have been provided by wood ash, although some later recipes in the Middle East used the mineral natron (sodium carbonate).
How exactly might a blue glaze have been made this way? Another early Egyptologist, Alfred Lucas, who worked with Howard Carter, proposed that perhaps a piece of quartz used to grind up malachite to make eye-paint found its way into a kiln, where the heat and alkali could have converted residues of the copper mineral into a blue film. But that would make the discovery independent of copper manufacture itself, and it’s not obvious how a grinding stone could slip into a kiln. Yet why else should a copper compound come to be on the surface of a lump of quartz?
Last year, Mehran Matin and his daughter Moujan Matin, working in the research laboratory of the Shex Porcelain Company in Saveh, Iran, showed that these materials didn’t need to be in physical contact at all [M. Matin & M. Matin, J. Archaeological Science 39, 763 (2012)]. A copper compound such as copper scale – the corrosion product of copper metal, typically containing copper hydroxide – can be vaporized in a kiln and then, in the presence of vaporized alkali oxides, be deposited on the surface of a silicate such as quartz to form a bluish glaze. All that would require is for a bit of quartz, a ubiquitous mineral in the Middle East, to have been lying around in a copper-smelting kiln.
Or does it? To get the rich turquoise blue, you also need other ingredients, such as salt. So Moujan Matin, now at the Department of Archaeology and Art History at the University of Oxford in England, has undertaken a series of experiments with different mixtures to see if she can reproduce the shiny blue appearance of the earliest blue-glazed stones. She used a modern kiln fired up to the kind of temperatures ancient kilns could generate – between 850 and 980 °C – in which lumps of quartz were placed on a pedestal above a glazing mixture made from copper scale and other ingredients.
Rock salt (sodium chloride) was known in the ancient world, since there are deposits around the Mediterranean and in the Middle East. Matin found that copper scale and rock salt alone covered the quartz surface with a rather pale, greenish, dull and rough coating: not at all like ancient blue glaze. An extra ingredient – calcium carbonate, or common chalk, which the Egyptians used as a white pigment among other things – made all the difference, producing a rich, shiny turquoise-blue glaze above 950 °C.
That looked good – but it forces one to assume that salt, chalk and quartz all somehow got into the kiln along with the copper scale. It’s not impossible, but as Matin points out, such accidents probably had to happen several times before anyone took much notice. However, there’s no need for the least likely of these ingredients, rock salt. Matin reasoned that dried cattle dung, which contains significant amounts of both alkalis and salt (chloride), had been widely used as a fuel since the beginnings of animal domestication in the eighth millennium BC. So she tried another mixture: copper scale, calcium carbonate and the ash of burnt cattle dung. This too produced a nice, shiny (albeit slightly paler) blue glaze.
Of course, there’s nothing that proves this was the way glazing began. But it supplies a story that is entirely plausible, and narrows the options for what will and won’t do the job.
Reference: M. Matin, Archaeometry advance online publication doi:10.1111/arcm.12039.
Wednesday, June 12, 2013
Vanishing cats
Here’s my latest news story for Nature.
__________________________________________________________
A new ‘invisibility cloak’ can hide animals
A cat climbs into a glass box and vanishes, while the scene behind the box remains perfectly visible through the glass. This latest addition to the science of invisibility cloaks is one of the simplest implementations so far, but there’s no denying its striking impact.
The ‘box of invisibility’ has been designed by a team of researchers at Zhejiang University in Hangzhou, China, led by Hongsheng Chen, and their coworkers. The box is basically a set of prisms made from high-quality optical glass that bend light around any object in the opening around which the prisms are arrayed [1].
As such, the trick is arguably closer to ‘disappearances’ staged in Victorian music hall using arrangements of slanted mirrors than to the modern use of substances called metamaterials to achieve invisibility by guiding light rays in unnatural ways.
But Chen and colleagues have forged a conceptual link between the two. Metamaterials – made from arrays of electrically conducting components that interact with light so as to create new optical effects such as negative refractive index – are needed if an invisibility cloak is to achieve ‘perfect’ cloaking, being invisible itself and preserving the phase relationships between the light waves moving through it [2].
Metamaterials that work at the wavelengths of visible light are very hard to make, however. Chen’s coworker Baile Zhang of Nanyang Technological University in Singapore [3], as well as John Pendry at Imperial College in London [4], and their coworkers have shown that a compromise of partial visible-light cloaking of macroscopic objects can be attained using blocks of transparent, optically anisotropic materials such as calcite crystal, in which light propagates at different speeds in different directions.
These partial cloaks will hide objects but remain visible themselves. “Everyone would like to have a cloak that hides big real world objects from visible light, but achieving this demands some compromises of the ideal theory”, Pendry explains.
He says that Chen and colleagues have now gone “further than most” with such compromises by abandoning any concern to preserve phase relationships in the transmitted light. “As a result the authors can report quite a large cloak that operates over most of the visible spectrum”, he says.
Chen and colleagues say that such a simplification is warranted for many applications, because there’s no need to preserve the phase. “Living creatures cannot sense the phase of light”, they say.
Chen and his coworker Bin Zheng first unveiled the principle last year with a hexagonal arrangement of triangular prisms that could hide small objects [5]. But the researchers have now found a more spectacular demonstration of what this approach can achieve.
In the researchers’ first example, they use a similar but larger hexagon of prisms placed in a fish tank. As a fish swims through the central hole, it disappears while the pondweed behind the cloak remains perfectly visible.
The second example uses a square arrangement of eight prisms with a central cavity large enough for a cat to climb inside. The researchers project a movie of a field of flowers, with a butterfly flitting between them, onto a screen behind the cloak. Seen from the front, parts of the cat vanish as it sits in the cavity or pokes its head inside, while the scene behind can be seen through the glass.
As well as being visible themselves, these cloaks only work for certain viewing directions. All the same, the researchers say that they might find uses, for example in security and surveillance, where one might imagine hiding an observer in a glass compartment that looks empty.
References
1. Chen, H. et al., preprint at http://arxiv.org/abs/1306.1780 (2013).
2. Schurig, D. et al., Science 314, 977-980 (2006).
3. Zhang, B., Luo, Y., Liu, X. & Barbastathis, G. Phys. Rev. Lett. 106, 033901 (2011).
4. Chen, X. et al., Nat. Commun. 2, 176 (2011).
5. Chen, H. & Zheng, B. Sci. Rep. 2, 255 (2012).
Monday, June 10, 2013
In the genes?
Yes, but then you turn to Carole Cadwalladr’s article in the Observer Review on having her genome sequenced. It made me seethe.
The article itself is fine – she does a good job of relating what she was told. But some of this genomics stuff is starting to smell strongly of quackery. Cadwalladr went to a symposium organized by the biotech company Illumina, which – surprise! – is selling sequencing machines. This is what the senior VP of the company said: “You’ll be able to surf your genome and find out everything about yourself.” Everything. One can, apparently, make such a blatantly, dangerously misleading statement and confidently expect no challenge from the assembled crowd of faithful geneticists.
Well, here’s the thing. I happened to be doing an event on Saturday at a literary festival with Steve Jones, and Steve said a great deal about genetics and predestiny. What he said was a vitally needed corrective to the sort of propaganda that Illumina is seemingly spouting. “Genetics is a field in retreat”, he admitted, saying that he has resisted producing a revised version of his classic The Language of the Genes because the field has just become so complicated and confusing since it first came out in 2000. He pointed out that a huge amount of our destiny is of course set by our environment and experience (I never knew, until Steve told me, that Mo Farah has an identical twin who is a car mechanic in Somalia). We discussed the idiocy of the “gene for” trope (the cover of the New Review has Cadwalladr saying “I don’t have a gene for conscientiousness” – but neither does any single bloody person on the planet).
There’s a huge amount of useful stuff that will come from the genomics revolution, and some people might indeed discover some medically valuable information from their genome. But the most common killer diseases, such as heart disease, will not be read out of your genome. I saw recently that at least 500 genes have been associated so far with some types of diabetes. We have 23,000 genes in total, so it goes without saying that those 500+ genes are not solely linked to functions that affect diabetes. The scientists and technologists are still grossly mis-selling the picture of what genes ‘do’, implying still that there is this one-to-one relationship between genes and particular phenotypic attributes. Steve pointed out that we still can’t even account, in genetic terms, for more than about 10 percent – the figure might even have been less, I don’t remember – of the heritability of human height, even though it clearly does have a strong inherited influence. This is one of the issues I wanted to point to in my recent Nature article – we have little idea how most of our genome works.
One of the most invidious aspects of Cadwalladr’s piece comes from the way the folks at the symposium discussed BRCA1, the “Angelina gene”. There was no mistaking the excitement of the first speaker, Eric Topol of Scripps, who apparently said “This is the moment that will propel genomic medicine forward. It’s incredibly important symbolically.” In other words, “my field of research just got a fantastic celebrity endorsement.” But did anyone at the meeting ask if Jolie had actually made the right choice? It was an extremely difficult choice, but a cancer specialist at NIH I spoke to recently told me that he would not have recommended such a drastic measure. Steve Jones had a similar view, saying that there are drugs that are now routinely taken by women with this genetic predisposition. The good thing about such genomic information is that it could motivate frequent testing for people in such a position, to spot the onset of symptoms at the earliest opportunity (early diagnosis is the most significant factor for a successful treatment of most types of cancer). But Jolie’s case shows how a distorted message about genetic determinism, which the companies involved in this business seem still to be giving out, can skew the nature of the choices people will make. There’s a huge potential problem brewing here – not because of the technology itself, which is amazing, but because of the false confidence with which scientists and technologists are selling it, metaphorically and literally.
Sunday, June 09, 2013
About time
Here’s a book review of mine that appears in today’s Observer.
___________________________________________________________
Time Reborn: From the Crisis in Physics to the Future of the Universe
Lee Smolin
Penguin, 2013
ISBN 978-1-846-14299-4
319 pages, £20.00
Farewell to Reality: How Fairytale Physics Betrays the Search for Scientific Truth
Jim Baggott
Constable, 2013
ISBN 978-1-78033-492-9
338 pages, £12.99
At an inter-disciplinary gathering of academics discussing the concept of time, I once heard a scientist tell the assembled humanities scholars that physics can now replace all their woolly notions of time with one that is unique, precise and true. Such scientism is rightly undermined by theoretical physicist Lee Smolin in Time Reborn, which shows that the scientific view of time is up for grabs more than ever before. The source of the disagreement could hardly be more fundamental: is time real or illusory? Until recently physics has drifted toward the latter view, but Smolin insists that many of the deepest puzzles about the universe might be solved by realigning physics with our everyday intuition that the passage of time is very real indeed.
Clocks tick; seasons change; we get older. How could science have ever asserted this is all an illusion? It begins, Smolin says, with the idea that nature is governed by eternal laws, such as Newton’s law of gravity: governing principles that stand outside of time. The dream of a ‘theory of everything’, which might explain all of history from the instant of the Big Bang, assumes a law that preceded time itself. And by making the clock’s tick relative – what happens simultaneously for one observer might seem sequential to another – Einstein’s theory of special relativity not only destroyed any notion of absolute time but made time equivalent to a dimension in space: the future is already out there waiting for us, we just can’t see it until we get there.
This view is a logical and metaphysical dead end, says Smolin. Even if there were a theory of everything (which looks unlikely), we’d be left asking “why this theory?” Or equivalently, why this universe, and not one of the infinite others that seem possible? Most of all, why one in which life can exist? A favourite trick of cosmologists is to beg the question by arguing that it only gets asked in universes where life is possible – the so-called anthropic principle. Smolin will have none of that. He argues that because life-supporting universes are generally also ones in which black holes can form, and because black holes can spawn new universes, a form of cosmic natural selection can make a succession of universes evolve towards ones like ours.
In this scenario, not only is time real, but the laws of physics must themselves change over time. So there’s constant novelty, and no future until it becomes the present. The possible price you pay is that then space, not time, becomes illusory. That might seem an empty bargain, but Smolin asserts that not only could it solve many problems in fundamental physics and cosmology but it is also more amenable to testing than current ‘timeless’ theories.
That attribute might endear Smolin’s speculative ideas to physicist-turned-writer Jim Baggott. Smolin caused grumbling among his colleagues with his 2006 assault on string theory, The Trouble With Physics. In Farewell to Reality, Baggott now castigates theoretical physicists for indulging a whole industry of “fairy-tale physics” – strings, supersymmetry, brane worlds, M-theory, the anthropic principle – that not only piles one unwarranted assumption on top of another but lies beyond the reach of experimental tests for the foreseeable future. He recalls the acerbic comment attributed to Richard Feynman: “string theorists don’t make predictions, they make excuses”.
Baggott has a point, and he makes it well, although his target is as much the way this science is marketed as what it contains. But such criticisms need to be handled with care. Imaginative speculation is the wellspring of science, as Baggott’s hero Einstein demonstrated. In one of my favourite passages of Time Reborn, Smolin sits in a café and dreams up a truly outré idea (that fundamental particles follow a principle of precedent rather than timeless laws), and then sees where the idea takes him. In creative minds, such conjecture injects vitality into science. The basic problem – that the institutional, professional and social structures of science can inflate such dreams into entire faddish disciplines before asking if nature agrees with them – is one that Baggott doesn’t quite get to.
___________________________________________________________
Time Reborn: From the Crisis of Physics of the Future of the Universe
Lee Smolin
Penguin, 2013
ISBN 978-1-846-14299-4
319 pages, £20.00
Farewell to Reality: How Fairytale Physics Betrays the Search for Scientific Truth
Jim Baggott
Constable, 2013
ISBN 978-1-78033-492-9
338 pages, £12.99
At an inter-disciplinary gathering of academics discussing the concept of time, I once heard a scientist tell the assembled humanities scholars that physics can now replace all their woolly notions of time with one that is unique, precise and true. Such scientism is rightly undermined by theoretical physicist Lee Smolin in Time Reborn, which shows that the scientific view of time is up for grabs more than ever before. The source of the disagreement could hardly be more fundamental: is time real or illusory? Until recently physics has drifted toward the latter view, but Smolin insists that many of the deepest puzzles about the universe might be solved by realigning physics with our everyday intuition that the passage of time is very real indeed.
Clocks tick; seasons change; we get older. How could science have ever asserted this is all an illusion? It begins, Smolin says, with the idea that nature is governed by eternal laws, such as Newton’s law of gravity: governing principles that stand outside of time. The dream of a ‘theory of everything’, which might explain all of history from the instant of the Big Bang, assumes a law that preceded time itself. And by making the clock’s tick relative – what happens simultaneously for one observer might seem sequential to another – Einstein’s theory of special relativity not only destroyed any notion of absolute time but made time equivalent to a dimension in space: the future is already out there waiting for us, we just can’t see it until we get there.
This view is a logical and metaphysical dead end, says Smolin. Even if there was a theory of everything (which looks unlikely), we’d be left asking “why this theory?” Or equivalently, why this universe, and not one of the infinite others that seem possible? Most of all, why one in which life can exist? A favourite trick of cosmologists is to beg the question by arguing that it only gets asked in universes where life is possible – the so-called anthropic principle. Smolin will have none of that. He argues that because life-supporting universes are generally also ones in which black holes can form, and because black holes can spawn new universes, a form of cosmic natural selection can make a succession of universes evolve towards ones like ours.
In this scenario, not only is time real, but the laws of physics must themselves change over time. So there’s constant novelty, and no future until it becomes the present. The possible price you pay is that then space, not time, becomes illusory. That might seem an empty bargain, but Smolin asserts that not only could it solve many problems in fundamental physics and cosmology but it is also more amenable to testing than current ‘timeless’ theories.
That attribute might endear Smolin’s speculative ideas to physicist-turned-writer Jim Baggott. Smolin caused grumbling among his colleagues with his 2006 assault on string theory, The Trouble With Physics. In Farewell to Reality, Baggott now castigates theoretical physicists for indulging a whole industry of “fairy-tale physics” – strings, supersymmetry, brane worlds, M-theory, the anthropic principle – theories that not only pile one unwarranted assumption upon another but lie beyond the reach of experimental tests for the foreseeable future. He recalls the acerbic comment attributed to Richard Feynman: “string theorists don’t make predictions, they make excuses”.
Baggott has a point, and he makes it well, although his target is as much the way this science is marketed as what it contains. But such criticisms need to be handled with care. Imaginative speculation is the wellspring of science, as Baggott’s hero Einstein demonstrated. In one of my favourite passages of Time Reborn, Smolin sits in a café and dreams up a truly outré idea (that fundamental particles follow a principle of precedent rather than timeless laws), and then sees where the idea takes him. In creative minds, such conjecture injects vitality into science. The basic problem – that the institutional, professional and social structures of science can inflate such dreams into entire faddish disciplines before asking if nature agrees with them – is one that Baggott doesn’t quite get to.
Thursday, June 06, 2013
The legacy of On Growth and Form
The special issue of Interdisciplinary Science Reviews on the work and influence of D’Arcy Thompson that I have coedited with Matthew Jarron is finally published. It makes a nice collection – not quite as broad as originally hoped, due to some dropouts, but still a satisfying slice of the way Thompson’s ideas have been received in art and science. It won’t be freely available online, sadly, but the contents are as follows:
Matthew Jarron: Editorial
Stephen Hyde: D’Arcy Thompson’s Legacy in Contemporary Studies of Patterns and Morphology
Edward Juler: A Bridge Between Science and Art? The Artistic Reception of On Growth and Form in Interwar Britain, c.1930-42
Matthew Jarron: Portrait of a Polymath – A Visual Portrait of D’Arcy Thompson by Will Maclean
Peter Randall-Page: On Theme and Variation
Assimina Kaniari: D’Arcy Thompson’s On Growth and Form and the Concept of Dynamic Form in Postwar Avant-Garde Art Theory
Philip Ball: Hits, Misses and Close Calls: An Image Essay on Pattern Formation in On Growth and Form
I’ve put a version of my contribution up on my website, under “Patterns”.
Tuesday, June 04, 2013
The art of repair
Here’s the more or less original version of an article I’ve written for Aeon magazine. It carries a heavy debt to the wonderful catalogue of an exhibition entitled Flickwerk: The Aesthetics of Mended Japanese Ceramics (Herbert F. Johnson Museum of Art, Cornell University, 2008). It also informed my recent “60-second idea” on the BBC World Service’s Forum programme, broadcast this week, which was otherwise focused (loosely) on the topic of curiosity. Here I met cosmologist Lee Smolin, whose book Time Reborn I have just reviewed for the Observer – I’ll post the review here once it’s published.
_________________________________________________________________
Is your toilet seat broken? I only ask because it is damned hard to get things like that fixed. Are your shoes splitting? Good luck finding a cobbler these days. Is the insulation on your MacBook mains lead abraded and splitting at the power brick? They all do that, and they’re not cheap to replace.
There’s an answer to all these little repair jobs. It’s called Sugru: an adhesive, putty-like silicone polymer that you can hand-mould to shape and then leave overnight to set into a tough, flexible seal. Devised by Jane Ní Dhulchaointigh, an Irish design graduate at the Royal College of Art in London, in collaboration with retired industrial chemists, Sugru is an all-purpose mending material with an avid following of ‘hackers’ who relish its potential not just to repair but to modify off-the-shelf products. When it was pronounced a top invention of 2010 by Time magazine, it acquired international cult status.
Sugru doesn’t, however, do its job subtly. You can get it in modest white, but Sugru-fixers tend to prefer the bright primary colours, giving their repairs maximal visibility. They seem determined to present mending not as an unfortunate necessity to be carried out as quietly as possible but as an act worth celebrating.
That’s an attitude also found in the burgeoning world of ‘radical knitting’, where artists are bringing a punk sensibility to the Women’s Institute. Take textiles artist Celia Pym, who darns people’s clothes as a way of “briefly making contact with strangers”. There are no ‘invisible mends’ here: Pym introduces bold new colours and patterns, transforming rather than merely repairing the garments.
What Pym and the Sugru crew are asserting is that mending has an aesthetic as well as a practical function. They say that if you’re going to mend, you might as well do it openly and beautifully.
If that sounds like a new idea in the pragmatic West, it has a long tradition in the East. Pym’s artful recovery of damaged clothing is anticipated by more than three centuries in the boro garments of the Japanese peasant and artisan classes, which were stitched together from scraps of cloth in a time when nothing went to waste. In boro clothing the mends become the object, much like Austrian philosopher Otto Neurath’s celebrated hypothetical boat, repaired a plank at a time until nothing of the original remains. Some boro garments might in similar fashion be colonized and eventually overwhelmed by patches; others were assembled from scraps at the outset. Today boro’s shabby chic risks becoming merely an ethnic pose in trendy Tokyo markets, belying the necessity from which it arose. But boro was always an aesthetic idea as much as an imposition of hardship. It draws on the Japanese tradition of wabi-sabi, a world view that acknowledges transience and imperfection.
I have been patching clothes for years into a kind of makeshift, barely competent boro. Trousers in particular get colonized by patches that start at the knees and the holes poked by keys around the pockets, spreading steadily across thighs with increasing disregard for colour matching. Only when patches need patches does the recycling bin beckon. At first I did this as a hangover from student privation. Later it became a token of ecological sensibility. Those changing motives carry implications for appearance: the more defiantly visible the mend, the less it risks looking like mere penny-pinching. That’s a foolishly self-conscious consideration, of course, which is precisely why the Japanese aesthetic of repair is potentially so liberating: there is nothing defensive about it. That’s even more explicit in the tradition of ceramic mending.
In old Japan, when a treasured bowl fell to the floor one didn’t just reach for the glue. The old item was gone, but its fracture created the opportunity for a new one to be made. Such accidents held lessons worth heeding, being both respected and remedied by creating from the shards something even more elegant. Smashed ceramics would be stuck back together with a strong adhesive made from lacquer and rice glue – but then the web of cracks would actually be emphasized by tracing it out in coloured lacquer, sometimes mixed or sprinkled with powdered silver or gold and polished with silk so that the joins gleamed.
A bowl or container repaired in this way would typically be valued even more highly, aesthetically and financially, than the original. The sixteenth-century Japanese tea master Sen no Rikyu is said once to have ignored his host’s fine Song Dynasty Chinese tea jar until it was mended after the owner smashed it in despair at Rikyu’s indifference. “Now the piece is magnificent”, he pronounced of the shards painstakingly reassembled by the man’s friends. According to contemporary tea master Christy Bartlett, it was “the gap between the vanity of pristine appearance and the fractured manifestation of mortal fate which deepen[ed] its appeal”. The repair, like that of an old teddy bear, is testament to the affection in which the object is held: what is valued is not a literally superficial perfection but something deeper. The mended object is special precisely because it was worth mending. In the Japanese tea ceremony, says cultural anthropologist James-Henry Holland, “a newly-mended utensil proclaims the owner’s personal endorsement, and visually apparent repairs call attention to this honor.”
To mend such objects requires an acceptance of whatever the fracture gives: a relinquishment of the determination to impose one’s will on the world, in accord with the Japanese concept of mushin. Meaning literally “no mind”, this expresses a detachment sought by many artists and warriors. “Accidental fractures set in motion acts of repair that accept given circumstances and work within them to lead to an ultimately more profound appearance,” says Bartlett. Mended ceramics displayed their history – the pattern of fracture documented the specific forces and events that caused it. This fact has recently been recognized by a team of French physicists, who have shown that the starlike cracks in broken glass plates capture a forensic record of the mechanics of the impact. Reassemble the pieces, and that moment is preserved. The stories of how mended Japanese ceramics were broken – like that of the jar initially spurned by Sen no Rikyu – would be perpetuated by constant retelling. In the tea ceremony these histories of the utensils provide raw materials for the stylized conversational puzzles that the host sets the guests, a function for which undamaged appearance was irrelevant.
Sugru users will appreciate another of the aesthetic considerations of Japanese ceramic repairs: the notion of asobi, which refers to a kind of playful creativity and was introduced by the sixteenth-century tea master Furuta Oribe. Repairs that embodied this principle tended to be more extrovert, even crude in their lively energy. When larger areas of damage were patched using pieces from another broken object, fragments of strikingly different appearance might be selected to express the asobi ideal, just as clothes today might be patched with exuberant contrasting colours or patterns. Of course, one can now buy new clothes patched this way – a seemingly mannered gesture, perhaps, yet anticipated in the way Oribe would sometimes purposely damage utensils so that they were not “too perfect”. This was less a Zen-like expression of impermanence and more an exuberant relish of variety.
Such modern fashion statements aside, repair in the West has tended to be more a matter of grumbling and making do. But occasionally the aesthetic questions have been impossible to avoid. When the painting of an Old Master starts cracking and flaking off, what is the best way to make it good? Should we reverently pick up the flakes of paint and surreptitiously glue them back on again? Is it honest to display a Raphael held together with PVA? When Renaissance paint fades or discolours, should we touch it up to retain at least a semblance of what the artist intended, or surrender to wabi-sabi? It’s safe to assume that no conservator would ever have countenanced the recent ‘repair’ of Elías García Martínez’s crumbling nineteenth-century fresco of Jesus in Borja, near Zaragoza, by an elderly woman with the artistic skills of Mr Bean. But does even a skilled ‘retouching’ risk much the same hubris?
These questions are difficult because aesthetic considerations pull in the opposite direction from concerns of authenticity. Who wants to look at a fresco anyway if only half of it is still on the wall? Victorian conservators were rather cavalier in their solutions, often deciding it was better to have a retouched Old Master than none at all. In an age that would happily render Titian’s tones more ‘acceptable’ with muddy brown varnish, that was hardly surprising. But today’s conservators mostly recoil at the idea of painting over damage in old works, although they will permit some delicate ‘inpainting’ that fills cracks without covering any of the original paint: Cosimo Tura’s Allegorical Figure in London’s National Gallery was repaired this way in the 1980s. Where damage is extensive, standard practice now is to apply treatments that prevent further decay but leave the existing damage visible.
Such rarefied instances aside, the prejudice against repair as an embarrassing sign of poverty or thrift is surely a product of the age of consumerism. Mending clothes was once routine for every stratum of society. The aristocracy were unabashed at their elbow patches – in truth more prevention than cure, since they protected shooting jackets from wear caused by the shotgun butt. Everything got mended, and mending was a trade.
But what sort of trade? Highly skilled, perhaps, but manual, consigning it to a low status in a culture that has always been shaped (this is one way in which the West differs from the East) by the ancient Greek preference for thinking over doing. Just as, over the course of the nineteenth century, the ‘pure’ scientist gained ascendancy over the ‘applied’ (or worse still, the engineer), so too the professional engineer could at least pull rank on the maintenance man: he was a creator and innovator, not a chap with an oily rag and tools. “Although central to our relationship with things”, writes historian of technology David Edgerton, “maintenance and repair are matters we would rather not think about.” Indeed, they are increasingly matters we’d rather not even do.
Edgerton explains that until the mid-twentieth century repair was a permanent state of affairs, especially for expensive items like vehicles, which “lived in constant interaction with a workshop.” It wasn’t so much that things stopped working and then got repaired, but that repair was the means by which they worked at all. Neurath’s boat probably sailed for real: “ships were often radically changed, often more than once, in the course of their lives,” says Edgerton. Repair might even spawn primary manufacturing industries: many early Japanese bicycles were assembled from the spare parts imported to repair foreign (mostly British) models.
It’s not hard to understand a certain wariness about repair: what broke once might break again. But its neglect in recent times surely owes something also to an under-developed repair aesthetic, an insistence on perfection of appearance: on the semblance of newness even in the old, a visual illusion now increasingly applied to our own bodies, repair of which is supposed to (but rarely does) make the wear and tear invisible.
Equally detrimental to a culture of repair is the ever more hermetic nature of technology, whereby DIY mending becomes impossible either physically (the unit, like your MacBook lead, is sealed) or technically (you wouldn’t know where to start). Either way, your warranty is void the moment you start tinkering. Couple that to a climate in which you pay for the service or accessories rather than the item – inks pricier than printers, mobile phones free when you subscribe to a network – and repair lacks feasibility, infrastructure or economic motivation. I gave up on repair of computer peripherals years ago when the only person I could find to fix a printer was a crook who lacked the skills for the job but charged me the price of a new one anyway. Breakers’ yards, which used to seem like places of wonder, have all but vanished as car repair has become both unfashionable and impractical.
Some feel this is going to change, whether because of the exigencies of austerity or increasing ecological concerns about waste and consumption. Martin Conreen, a design lecturer at Goldsmiths College in London, believes that TV cookery programmes will soon be replaced by ‘how to’ DIY shows, in which repair would surely feature heavily. The hacker culture is nurturing an underground movement of making and modifying that is merging with the crowdsourcing of fixes and bodges – for example, on websites such as ifixit.com, which offers free service manuals and advice for technical devices such as computers, cameras, vehicles and domestic appliances, and fixperts.org, set up by design lecturer Daniel Charny and Sugru cofounder James Carrigan, which documents fixes on film. The mending movement has also taken to the streets with international Repair Cafés, where you can go to get free tools, materials, advice and assistance for mending anything from phones to jumpers. As 3D printers – which build one-off objects layer by layer from cured resins or granular ‘inks’ – become more accessible, it may become possible to make your own spare parts rather than having to source them, often at some cost, from suppliers (only to discover your model is obsolete). And as fixing becomes cool, there’s good reason to hope it will acquire an aesthetic that owes less to a “make do and mend” mentality of soldiering on, and more to mushin and asobi.