Wednesday, July 31, 2013

Plastic fantastic

Here’s the initial version of a leader I wrote for last week’s Nature.

_____________________________________________

The transition from basic science to practical technology is rarely linear. The common view – that promising discoveries need only patience, hard work and money to shape them into commercial products – holds true only occasionally. Often there are more factors at play: all kinds of technical, economic and social drivers must coincide for the time to be right. So dazzling forecasts fail and fade, but might then re-emerge when the climate is more clement.

That seems to be happening for organic electronics: the use of polymers and other organic molecules as the active materials in information processing. That traditionally insulating plastics could be made to conduct electricity was discovered serendipitously in the late 1960s by Hideki Shirakawa in Tokyo, in the form of silvery films of polyacetylene. The physicist Alan Heeger and the chemist Alan MacDiarmid collaborated with Shirakawa in 1976 to boost the conductivity of this material by doping with iodine, and went on to make a ‘polymer battery’. Other conducting polymers, especially polyaniline, were mooted for all manner of uses, such as antistatic coatings and loudspeaker membranes.

This early work was greeted enthusiastically by some industrial companies, but soon seemed to be going nowhere – the polymers were too unstable and difficult to process, and their properties hard to control and reproduce reliably. That changed in the late 1980s when Richard Friend and coworkers in Cambridge found that poly(para-phenylene vinylene) would not only conduct without doping but could also be stimulated electrically to emit light, enabling the fabrication of polymer light-emitting diodes. The attraction was partly that a polymer’s properties, such as emission colour and solubility, can be fine-tuned by altering its chemistry. Using such substances for making lightweight, flexible devices and circuits, via simple printing and coating techniques rather than the high-tech methods needed for inorganic semiconductor electronics, began to seem possible. The genuine potential of the field was acknowledged when the 2000 Nobel prize for chemistry went to Shirakawa, Heeger and MacDiarmid.

The synthesis of gossamer-thin organic electronic circuits reported by Martin Kaltenbrunner and colleagues in Tokyo (Nature 499, 458-463; 2013) is the latest example of the ingenuity driving this field. Their devices elegantly blend new and old materials and techniques. The substrate is a one-micron-thick plastic foil, while organic small molecules provide the semiconductor for the transistors, other organic molecules and alumina constitute the insulating layers, and the electrodes are ultrathin aluminium. The featherweight plastic films, 27 times lighter than office paper, can be crumpled like paper, and on an elastomeric substrate the circuits can be stretched more than twofold, all without impairing the device performance. Adding a pressure-sensitive rubber layer produces a touch-sensing foil which could serve as an electronic skin for robotics, medical prostheses and sports applications.

Wearable and flexible electronics and optoelectronics have recently taken great strides, propelled in particular by the work of John Rogers’ group at Illinois (D.-H. Kim et al., Ann. Rev. Biomed. Eng. 14, 113-128 (2012)). Such devices can now be printed on or attached directly to human skin, and can be made from materials that biodegrade safely. Especially when coupled to wireless capability, both for powering the devices and for reporting their sensor activity, the possibilities for in situ monitoring of wound care and tissue repair, brain and heart function, and drug delivery are phenomenal; the challenge will be for medical procedures to keep pace with what the technology can offer. In any event, such applications reinforce the fact that organic electronics should not be seen as a competitor to silicon logic but as complementary, taking information processing into areas that silicon will never reach.

At the risk of inflating another premature bubble, these technologies look potentially transformative – more so, on current showing, than the much heralded graphene. The remark by Kaltenbrunner et al. that their circuits are “both virtually unbreakable and imperceptible” says more than perhaps they might have intended. In this regard the new work continues the trend towards the emergence of a smart environment in which all kinds of functionality are invisibly embedded. What happens when packing film (one possible use of the new foldable circuitry), clothing, money, even flesh and blood, are imbued with the ability to receive, process and send information – when more or less any fabric of daily life can be turned, unseen, into a computing and sensing device? Most narratives currently dwell on fears of surveillance or benefits of round-the-clock medical checks and diagnoses. Both might turn out to be warranted, but past experience (with information technology in particular) should teach us that technologies don’t simply get superimposed on the quotidian, but both shape and are shaped by human behaviour. Whether or not we’ll get what’s good for us, it probably won’t be what we expect.

Wednesday, July 24, 2013

Radio DNA

Another cat among the pigeons, perhaps… here is my latest Crucible column for Chemistry World.

______________________________________________________________

It has to rate as one of the most astonishing discoveries of this century, and it came from a Nobel laureate. Yet it was almost entirely ignored. In 2011 Luc Montagnier, who had been awarded the Nobel Prize in medicine three years earlier for his co-discovery of the AIDS virus HIV, reported that he and his coworkers could use the polymerase chain reaction (PCR, the conventional method of amplifying strands of DNA) to synthesize DNA sequences of more than 100 base pairs, without any of the target strands present to template the process [1]. All they needed was water. Water, that is, first subjected to very-low-frequency electromagnetic waves emitted and recorded from solutions of DNA encoding the target sequence. In other words, the information in a DNA strand could be transmitted by its electromagnetic emissions and imprinted on water itself.

Maybe you’re now thinking this work was ignored for good reason, namely that it’s utterly implausible. I agree: it doesn’t even begin to make sense given what we know about the molecular ingredients. But the claims were unambiguous. The authors say they took a 104-base-pair fragment of DNA from HIV (and who knows about that better than Montagnier?) and copied it, reproducibly and with at least 98% fidelity, by adding the PCR ingredients to the irradiated water. If you choose to ignore this, are you saying Montagnier is lying?

What you’re actually saying is that science doesn’t always work as it is ‘supposed’ to, by claims being tested and then accepted or rejected depending on the result. Of course, many trivial claims never get replicated (that’s another story), but really big ones – and they don’t come much bigger than this – are immediately interrogated by other labs, right? That’s what happened with cold fusion, however implausible it seemed. True, some results can’t be replicated without highly specialized kit and expertise – no one has rushed to verify the Higgs boson sighting. But Montagnier and colleagues used nothing more than you’d find in most molecular biology labs worldwide.

So what’s going on? What we’re really seeing tested here are the unwritten social codes of science. Montagnier has long been seen as something of a maverick, but in recent years some have accused him of descending into quackery. Since claiming in 2009 that some DNA emits EM signals [2], he has suggested that such signals can be detected in the blood of children with autism and that this justifies treating autism with antibiotics. He has seemed to suggest that HIV can be defeated with diet and supplements, and commended the notorious ‘memory of water’ proposed by French immunologist Jacques Benveniste [3]. Although he is currently the head of the World Foundation for AIDS Research and Prevention in Paris, his unorthodox views have prompted some leading researchers to question his suitability to lead such projects.

But science judges the results, not the person, right? So let’s look at the paper. Although at face value it makes a simple claim, it is in fact so peppered with oddness that other researchers probably imagine any attempt at replication will be deeply unrewarding. There are hints that the EM emissions come from a baffling and bloody-minded universe: their strength doesn’t correlate with concentration, they seem to appear in some ranges of dilution and then vanish in others, and there is no rhyme or reason to which organisms or sequences produce them and which don’t. That the authors show the signals not as ordinary graphs but as a screenshot adds to the misgivings.

Then there’s the ‘explanation’. Montagnier has teamed up with Italian physicist Emilio Del Giudice and his colleagues, who in 1988 published a “theory of liquid water based on quantum field theory” [4] which proposed that water molecules can form “coherent domains” about 100 nm in size containing “almost free electrons” that can absorb electromagnetic energy and use it to create self-organized dissipative structures. These coherent domains are, however, a quantum putty to be shaped to order, not a theory to be tested. They haven’t yet been clearly detected, nor have they convincingly explained a single problem in chemical physics, but they have been invoked to account for Benveniste’s results and cold fusion, and now they can explain Montagnier’s findings on the basis that the EM signals from DNA can somehow shape the domains to stand in for the DNA itself in the PCR process.

Make of this what you will; the real issue here is that it all looks puzzling, even prejudiced, to outsiders, who understandably cannot fathom why a startling claim by a distinguished scientist is apparently just being brushed aside. Perhaps it might help to stop pretending that science works as the books say it does. Perhaps also, given that Montagnier says his findings are motivating clinical trials to “test new therapeutics” for HIV in sub-Saharan Africa, it might be wise to subject them to more scrutiny after all.

References
1. L. Montagnier et al., J. Phys. Conf. Ser. 306, 012007 (2011).
2. L. Montagnier et al., Interdiscip. Sci. Comput. Life Sci. 1, 81 (2009).
3. E. Davenas et al., Nature 338, 816 (1988).
4. E. Del Giudice, G. Preparata & G. Vitiello, Phys. Rev. Lett. 61, 1085 (1988).

Maxwell's fridge

I haven’t generally been putting up here the pieces I’ve been writing for Physical Review Focus, as they tend to be a bit technical. But as I’ve been writing this and that about Maxwell’s demon elsewhere, I thought I’d post this one. The final version is here.

_______________________________________________________

In 1867 the physicist James Clerk Maxwell described a thought experiment in which the random thermal fluctuations of molecules might be rectified by intelligent manipulation, building up a temperature difference that might be used to do useful work. Now in Physical Review Letters a team at the University of Maryland outline a theoretical scheme by which Maxwell’s nimble-fingered ‘demon’ might be constructed in an autonomous device that in effect uses computation to transfer heat from a cold substance to a hotter one, thereby acting as a refrigerator.

Maxwell believed that his demon might oppose the second law of thermodynamics, which stipulates that the entropy of a closed system must always increase in any process of change. Because this law seems to be statistical – an entropy increase, or increase in disorder, is simply the far more likely outcome – the demon might undermine it, for example by physically reversing the usual scrambling of hot and cold molecules and thereby preventing the diffusion of heat.

Most physicists now agree that such a demon wouldn’t defeat the second law, because of an argument developed in the 1960s by Rolf Landauer [1]. He showed that the cogitation needed to perform the selection would have a compensating entropic cost – specifically, the act of resetting the demon’s memory dissipates a certain minimal amount of heat per bit erased.
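For scale, Landauer’s minimum is tiny. Here is the number at room temperature – a quick sketch using only the standard constants:

```python
import math

# Landauer's bound: erasing one bit at temperature T dissipates at least
# k_B * T * ln 2 of heat.
k_B = 1.380649e-23        # Boltzmann constant, J/K
T = 300.0                 # roughly room temperature, K
print(f"{k_B * T * math.log(2):.2e} J per bit erased")   # ~2.87e-21 J
```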

Despite this understanding, there have been few attempts to postulate an actual physical device that might act as a Maxwell demon. Last year, Christopher Jarzynski and Dibyendu Mandal at Maryland proposed such a ‘minimal model’ of an autonomous device [2]. It consisted of a three-state device (the ‘demon’) that can extract energy from a reservoir of heat and use it to do useful work. The transitions in the demon are linked to the writing of bits into a memory register – a tape recording binary information – which moves past it, according to particular coupling rules.

In collaboration with their colleague Haitao Quan, now at Peking University, Mandal and Jarzynski have now refined their model so that the demon is a two-state device coupled to heat exchange between a hot and a cold reservoir. Again, the operation of the demon is ensured by the coupling rules imposed between its transitions, the reservoirs and the memory, resulting in a mathematically solvable model whose performance depends on its parameters.

The demon can absorb heat from the hot reservoir to reach its excited state, and reverse that process, without altering the memory. But the rules say that energy may only be exchanged with the cold reservoir by coupling to the memory. The demon can absorb heat from the cold reservoir if the incoming bit is a 0, or release it if the bit is a 1. And whenever energy is exchanged with the cold reservoir, the demon reverses the bit, which affects the entropy of the outgoing bit stream. So each 0 allows the chance for energy to move from the cold reservoir into the demon – and potentially then out to the hot reservoir.

The researchers find that the behaviour of the system depends on the temperature gradient and the relative proportions of 1s and 0s in the incoming bit stream. In one range of parameters the device acts as a refrigerator, drawing heat from the cold reservoir while imprinting a memory of this operation as 1s in the outgoing bit stream. In another range it acts as an information eraser: lowering the excess of 0s in the bit stream and thus randomizing this ‘information’, while allowing heat transfer from hot to cold.
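To make those coupling rules concrete, here is a minimal toy simulation in the same spirit – not the authors’ exactly solvable model, and the energy gap, temperatures and acceptance rule below are all illustrative assumptions:

```python
import math
import random

# Toy sketch of a Mandal-Jarzynski-style demon: a two-state system
# (energy gap E, units with k_B = 1) thermalizes freely with the hot
# reservoir, but can exchange E with the cold reservoir only by flipping
# the current tape bit (absorb from cold: 0 -> 1; release to cold: 1 -> 0).
E, T_HOT, T_COLD = 1.0, 2.0, 0.5
P0_IN = 0.95                     # excess of 0s in the incoming bit stream

random.seed(1)
excited = False
q_cold = 0.0                     # net heat drawn OUT of the cold reservoir

for _ in range(200_000):
    bit = 0 if random.random() < P0_IN else 1

    # free exchange with the hot reservoir: thermalize the demon at T_HOT
    excited = random.random() < 1.0 / (1.0 + math.exp(E / T_HOT))

    # gated exchange with the cold reservoir, coupled to the tape bit
    if bit == 0 and not excited and random.random() < math.exp(-E / T_COLD):
        excited, q_cold = True, q_cold + E    # absorb E from cold, write a 1
    elif bit == 1 and excited:
        excited, q_cold = False, q_cold - E   # dump E into cold, write a 0

print(f"net heat pumped out of the cold reservoir: {q_cold:.0f}")
```

With the tape dominated by 0s, the printed number comes out positive: heat is being pumped uphill, paid for by scrambling the orderly tape.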

Jarzynski says that, while the model couples heat flow and information, it doesn’t have Landauer’s condition explicitly built in. Rather, this condition emerges from the dynamics, and so the results provide some support for Landauer’s interpretation.

How might one actually build such a system? “We don’t have a specific physical implementation in mind”, Jarzynski admits, but adds that “we are exploring a fully mechanistic Rube Goldberg-like contraption where the demon and memory are represented by wheels and paddles that rotate about the same axis and interact by bumping into one another.”

Trying to figure out how a physical device might act like Maxwell’s demon is “an important task”, according to Franco Nori of the University of Michigan. “To build such a system in the future would be another story, but this is a very important step in the right direction,” he says.

Although he sees this as “an interesting theoretical model of Maxwell's demon”, Charles Bennett of IBM’s research laboratory in Yorktown Heights, New York, thinks it could be made even simpler. “It’s somewhat unrealistic and unnecessarily complicated to have the tape move at a constant velocity”, he says – the parameter describing the tape speed could be eliminated “by coupling each 0→1 tape transition to a forward step of the tape and each 1→0 transition to a backward step.”

References
1. R. Landauer, IBM J. Res. Dev. 5, 183 (1961).
2. D. Mandal & C. Jarzynski, Proc. Natl Acad. Sci. USA 109, 11641-11645 (2012).

Friday, July 19, 2013

What the bees know


I’ve written a news story for Nature on a new paper claiming that the bees’ honeycomb is made hexagonal by surface tension, rather than the engineering skills of the bees. They just make cylindrical cells, the researchers say, and physics does the rest. This isn’t a new idea, as I point out in the story: D’Arcy Thompson suggested as much, and Darwin suspected it. However, it seems to me to be potentially underestimating the role of the bees. The weird thing about the work is that it essentially freezes the honeycomb in an unfinished state, by smoking out the worker bees, and finds that the incomplete cells are circular in cross-section – but there’s apparently no reason to believe that the bees had done all they were going to do, leaving the rest to surface tension. Who’s to say they wouldn’t have kept shaping the cells if left undisturbed? It may be that the authors are right, but this current work seems to me to be some way from a proof of that. Well, here first is the story…

__________________________________________________________________

Physical forces rather than bees’ ingenuity might create the hexagonal cells.

The perfect hexagonal array of the bees’ honeycomb, admired for millennia as an example of natural pattern formation, owes more to simple physical forces than to the skill of the bees, according to a paper published in the Journal of the Royal Society Interface [1].

Engineer Bhushan Karihaloo of Cardiff University in Wales and his coworkers say that the bees simply make cells of circular cross-section, packed together like a layer of bubbles, and that the wax, softened by the heat of the bees’ bodies, then gets pulled into hexagonal cells by surface tension.

The finding feeds into a long-standing debate about whether the honeycomb is an example of exquisite biological engineering or blind physics.

If identical cells with simple polygonal cross-sections are to pack into a regular geometric array, they can have only one of three forms: triangular, square or hexagonal. Of these, hexagons divide up the space using the least amount of wall area, and thus the least amount of wax.
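That economy is easy to check – a quick sketch using the standard perimeter formula for regular polygons of equal area (shared walls halve the totals in a honeycomb, but the ranking is unchanged):

```python
import math

# For a regular n-gon of area A, perimeter P = 2*sqrt(n*A*tan(pi/n)).
A = 1.0
for n, name in [(3, "triangle"), (4, "square"), (6, "hexagon")]:
    P = 2 * math.sqrt(n * A * math.tan(math.pi / n))
    print(f"{name:8s}: perimeter {P:.3f} for unit area")
# hexagon wins: 3.722 < 4.000 (square) < 4.559 (triangle)
```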

This economy was noted in the fourth century by the mathematician Pappus of Alexandria, who claimed that the bees had “a certain geometrical forethought”. But in the seventeenth century the Danish mathematician Erasmus Bartholin suggested that they don’t need any such foresight, since the hexagons would result automatically from the pressure of each bee trying to make its cell as large as possible, much as the pressure of bubbles packed in a single layer creates a hexagonal foam.

In 1917 the Scottish zoologist D’Arcy Thompson argued that, again by analogy with bubbles, surface tension in the soft wax will pull the cell walls into hexagonal, threefold junctions [2]. A team led by Christian Pirk of the University of Würzburg in Germany showed in 2004 that molten wax poured into the space between a regular hexagonal array of cylindrical rubber bungs will indeed retract into hexagons as it cools and hardens [3].

Karihaloo and colleagues now seem to clinch this argument by showing that bees do initially make cells with a circular cross-section – as Charles Darwin suspected – and that these develop into hexagons by the flow of wax at the junctions where three walls meet.

They interrupted honeybees in the act of making a comb by smoking them out of the hive, and found that the most recently built cells have a circular shape while those just a little older have developed into hexagons. They say the worker bees that make the comb knead and heat the wax with their bodies until it reaches about 45 °C – warm enough to flow like a viscous liquid.

Karihaloo thinks that no one previously thought to look at cells before they are completed “because no one imagined that the internal profile of the cell begins as a circle” – it was just assumed that the final cell shape is the one the bees make. He says they got the idea from experiments on a bunch of circular plastic straws which changed to the hexagonal form when heated [4].

The question is whether there is anything much left for the bees to do, given that they do seem to be expert builders. They can, for example, use their head as a plumb-line to measure the vertical, tilt the cells very slightly up from horizontal to prevent the honey from flowing out, and measure cell wall thicknesses extremely precisely. Might they not continue to play an active role in shaping the circular cells into hexagons, rather than letting surface tension do the job?

Physicist and bubble expert Denis Weaire of Trinity College Dublin in Ireland suspects they might, even though he acknowledges that “surface tension must play a role”.

“I have seen descriptions of bees steadily refining their work by stripping away wax”, he says. “So surely those junctions of cell walls must be crudely assembled then progressively refined, just as a sculptor would do?”

While Karihaloo says “I don't think the bees know how to measure angles”, he admits that further experiments are needed to rule out that possibility.

Weaire adds that “if the bee’s internal temperature is enough to melt wax, the temperature of the hive will always be close to the melting point, so the wax will be close to being fluid. This may be more of a nuisance than an advantage.”

But Karihaloo explains that not all the bees act as 'heaters'. "The ambient temperature inside the comb is just 25 °C", he says. Besides, he adds, the insects strengthen the walls over time by adding recycled cocoon silk to them, creating a kind of composite.

References
1. Karihaloo, B. L., Zhang, K. & Wang, J. J. R. Soc. Interface advance online publication doi:10.1098/rsif.2013.0299 (2013).
2. Thompson, D. W. On Growth and Form (Cambridge University Press, 1917).
3. Pirk, C. W. W., Hepburn, H. R., Radloff, S. E. & Tautz, J. Naturwissenschaften 91, 350–353 (2004).
4. Zhang, K., Zhao, X. W., Duan, H. L., Karihaloo, B. L. & Wang, J. J. Appl. Phys. 109, 084907 (2011).

Now I want to add a few further comments. It seems the authors didn’t know that Darwin had looked extensively at this issue. He felt some pressure to show how the hexagonal hive could have arisen by natural selection. He conducted experiments himself at Down House, and corresponded with bee experts, noting that bees first excavate hemispherical pits in the wax which they gradually work into the cell shapes. There is some fascinating correspondence on this in the link given above, though Darwin never found the evidence he was looking for.

One of the problems with leaving it all to surface tension, however, is what happens when you get an irregular cell, either because the bees make a mistake (as they do) or because edge effects create defects. As Denis Weaire pointed out,

“Bees do make topological mistakes, or are led into them by boundary conditions. Surface tension would entirely destroy their work, because of this, if unchecked! (five-sided cells shrink etc...): there is no equilibrium configuration!”

Another worry that Denis voiced is what happens to the excess wax if the cell walls are thinned and straightened by flow. This does seem to have an explanation: Karihaloo says that wax is not actually removed, it just begins in a somewhat loose, porous state, which gets consolidated.

I also wondered about the cell end caps. The cells in the honeycomb are made in two back-to-back layers, married by a puckered surface made from end caps that consist of three rhombi in a fragment of a rhombic dodecahedron. This turns out – as Denis showed in 1994 (Nature 367, 123) – to be the minimal surface for this configuration. So one might imagine it too could result from surface tension, if the authors’ argument is right. But when I asked about it, Karihaloo said “Pirk et al. have shown that the end caps are not rhombic at all; it is just an optical illusion.” I was surprised by this, and asked Weaire about it – he said this was the first time he’d heard that suggestion, and that he has pictures of natural combs which show that these polygonal end faces are certainly not illusory. Indeed, Darwin and his correspondents mention the rhombi, and those old gents were mighty careful natural historians. So this suggestion seems to be wrong.

Tuesday, July 09, 2013

Preparing for a new second

A bit techie, this one, but I liked the story. It’s a news piece for Nature.

________________________________________________________

A new type of atomic clock could transform the way we measure time.

The international definition of a second of time could be heading for a change, thanks to the demonstration by researchers in France that a new type of ‘atomic clock’ has the required precision and stability.

Jérôme Lodewyck of the Observatoire de Paris and his colleagues have shown that two so-called optical lattice clocks (OLCs) can remain as perfectly in step as the experimental precision can establish [1]. They say that this test of consistency is essential if OLCs are to be used to redefine the second, currently defined according to a different sort of atomic clock.

This is “very beautiful and careful work, which gives grounds for confidence in the optical lattice clock and in optical clocks generally”, says Christopher Oates, a specialist in atomic-clock time standards at the National Institute of Standards and Technology (NIST) in Boulder, Colorado.

Defining the unit of time according to the frequency of electromagnetic radiation emitted from atoms has the attraction that this frequency is fixed by the laws of quantum physics, which dictate the energy states of the atom and thus the energy and frequency of photons of light emitted when the atom switches from one state to the other.

Since 1967, one second has been defined as the duration of 9,192,631,770 oscillations of the microwave radiation absorbed or emitted when a caesium atom jumps between two particular energy states.

The most accurate way to measure this frequency at present is in an atomic fountain, in which a laser beam is used to propel caesium atoms in a gas upwards. Emission from the atoms is probed as they pass twice through a microwave beam – once on the way up, once as they fall back down under gravity.

The time standard for the United States is defined using a caesium atomic-fountain clock called NIST-F1 at NIST. Similar clocks are used for time standards elsewhere in the world, including the Observatoire de Paris.

The caesium fountain clock has an accuracy of about 3 × 10^-16, meaning that it will keep time to within one second over 100 million years. But some newer atomic clocks can do even better. Monitoring emission from individual ionized atoms trapped by an electromagnetic field can supply an accuracy of about 10^-17.
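For the curious, the arithmetic behind that claim is a one-liner – the only assumption is a year of roughly 3.156 × 10^7 seconds:

```python
# Fractional accuracy multiplied by elapsed seconds gives accumulated drift.
accuracy = 3e-16
seconds_per_year = 3.156e7
drift = accuracy * 100e6 * seconds_per_year
print(f"drift over 100 million years: {drift:.2f} s")   # ~0.95 s
```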

The clocks studied by Lodewyck and colleagues are newer still – first demonstrated under a decade ago [2]. And although they can’t yet beat the accuracy of trapped-ion clocks, they have already been shown to be comparable to caesium fountain clocks, and some researchers suspect that they’ll ultimately be the best of the lot.

That’s for two reasons. First, like trapped-ion clocks, they measure the frequency of visible light, which is tens of thousands of times higher than that of microwaves. “Roughly speaking, this means that optical clocks divide a second into many more time intervals than microwave caesium clocks, and so can measure time with a higher precision,” Lodewyck explains.
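To put rough numbers on “tens of thousands” (the strontium transition frequency below is an approximate figure I am assuming, not one taken from the paper):

```python
# Ratio of oscillation frequencies: how many more 'ticks' per second an
# optical clock has available to divide time with.
f_sr = 4.29e14          # Sr-87 optical clock transition, approximate
f_cs = 9_192_631_770    # caesium hyperfine transition, defines the second
print(f"{f_sr / f_cs:,.0f}")   # ~47,000
```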

Secondly, they measure the average emission frequency from several thousand trapped atoms rather than just one, and so the counting statistics are better. The atoms are trapped in a so-called optical lattice, rather like an electromagnetic eggbox for holding atoms.

If OLCs are to succeed, however, it’s essential to show that they are reliable: that one such clock ticks at exactly the same rate as another prepared in an identical way. This is what Lodewyck and colleagues have now shown for the first time. They prepared two optical lattices, each holding about 10,000 atoms of the isotope strontium-87, and showed that the two clocks stay in synchrony to within a precision of at least 1.5 × 10^-16, which is the detection limit of the experiment.

But if the definition of a second is to be switched from the caesium standard to the OLC standard, it’s also necessary to check that the two types of clock are in synchrony. The French team have done that too. They found that their strontium OLCs keep pace with all three of the caesium clocks at the Observatoire, to an accuracy limited only by the fundamental limit on the caesium clocks themselves.

“These sorts of comparisons have historically been critical in laying the groundwork for redefinitions of fundamental units”, says Oates.

Accurate timing is crucial to satellite positioning systems such as GPS, which is why GPS satellites have onboard atomic clocks. But their accuracy is currently limited more by other factors, such as air turbulence, than by the performance of their clocks. There are, however, other good reasons for going beyond the already astonishing accuracy of caesium clocks.

For example, in astronomy, if the arrival times of light from space could be compared extremely accurately for different places on the Earth’s surface, this could allow the position of the light’s source to be pinpointed very precisely – with a resolution that, as with current interferometric radio telescope networks, is “equivalent to a continent-sized telescope”, says Lodewyck.

Better time measurement would also enable high-precision experiments in fundamental physics: for example, to see if some of nature’s fundamental constants change over time, as some speculative theories beyond the Standard Model of physics predict.

Before switching to a new standard second, says Lodewyck, there are more hurdles to be jumped. Optical clocks are needed that can run constantly, and there must be better ways to compare the clocks operating in different institutes.

“This measurement is a significant advance towards a new definition of the second”, says Uwe Sterr of the Physikalisch-Technische Bundesanstalt in Braunschweig, Germany, which also operates an atomic-clock standard. “But to agree on a new standard for time the pros and cons of the different candidates that are in the play needs to be evaluated in more detail”, he adds.

“It’s not yet decided which atomic species nor which kind of optical clocks will be chosen as the next definition of the SI second”, Lodewyck concurs. “But we believe that strontium OLCs are a strong contender.”

References
1. Le Targat, R. et al., Nature Communications 4, 2109 (2013).
2. Takamoto, M., Hong, F. -L., Higashi, R. & Katori, H. Nature 435, 321–324 (2005).

Gangs of New York

Here’s my latest piece for BBC Future, pre-editing.

_______________________________________________________

One of the big challenges in fighting organized crime is precisely that it is organized. It is run like a business, sometimes literally, with chains of command and responsibility, different specialized ‘departments’, recruitment initiatives and opportunities for collaboration and trade. This structure can make crime syndicates and gangs highly responsive and adaptable to attempts at disruption by law-enforcement services.

That’s why police forces are keen to discover how these organizations are arranged: to map the networks that link individual members. This structure is quite fluid and informal compared to most legitimate businesses, but it’s not random. In fact, violent street gangs seem to be organized along rather similar lines to insurgent groups that stage armed resistance to political authority, such as guerrilla forces in areas of civil war – for instance, in being affiliations of cells, each with its own leaders. It’s for this reason that some law-enforcement agencies are hoping to learn from military research. A team at the US Military Academy at West Point in New York has just released details of a software package it has developed to aid intelligence-gathering by police dealing with street gangs. The program, called ORCA (Organization, Relationship, and Contact Analyzer), can use real-world data acquired from arrests and questioning of suspects to deduce the network structure of the gangs.

ORCA can figure out the likely affiliations of individuals who will not admit to being members of any specific gang, as well as the sub-structure of gangs (the ‘gang ecosystem’) and the identity of particularly influential members, who tend to dictate the behaviour of others.

There are many reasons why this sort of information would be important to the police. The ecosystem structure of a gang can reveal how it operates. For example, many gangs fund themselves through drug dealing, which tends to happen through the formation of “corner crews”: small groups that congregate on a particular street corner to sell drugs. And having some knowledge of the links and affiliations between different gangs can highlight dangers that call for more focused policing. If a gang perpetrates some violent action against a rival gang, police will often monitor the rival more closely because of the likelihood of retaliation. But gangs know this, and so the rivals might instead ask an allied gang to carry out the reprisal. So police need to be aware of such alliances.

The roles of highly influential members of a social network are familiar from other studies of such networks – for example, in viral marketing and the epidemiology of infectious diseases. These individuals typically have a larger than average number of links to others, and their choices and actions are quickly adopted by others. An influential gang member who is prone to risky, radicalizing or especially violent behaviour can induce others to follow suit – so it can be important to identify these individuals and perhaps to monitor them more closely.

In developing ORCA, West Point graduate Paulo Shakarian, who has a doctorate in computer sciences and has worked in the past as an adviser to the Iraqi National Police, and his coworkers have drawn on the large literature that has grown over the past decade on the mapping of social networks. These studies have shown that the way a network operates – how information and influence spread through it, for example – depends crucially on what mathematicians call its topology: the shape of the links between people. For example, spreading happens quite differently on a grid (like the street network of Manhattan, where there are many alternative routes between two points), or a tree (where points are connected by the repeated splitting of branches), or a ‘small world’ network (where there are generally many shortcuts so that any point can be reached from any other in relatively few jumps). Many studies in this new mathematical science of networks have been concerned to deduce the community structure of the network: how it can be decomposed into smaller clusters that are highly connected internally but more sparsely linked to other modules. It’s this kind of analysis that enables ORCA to figure out the ecosystems of gangs.
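To give a feel for what such community detection looks like in practice – this is not ORCA’s own code, and the little network below is invented – here is a sketch using the networkx library’s greedy modularity method:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical co-arrest network: two tight clusters joined by one bridge.
G = nx.Graph([
    ("Al", "Bo"), ("Bo", "Cy"), ("Al", "Cy"),   # cluster 1
    ("Di", "Ed"), ("Ed", "Fi"), ("Di", "Fi"),   # cluster 2
    ("Cy", "Di"),                               # sparse link between them
])

# Greedy modularity maximization splits the network into modules that are
# densely linked inside and sparsely linked to the rest.
for i, community in enumerate(greedy_modularity_communities(G)):
    print(f"module {i}: {sorted(community)}")
```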

One of the features of ORCA is an algorithm – a set of rules – that assigns each member of the network a probability of belonging to a particular gang. If an individual admits to this, the assignment can be awarded 100% probability. But if he will not, then any known associations he has with other individuals can be used to calculate a probable ‘degree of membership’. The program can also identify ‘connectors’ who are trusted by different gangs to mediate liaisons between them, for example to broker deals that allow one gang to conduct drug sales on the territory of another.
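Here is a hedged sketch of one simple way such a degree of membership could be scored – an illustration of the idea, not ORCA’s published algorithm, with all names invented:

```python
# Known admissions (person -> gang) and the associates of non-admitters.
admitted = {"Ana": "G1", "Cal": "G1", "Dee": "G2"}
contacts = {"Xav": ["Ana", "Cal", "Dee"], "Yve": ["Dee"]}

# Score each silent individual by the share of their known associates who
# have admitted membership of each gang.
for person, assoc in contacts.items():
    gangs = [admitted[a] for a in assoc if a in admitted]
    for g in sorted(set(gangs)):
        print(f"{person}: P(member of {g}) = {gangs.count(g) / len(assoc):.2f}")
```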

Shakarian and colleagues tested ORCA using police data on almost 1500 individuals belonging to 18 gangs, collected from 5418 arrests in a single district over three years. These gangs were known to be racially segregated, and the police told the West Point team that one racial group was known to form more centrally organized gang structures than the other. ORCA confirmed that the latter, more decentralized group tended to be composed of more small modules, rather than larger, branched networks.

Although the West Point team can’t disclose details, they say that they are working with a “major metropolitan police department” to test their program and to integrate it with information on the geographical distributions of gangs and how they change over time. One can’t help suspecting that the developers of games such as Grand Theft Auto, which unfold in a complex netherworld of organized crime gangs, will also be taking an interest, to improve the realism of their fictional scenarios.

Reference: D. Paulo et al., preprint http://www.arxiv.org/abs/1306.6834 (2013).

Friday, July 05, 2013

Turning pearls


Here’s the previous fortnightly piece for BBC Future. That published today is coming up soon.

_________________________________________________________

Of all nature’s defence mechanisms, molluscs surely have the most stunning. If a foreign particle such as an abrasive sand grain or a parasite gets inside the soft body of a mollusc – most pearls are made by oysters, though clams and mussels will make them too – the organism coats it in nacre (mother of pearl), building up a smooth blob of this hard iridescent material. The mollusc is, of course, oblivious to the fact that this protective capsule is so gorgeous. Pearls can be white, grey, black, red, blue, green or yellow, and their attraction for humans has led to traditions of pearl-diving that are thousands of years old. Today pearls are harvested in oyster farms in the Indian Ocean, East Asia and all across the Pacific, where pearl production is stimulated artificially by inserting round beads into the molluscs to serve as a seed.

Yet in spite of the commercial value of pearl production, the formation of pearls is still imperfectly understood. Only the most highly prized pearls are perfectly spherical. Many have other shapes: elongated and ovoid, say, or the teardrop shape that works well for earrings. Some, called baroque pearls, are irregular, like blobs of solder pinched off at one end into a squiggly tail. It’s common for pearls to adopt a shape called a solid of revolution, roughly round or egg-shaped but often with bands and rings running around them latitudinally, like wooden beads or bedknobs turned on a lathe. In other words, the pearl has perfect ‘rotational symmetry’: it looks the same when rotated by any amount on its axis. When you think about it, that’s a truly odd shape to account for.

It’s recently become clear that pearls really are turned. Pearl farmers have long suspected that the pearl might rotate as it grows within the pouch that holds it inside the soft ‘mantle’ tissue of the mollusc. In 2005 that was confirmed by a report published in an obscure French-language ‘journal of perliculture’, which stated that a pearl typically rotates once every 20 days or so. This would explain the rotational symmetry: any differences in growth rate along the axis of rotation get copied around the entire circumference.

But what makes a pearl turn? Julyan Cartwright, who works for the Spanish Research Council (CSIC), and his colleagues Antonio Checa of the University of Granada and Marthe Rousseau of the CNRS Pharmacologie et Ingénierie Articulaires in Vandoeuvre les Nancy, France, have now come up with a possible explanation.

Nacre is an astonishing material in its own right. It consists mostly of aragonite, a form of calcium carbonate (the mineral fabric of chalk), which is laid down here as microscopic slabs stacked in layers and ‘glued’ with softer organic membranes of protein and chitin (the main component of the insect cuticle and shrimp shell). This composite structure, with hard layers weakly bonded together, makes nacre extremely tough and crack-resistant, which is why materials scientists seek to mimic its microstructure in artificial composites. The layered structure also reflects light in a manner that creates interference of the light waves, producing the iridescence of mother-of-pearl.

The slabs of aragonite are made from chemical ingredients secreted by the same kind of cells responsible for making the mollusc’s shell. Several layers grow at the same time, creating terraces that can be seen on a pearl’s surface when inspected under the microscope.

Cartwright and colleagues think that these terraces hold the key to a pearl’s rotation. They say that, as new molecules and ions (whether of calcium and carbonate, or chitin or protein) stick to the step of a terrace, they release energy which warms up the surface. At the same time, molecules of water in the surrounding fluid bounce off the surface, and can pick up energy as they do. The net result is that, because of the conservation of momentum, the step edge recoils: the surface receives a little push.

If terrace steps on the surface were just oriented randomly across the pearl, this push would average out to zero. But for pearls with a solid-of-revolution shape, the terraces have been found to be arrayed in parallel like lines of longitude on a globe, creating a ratchet-like profile around the circumference of the pearl. This ratchet shape imparts a preferred direction to the little impulses the pearl receives from molecular impacts on the vertical faces of the steps, causing the growing pearl to rotate. The researchers’ rough estimate of the size of this force during growth of a typical pearl shows that it should produce a rotation rate more or less equal to that observed.

In other words, the pearl can become a kind of ratchet that, by virtue of the unsymmetrical step profile of its surface, can convert random molecular motions into rotation in one direction.

The researchers can also offer an explanation for where the ratchet profile comes from in the first place. If by chance a growing pearl starts to turn in a particular direction, feedbacks in the complex crystallization process on the pearl surface will cause the step edges to line up longitudinally (that is, perpendicular to the rotation), creating the ratchet that then sustains the rotation.

The researchers admit that there are still gaps to be filled in their argument, but they say that the idea might be applied to make little machines that will likewise rotate spontaneously, powered only by ambient heat. But don’t worry: they haven’t invented perpetual motion. The rotation is ultimately powered by the heat released during the chemical process of crystallization, and it will stop when there is nothing left to crystallize – when the ‘fuel’ runs out.

Reference: J. H. E. Cartwright, A. G. Checa & M. Rousseau, Langmuir, advance online publication doi:10.1021/la40142021 (2013).

More on cursive

Only when clearing out my old magazines did I notice another comment in Prospect (May issue) on my article on cursive writing. There Katy Peters says:

“[Cursive] encourages children to allow their writing to follow the flow of thought more easily… My children [who were taught the usual print and cursive] have not been taught two different systems of writing; they have been taught a single method that allows them to commit thoughts to paper.”

So cursive helps a child’s train of thought to flow better than does print? I can see the logic in that: joined up writing leads to joined up thinking, right? But, Ms Peters, what about the fact that children find spelling harder with cursive because they find it more difficult to keep track of words as being composed of a discrete sequence of letters?

How do I know that’s the case, you ask? Well, I don’t. I just made it up. But it sounds kind of plausible, doesn’t it, so I figure it is on a par with Ms Peters’ view. I should say that, because Prospect letters are kept short and don’t allow references, I can’t entirely rule out the possibility that Ms Peters is quoting the findings of an academic study. But somehow I strongly doubt that. If you want to allude to actual research but aren’t permitted the citation itself, you can do that, as I did in my original piece (though that didn’t stop some from asking why there were no references). There’s no sign that Ms Peters did so. She’s simply saying something that sounds like it might be true.

What is patently untrue is that her children were taught “a single method” of writing. No, they really weren’t. It was abundantly evident to me that my daughter was very clearly being taught two systems – as she too knew very well, stating explicitly when she was choosing to use one and when the other. Cursive was a method she had to learn afresh. So let’s not just make stuff up because we want to believe it.

In any event, I wasn’t saying that cursive is inherently absurd, but rather, that there’s no good evidence that it has any advantages (I reserve judgement in the case of some children with dyslexia). The responses to my article have been very revealing about the way folks reason on things about which they have strong views. They don’t really want to know what academic studies show, and if confronted with such studies, they find ways to ignore or dismiss them. No, people simply want to find arguments for holding on to their beliefs. It doesn’t matter if these arguments are patently absurd – when I told the audience at the Hay Festival how several people cited the “grandmother’s letters in the attic” argument, they laughed, reassuring me that this was as transparently facile as I’d always thought.

No, people prefer anecdote to scientific study. (“But cursive is quicker for me.” Well, big surprise – you stopped printing when you were six, because you were told it wasn’t “grown-up”, and so you’ve scarcely practiced it since.) This doesn’t mean that such folk are stupid; they’re simply behaving as we seem predisposed to do, which is not in an evidence-based way. I’m sure I regularly do this too.

This is one reason why scientists who insist that we must better inform the public on issues such as climate change and evolution are only getting half the point. Better information is good, but it isn’t necessarily going to win the day, because we are ingenious at finding arguments that support our preconceptions, and ignoring evidence that doesn’t. Which is why I must try to remain open to the possibility that there really is evidence out there for why teaching children to print and then to write in cursive is a sensible way to teach. I was hoping that, if it exists, my article would bring it out into the open. It hasn’t yet.

Thursday, June 20, 2013

Gene machines

I have a piece in the July issue of Prospect on DNA nanotechnology. This is the pre-edited version.

_______________________________________________________________

“It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material”. The arch remark that concluded James Watson and Francis Crick’s paper in Nature on the structure of DNA, published 60 years ago, anticipated the entire basis of modern genetics. The structure they postulated is both iconic and beautiful: two strands of conjoined molecular building blocks, entwined in a double helix. The twin strands are zipped together by chemical bonds that rely on a perfect, ‘complementary’ match between their sequences of building blocks, and these sequences encode genetic information that is passed on when the molecule is replicated.

Yet although Watson and Crick were undoubtedly right to depict DNA as a kind of replicating molecular database, the beautiful elegance of their vision of genetics – and in a sense, of the whole of biology and evolution – as the read-out of a set of digital instructions on DNA, basically using chemistry for computation, is now looking too simplistic, even misleading. It is no longer clear that DNA is the ultimate focus of all molecular processes controlling the development and evolution of organisms: it is an essential but incomplete database, more an aide-memoire than a blueprint. Watson and Crick’s picture of a molecule that can be programmed to zip up only with the right partner is finding expression in its purest and most satisfying form not in biology – which is always messier than we imagine – but in the field of nanotechnology, which is concerned with engineering matter at the scale of nanometres (millionths of a millimetre) – the dimensions of molecules.

A key motivation for nanotechnology is the miniaturization of transistors and other devices in microelectronic circuitry: they are now so small that conventional methods of carving and shaping materials are stretched to the limits of their finesse. To replace such ‘top-down’ fabrication with ‘bottom-up’, nanotechnologists need to be able to control exactly how atoms and molecules stick together, and perhaps to dictate their movements.

DNA might be the answer. Chemists have now created molecular machines from bespoke pieces of DNA that can move and walk along surfaces. They have made molecular-sized cubes and meshes, and have figured out how to persuade DNA strands to fold up into almost any shape imaginable, including Chinese characters and maps of the world smaller than a single virus. They are devising DNA computers that solve problems mechanically, not unlike an abacus, by the patching together of little ‘sticky’ tiles. They are using DNA tagging to hitch other molecules and tiny particles into unions that would otherwise be extremely difficult to arrange, enabling the chemical synthesis of new materials and devices. In short, they are finding DNA to be the ideal nanotechnological construction material, limitlessly malleable and capable of being programmed to assemble itself into structures with a precision and complexity otherwise unattainable.

Although this research places DNA in roles quite unlike those it occupies in living cells, it all comes from direct application of Watson and Crick’s insight. A single strand of DNA is composed of four types of molecule, strung together like beads on a thread. Each building block contains a unit called a base which dangles from the backbone. There are four kinds of base, whose chemical names are shortened to the labels A, T, C and G. The bases can stick to one another, but in the double helix they tolerate only one kind of partner: A pairs with T, and C with G. This means that the sequence of bases on one strand exactly complements that on the other, and the pairs of bases provide the zip that holds the strands together.

So a DNA strand will only pair up securely with another if their base sequences are complementary. If there are mismatches of bases along the double helix, the resulting bulges or distortions make the double strand prone to falling apart. This pickiness about base-pairing means that a DNA strand can find the right partner from a mixture containing many different sequences.
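That pickiness is easy to caricature in a few lines of code – a toy illustration of the base-pairing rule, nothing more:

```python
# Watson-Crick pairing: A-T and C-G. Two strands zip up securely only if
# every position matches its complement.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand):
    """The unique sequence that pairs perfectly with `strand`."""
    return "".join(PAIR[base] for base in strand)

probe = "ATGCCGTA"
print(complement(probe))                 # TACGGCAT
print(complement("TACGGCAT") == probe)   # True: pairing is mutual
```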

Chemical methods for making artificial DNA, first developed in the 1970s, have now reached the point at which strands containing millions of A, T, G and C bases can be assembled in any sequence you want. These techniques, developed for genetic engineering and biotechnology, are now used by nanotechnologists to create DNA strands designed to assemble themselves into exotic shapes.

The potential of the approach was demonstrated in the early 1990s by chemist Nadrian Seeman of New York University and his collaborators. They created DNA strands designed such that, when they are mixed together, they twist around one another not in a single helical coil but to make the struts of tiny, cube-shaped cages. No one had any particular use for a DNA cube; Seeman was demonstrating a proof of principle, showing that a molecular shape that would be extremely hard to fashion using conventional chemistry could be engineered by figuring out how to program the components to lace themselves together spontaneously.



Although regarded for some years as little more than a clever curiosity, Seeman’s work was visionary. It showed a way to tackle nanotechnology’s challenge of building very small objects from the bottom up, starting with individual atoms and molecules. If DNA is the construction fabric, the base sequence can provide the assembly instructions: unlike most molecules, DNA will do what it is told.

DNA origami could, for instance, provide scaffolding on which electronic components are arranged. One might tag the components with strands that pair up with a particular location on a DNA scaffold with the complementary sequence: in effect, an instruction saying “stick here”. Researchers have worked out how to program DNA strands to weave themselves into webs and grids, like a chicken-wire mesh, on which other molecules or objects can be precisely attached. Last February a team at Marshall University in West Virginia showed that giant molecules called carbon nanotubes – nanometre-scale tubes of carbon which conduct electricity and have been proposed as ultrasmall electronic devices – can be arranged in evenly spaced, parallel pairs along a ribbon made by DNA origami. The carbon nanotubes were wrapped with single-stranded DNA, which lashed them onto the ribbon at the designated sites.

The same approach of DNA tagging could be used to assemble complicated polymers (long chains of linked-up molecules) and other complex molecules piece by piece, much as, in living cells, DNA’s cousin RNA uses genetically encoded information to direct the formation of protein molecules, tagging and assembling the amino-acid building blocks in the right order.

The astonishing versatility of DNA origami was revealed in 2006 when Paul Rothemund at the California Institute of Technology in Pasadena unveiled a new scheme for determining the way it folds. His approach was to make a single long strand programmed to fold back on itself in hairpin-like turns that create a two-dimensional shape, with the folds pinned in place by ‘staples’ made from short DNA strands with appropriate complementary sequences.



Rothemund developed a computer algorithm that could work out the sequence and stapling needed to define any folding pattern, and showed experimental examples ranging from smiley faces and stars to a map of the world about a hundred nanometres across (a scale of 1:200,000,000,000,000). These complex shapes could take several days to fold up properly, allowing all kinks and mistakes to be ironed out, but researchers in Germany reported last December that each shape has an optimal folding temperature (typically around 50-60 °C) at which folding takes just a few minutes – a speed-up that could be vital for applications.



Last March, Hao Yan of Arizona State University took the complexity of DNA origami to a new level. He showed how the design principles pioneered by Seeman and Rothemund can be tweaked to make curved shapes in two and three dimensions, such as hollow spheres just tens of nanometres wide. Meanwhile, in 2009 a Danish team saw a way to put DNA cubes to use: they made larger versions than Seeman’s, about 30 nanometres across, with lids that could be opened and closed with a ‘gene key’ – a potential way to store drug molecules until an appropriate genetic signal releases them for action.



As aficionados of Lego and Meccano know, once you have a construction kit the temptation is irresistible to give it moving parts: to add motors. Molecular-scale motors are well-known in biology: they make muscles contract and allow bacteria to swim. These biological motors are made of protein, but researchers have figured out how to produce controlled movement in artificial DNA assemblies too. One approach, championed by Bernie Yurke of Bell Laboratories in New Jersey and Andrew Turberfield at the University of Oxford, is to make a DNA ‘pincer’ that closes when ‘fueled’ with a complementary strand that sticks to the arms and pulls them together. A second ‘fuel strand’ strips away the first and opens the arms wide again. Using similar principles, Turberfield and Seeman have made two-legged ‘DNA walkers’ that stride step by step along DNA tracks, while a ‘DNA robot’ devised by Turberfield and colleagues can negotiate a particular path through a network of such tracks, directed by fuel strands that prompt a right- or left-hand turn at branching points.

They and others are working out how to implement the principles of DNA self-assembly to do computing. For example, pieces of folded-up DNA representing binary 1s and 0s can be programmed to stick together like arrays of tiles to encode information, and can be shuffled around to carry out calculations – a sort of mechanical, abacus-like computer at the molecular scale.
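For a flavour of what tile-based computation can mean, here is a sketch – not any specific published tile set – of the kind of XOR rule DNA tiles have been used to compute, which grows a Sierpinski-triangle pattern as the assembly proceeds row by row:

```python
# Each tile in a new row encodes the XOR of its upper-left neighbour and
# the tile above it (Pascal's triangle mod 2 -> a Sierpinski pattern).
row = [1, 0, 0, 0, 0, 0, 0, 0]
for _ in range(8):
    print("".join("#" if bit else "." for bit in row))
    row = [left ^ mid for left, mid in zip([0] + row[:-1], row)]
```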

Underlying all this work is Watson and Crick’s comment about DNA replication. There is the tantalizing – some might say scary – possibility that DNA structures and machines could be programmed not only to self-assemble but to copy themselves. It’s not outrageous to imagine at least some products of DNA nanotechnology acquiring this life-like ability to reproduce, and perhaps to mutate into better forms. Right now such speculations recede rapidly into science fiction – but then, no one guessed 60 years ago where the secrets of DNA self-assembly would take us today.

Friday, June 14, 2013

Nasty, brutish and short

John Gray once said to me, wary that we might find consensus on the topic we were discussing, that “there can be too much agreement”. That is the perpetual fear of the contrarian, I suppose. But I find myself ever more in agreement with him nonetheless. His splendid demolition of the mythical Enlightenment in this week’s New Statesman (not online, it seems) is a case in point, which prompts me to stick up here my notes from a panel discussion (“Nasty, Brutish and Short”) at the How The Light Gets In festival at Hay a couple of weeks ago. They asked for a critique of the Draper-White view; I was happy to oblige.

___________________________________________________________

I’ve been trying to parse the title of this discussion ever since I saw it. The blurb says “The Enlightenment taught us to believe in the optimistic values of humanism, truth and progress” – but of course the title, which sounds a much more pessimistic note, comes from Thomas Hobbes’ Leviathan, and yet Hobbes too is very much a part of the early Enlightenment. You might recall that the phrase was Hobbes’ description of life under what he called the State of Nature: the way people live if left to their own devices, without any overarching authority to temper their instincts to exploit one another.

That scenario established the motivation for Hobbes’ attempt to deduce the most reliable way to produce a stable society. And what marks out Hobbes’ book as a key product of the Enlightenment is that he tried to develop his argument not, as previous political philosophies going back to Plato had done, according to preconceptions and prejudices, but according to strict, quasi-mathematical logic. Hobbes’ Commonwealth is a Newtonian one – or rather, to avoid being anachronistic, a Galilean one, because he attempted to generalize his reasoning from Galileo’s law of motion. This was to be a Commonwealth governed by reason. And let me remind you that what this reason led Hobbes to conclude is that the best form of government is a dictatorship.

Now of course, this sort of exercise depends crucially on what you assume about human nature from the outset. If, like Hobbes, you see people as basically selfish and acquisitive, you’re likely to end up concluding that those instincts have to be curbed by drastic measures. If you believe, like John Locke, that humankind’s violent instincts are already curbed by an intrinsic faculty of reason, then it becomes possible to imagine some kind of more liberal, communal form of self-government – although of course Locke then argued that state authority is needed to safeguard the private property that individuals accrue from their efforts.

Perhaps the most perceptive view was that of Rousseau, who argued in effect that there is no need for some inbuilt form of inhibition to prevent people acting anti-socially, because they will see that it is in their best interests to cooperate. That’s why agreeing to abide by a rule of law administered by a government is not, as in Hobbes’ case, an abdication of personal freedom, but something that people will choose freely: it is the citizen’s part of the social contract, while the government is bound by this contract to act with justice and restraint. This is, in effect, precisely the kind of emergence of cooperation that is found in modern game theory.

My point here is that reasoning about governance during the Enlightenment could lead to all kinds of conclusions, depending on your assumptions. That’s just one illustration of the fact that the Enlightenment doesn’t have anything clear to say about what people are like or how communities and nations should be run. In this way and in many others, the Enlightenment has no message for us – it was too diverse, but more importantly, it was much too immersed in the preoccupations of its times, just like any other period of history. This is one reason why I get so frustrated about the way the Enlightenment is used today as a kind of shorthand for a particular vision of humanity and society. What is most annoying of all is that that vision so often has very little connection with the Enlightenment itself, but is a modern construct. Most often, when people today talk about Enlightenment values, they are probably arguing in favour of a secular, tolerant liberal democracy in which scientific reason is afforded a special status in decision-making. I happen to be one of those people who rather likes the idea of a state of that kind, and perhaps it is for this reason that I wish others would stop trying to yoke it to the false idol of some kind of imaginary Enlightenment.

To state the bleedin’ obvious, there were no secular liberal democracies in the modern sense in eighteenth-century Europe. And the heroes of the Enlightenment had no intention of introducing them. Take Voltaire, one of the icons of the Enlightenment. He had some attractive ideas about religious tolerance and separation of church and state. But he was representative of such thinkers in opposing any idea that reason should become a universal basis for thought. It was grand for the ruling classes, but far too dangerous to advocate for the lower orders, who needed to be kept in ignorance for the sake of the social order. Here’s what he said about that: “the rabble… are not worthy of being enlightened and are apt for every yoke”.

What about religion? Let’s first of all dispose of the idea that the Enlightenment was strongly secular. Atheism was very rare, and condemned by almost all philosophers as a danger to social stability. Rousseau calls for religious tolerance, but not for atheists, who should be banished from the state because their lack of fear of divine punishment means that they can’t be trusted to obey the laws. And even people who affirm the religious dogmas of the state but then act as if they don’t believe them should be put to death.

Voltaire has been said to be a deist, which means that he believed in a God whose existence can be deduced by reason rather than revelation, and who made the world according to rational principles. According to deists, God created the world but then left it alone – he wasn’t constantly intervening to produce miracles. It’s sometimes implied that Enlightenment deism was the first step towards secularism. But contrary to common assertions, there wasn’t any widespread deist movement in Europe at that time. And again, even ideas like this had to be confined to the better classes: the message of the church should be kept simple for the lower orders, so that they didn’t get confused. Voltaire said that complex ideas such as deism are suited only “among the well-bred, among those who wish to think.”

Enough Enlightenment-bashing, perhaps. But why, then, do we have this myth of what these people thought? Partly that comes from the source of most of our historical myths, which is Victorian scholarship. The simple idea that the Enlightenment was some great Age of Reason is now rejected by most historians, but the popular conception is still caught up with a polemical view developed in particular by two nineteenth-century Americans, John William Draper and Andrew Dickson White. Draper was a scientist who decided that scientific principles could be applied to history, and his 1862 book History of the Intellectual Development of Europe was a classic example of Whiggish history in which humankind makes a long journey out of ignorance and superstition, through an Age of Faith, into a modern Age of Reason. But where we really enter the battleground is with Draper’s 1874 book History of the Conflict between Religion and Science, in which we get the stereotypical picture of science having to struggle against the blinkered dogmatism of faith – or rather, because Draper’s main target was actually Catholicism, against the views of Rome, because Protestantism was largely exonerated. White, who co-founded Cornell University, gave much the same story in his 1896 book A History of the Warfare of Science with Theology in Christendom. It’s books like this that gave us the simplistic views on the persecution of Galileo that get endlessly recycled today, as well as myths such as the martyrdom of Giordano Bruno for his belief in the Copernican system. (Bruno was burnt at the stake, but not for that reason.)

The so-called “conflict thesis” of Draper and White has been discredited now, but it still forms a part of the popular view of the Enlightenment as the precursor to secular modernity and to the triumph of science and reason over religious dogma.

But why, if these things are so lacking in historical support, do intelligent people still invoke the Enlightenment trope today whenever they fear that irrational forces are threatening to undermine science? Well, I guess we all know that our critical standards tend to plummet when we encounter ideas that confirm our preconceptions. But it’s more than this. It is one thing to argue for how we would prefer things to be, but far more effective to suggest that things were once like that, and that this wonderful state of affairs is now being undermined by ignorant and barbaric hordes. It’s the powerful image of the Golden Age, and the rhetoric of a call to arms to defend all that is precious to us. What seems so regrettable and ironic is that the casualty here is truth, specifically the historical truth, which of course is always messy and complex and hard to put into service to defend particular ideas.

Should we be optimistic or pessimistic about human nature? Well – big news! – we should be both, and that’s what history really shows us. And if we want to find ways of encouraging the best of our natures and minimizing the worst, we need to start with the here and now, and not by appeal to some imagined set of values that we have chosen to impose on history.

The first glaze

Here’s my latest piece for BBC Future. And this seems an opportune place to advertise Jo Marchant’s latest book The Shadow King (Da Capo), on Tutankhamun’s mummy. It looks set to cover some fascinating material, and is published at the end of June.

_____________________________________________________________

In the absence of anything like real science to guide them, most useful technologies in the ancient world were probably discovered by chance. But that doesn’t seem to bode well for understanding how, when and where these often transformative discoveries took place. Can we ever hope to know how, say, the Stone Age became the Bronze Age became the Iron Age?

Modern archaeologists are an optimistic and inventive lot, however. They figure that, even if the details are buried in the sands of time, we can make some good guesses by trying to reconstruct what the ancients were capable of, using the techniques and materials of the time. Researchers have, for example, built copies of ancient iron- and glass-making furnaces to figure out whether descriptions and recipes from those times really work.

One of the latest efforts in this field of experimental archaeology now proposes that the production of glazed stones and ceramics – an innovation that profoundly affected trade across the globe – could have been made possible by the natural saltiness of cow dung: a property that makes it the vital ingredient in a recipe assembled by serendipity.

The earliest glazes, dating from the late fifth millennium BC and found in the Near East, Egypt and the Indus Valley, were used for coating natural stones made from minerals such as quartz and soapstone (talc). As the technology advanced, the stones were often exquisitely carved before being coated with a blue copper-based glaze to make objects now known as faience. By the second millennium BC Egyptian faience was being traded throughout Europe.

Because these copper glazes appear during the so-called Chalcolithic period – the ‘Copper Age’ that preceded the Bronze Age – it has long been thought that they were discovered as an offshoot of the smelting of copper ores such as malachite to make the metal. The glazes are forms of copper silicate, made as copper combines with the silicate minerals in the high temperature of a kiln. These compounds can range from green (like malachite itself, a kind of copper carbonate) through turquoise to rich blue, depending on how much salt (more specifically, how much chloride) is incorporated into the mix: the more of it, the greener the glaze.

Sometimes these copper glazes are crystalline, with regularly ordered arrays of atoms. But they can also be glassy, meaning that the atoms are rather disordered. In fact, it seems likely that copper smelting stimulated not only glazing but the production of glass itself, as well as the pigment known as Egyptian blue, which is a ground-up copper silicate glass. In other words, a whole cluster of valuable technologies might share a common root in the making of copper metal.

The basic idea, put forward by Egyptologists such as the Englishman William Flinders Petrie in the early twentieth century, was that other materials might have found their way by accident into copper-smelting kilns and been transformed in the heat. Glass, for instance, is little more than melted sand (mostly fine-grained quartz). To melt pure sand requires temperatures higher than ancient kilns could achieve, but the melting point is lowered if there is some alkaline substance present. This could have been provided by wood ash, although some later recipes in the Middle East used the mineral natron (sodium carbonate).

How exactly might a blue glaze have been made this way? Another early Egyptologist, Alfred Lucas, who worked with Howard Carter, proposed that perhaps a piece of quartz used to grind up malachite to make eye-paint found its way into a kiln, where the heat and alkali could have converted residues of the copper mineral into a blue film. But that would make the discovery independent of copper manufacture itself, and it’s not obvious how a grinding stone could slip into a kiln. Yet why else should a copper compound come to be on the surface of a lump of quartz?

Last year, Mehran Matin and his daughter Moujan Matin, working in the research laboratory of the Shex Porcelain Company in Saveh, Iran, showed that these materials didn’t need to be in physical contact at all [M. Matin & M. Matin, J. Archaeological Science 39, 763 (2012)]. A copper compound such as copper scale – the corrosion product of copper metal, typically containing copper hydroxide – can be vaporized in a kiln and then, in the presence of vaporized alkali oxides, be deposited on the surface of a silicate such as quartz to form a bluish glaze. All that would require is for a bit of quartz, a ubiquitous mineral in the Middle East, to have been lying around in a copper-smelting kiln.

Or would it? To get the rich turquoise blue, you also need other ingredients, such as salt. So Moujan Matin, now at the Department of Archaeology and Art History at the University of Oxford in England, has undertaken a series of experiments with different mixtures to see if she can reproduce the shiny blue appearance of the earliest blue-glazed stones. She used a modern kiln fired up to the kind of temperatures ancient kilns could generate – between 850 and 980 °C – in which lumps of quartz were placed on a pedestal above a glazing mixture made from copper scale and other ingredients.

Rock salt (sodium chloride) was known in the ancient world, since there are deposits around the Mediterranean and in the Middle East. Matin found that copper scale and rock salt alone covered the quartz surface with a rather pale, greenish, dull and rough coating: not at all like ancient blue glaze. An extra ingredient – calcium carbonate, or common chalk, which the Egyptians used as a white pigment among other things – made all the difference, producing a rich, shiny turquoise-blue glaze above 950 °C.

That looked good – but it forces one to assume that salt, chalk and quartz all somehow got into the kiln along with the copper scale. It’s not impossible, but as Matin points out, such accidents probably had to happen several times before anyone took much notice. However, there’s no need for the least likely of these ingredients, rock salt. Matin reasoned that dried cattle dung, which contains significant amounts of both alkalis and salt (chloride), had been widely used as a fuel since the beginnings of animal domestication in the eighth millennium BC. So she tried another mixture: copper scale, calcium carbonate and the ash of burnt cattle dung. This too produced a nice, shiny (albeit slightly paler) blue glaze.

Of course, there’s nothing that proves this was the way glazing began. But it supplies a story that is entirely plausible, and narrows the options for what will and won’t do the job.

Reference: M. Matin, Archaeometry advance online publication doi:10.1111/arcm.12039.

Wednesday, June 12, 2013

Vanishing cats


Here’s my latest news story for Nature.
__________________________________________________________

A new ‘invisibility cloak’ can hide animals

A cat climbs into a glass box and vanishes, while the scene behind the box remains perfectly visible through the glass. This latest addition to the science of invisibility cloaks is one of the simplest implementations so far, but there’s no denying its striking impact.

The ‘box of invisibility’ has been designed by researchers at Zhejiang University in Hangzhou, China, led by Hongsheng Chen, together with their coworkers. It is basically a set of prisms made from high-quality optical glass that bend light around any object placed in the opening at the centre of the array [1].
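
The underlying optics is nothing more exotic than refraction. For the curious, here is a minimal Python rendering of Snell’s law, with a refractive index of 1.7 assumed purely for illustration (the paper specifies its own glass and prism geometry):

    import math

    def refract_angle(theta_deg, n1, n2):
        # Snell's law: direction of the transmitted ray, measured from the
        # surface normal, as light passes from index n1 into index n2
        s = (n1 / n2) * math.sin(math.radians(theta_deg))
        if abs(s) > 1.0:
            raise ValueError("total internal reflection: no transmitted ray")
        return math.degrees(math.asin(s))

    # A ray hitting the glass at 40 degrees bends sharply towards the
    # normal, and bends back on exit - enough, with the right prism shapes,
    # to route light around a central cavity.
    inside = refract_angle(40.0, 1.0, 1.7)
    print(round(inside, 1), round(refract_angle(inside, 1.7, 1.0), 1))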

As such, the trick is arguably closer to ‘disappearances’ staged in Victorian music hall using arrangements of slanted mirrors than to the modern use of substances called metamaterials to achieve invisibility by guiding light rays in unnatural ways.

But Chen and colleagues have forged a conceptual link between the two. Metamaterials – made from arrays of electrically conducting components that interact with light so as to create new optical effects such as negative refractive index – are needed if an invisibility cloak is to achieve ‘perfect’ cloaking, being invisible itself and preserving the phase relationships between the light waves moving through it [2].

Metamaterials that work at the wavelengths of visible light are very hard to make, however. Chen’s coworker Baile Zhang of Nanyang Technological University in Singapore [3], as well as John Pendry at Imperial College in London [4], and their coworkers have shown that a compromise of partial visible-light cloaking of macroscopic objects can be attained using blocks of transparent, optically anisotropic materials such as calcite crystal, in which light propagates at different speeds in different directions.

These partial cloaks will hide objects but remain visible themselves. “Everyone would like to have a cloak that hides big real world objects from visible light, but achieving this demands some compromises of the ideal theory”, Pendry explains.

He says that Chen and colleagues have now gone “further than most” with such compromises by abandoning any concern to preserve phase relationships in the transmitted light. “As a result the authors can report quite a large cloak that operates over most of the visible spectrum”, he says.

Chen and colleagues say that such a simplification is warranted for many applications, because there’s no need to preserve the phase. “Living creatures cannot sense the phase of light”, they say.

Chen and his coworker Bin Zheng first unveiled the principle last year with a hexagonal arrangement of triangular prisms that could hide small objects [5]. But they have now found a more spectacular demonstration of what this approach can achieve.

In the researchers’ first example, they use a similar but larger hexagon of prisms placed in a fish tank. As a fish swims through the central hole, it disappears while the pondweed behind the cloak remains perfectly visible.

The second example uses a square arrangement of eight prisms with a central cavity large enough for a cat to climb inside. The researchers project a movie of a field of flowers, with a butterfly flitting between them, onto a screen behind the cloak. Seen from the front, parts of the cat vanish as it sits in the cavity or pokes its head inside, while the scene behind can be seen through the glass.

As well as being visible themselves, these cloaks only work for certain viewing directions. All the same, the researchers say that they might find uses, for example in security and surveillance, where one might imagine hiding an observer in a glass compartment that looks empty.

References
1. Chen, H. et al., preprint at arxiv.org/abs/1306.1780 (2013).
2. Schurig, D. et al., Science 314, 977-980 (2006).
3. Zhang, B., Luo, Y., Liu, X. & Barbastathis, G. Phys. Rev. Lett. 106, 033901 (2011).
4. Chen, X. et al., Nat. Commun. 2, 176 (2011).
5. Chen, H. & Zheng, B. Sci. Rep. 2, 255 (2012).

Monday, June 10, 2013

In the genes?

Yes, but then you turn to Carole Cadwalladr’s article in the Observer Review on having her genome sequenced. It made me seethe.

The article itself is fine – she does a good job of relating what she was told. But some of this genomics stuff is starting to smell strongly of quackery. Cadwalladr went to a symposium organized by the biotech company Illumina, which – surprise! – is selling sequencing machines. This is what the senior VP of the company said: “You’ll be able to surf your genome and find out everything about yourself.” Everything. One can, apparently, make such a blatantly, dangerously misleading statement and confidently expect no challenge from the assembled crowd of faithful geneticists.

Well, here’s the thing. I happened to be doing an event on Saturday at a literary festival with Steve Jones, and Steve said a great deal about genetics and predestiny. What he said was a vitally needed corrective to the sort of propaganda that Illumina is seemingly spouting. “Genetics is a field in retreat”, he admitted, saying that he has resisted producing a revised version of his classic The Language of the Genes because the field has just become so complicated and confusing since it first came out in 2000. He pointed out that a huge amount of our destiny is of course set by our environment and experience (I never knew, until Steve told me, that Mo Farah has an identical twin who is a car mechanic in Somalia). We discussed the idiocy of the “gene for” trope (the cover of the New Review has Cadwalladr saying “I don’t have a gene for conscientiousness” – but neither does any single bloody person on the planet).

There’s a huge amount of useful stuff that will come from the genomics revolution, and some people might indeed discover some medically valuable information from their genome. But the most common killer diseases, such as heart disease, will not be read out of your genome. I saw recently that at least 500 genes have been associated so far with some types of diabetes. We have 23,000 genes in total, so it goes without saying that those 500+ genes are not solely linked to functions that affect diabetes. The scientists and technologists are still grossly mis-selling the picture of what genes ‘do’, implying still that there is this one-to-one relationship between genes and particular phenotypic attributes. Steve pointed out that we still can’t even account, in genetic terms, for more than about 10 percent – the figure might even have been less, I don’t remember – of the heritability of human height, even though it clearly does have a strong inherited influence. This is one of the issues I wanted to point to in my recent Nature article – we have little idea how most of our genome works.

One of the most invidious aspects of Cadwalladr’s piece comes from the way the folks at the symposium discussed BRCA1, the “Angelina gene”. There was no mistaking the excitement of the first speaker, Eric Topol of Scripps, who apparently said “This is the moment that will propel genomic medicine forward. It’s incredibly important symbolically.” In other words, “my field of research just got a fantastic celebrity endorsement.” But did anyone at the meeting ask if Jolie had actually made the right choice? It was an extremely difficult choice, but a cancer specialist at NIH I spoke to recently told me that he would not have recommended such a drastic measure. Steve Jones had a similar view, saying that there are drugs that are now routinely taken by women with this genetic predisposition. The good thing about such genomic information is that it could motivate frequent testing for people in such a position, to spot the onset of symptoms at the earliest opportunity (early diagnosis is the most significant factor for a successful treatment of most types of cancer). But Jolie’s case shows how a distorted message about genetic determinism, which the companies involved in this business seem still to be giving out, can skew the nature of the choices people will make. There’s a huge potential problem brewing here – not because of the technology itself, which is amazing, but because of the false confidence with which scientists and technologists are selling it, metaphorically and literally.

Sunday, June 09, 2013

About time

Here’s a book review of mine that appears in today’s Observer.

___________________________________________________________

Time Reborn: From the Crisis in Physics to the Future of the Universe
Lee Smolin
Penguin, 2013
ISBN 978-1-846-14299-4
319 pages, £20.00

Farewell to Reality: How Fairytale Physics Betrays the Search for Scientific Truth
Jim Baggott
Constable, 2013
ISBN 978-1-78033-492-9
338 pages, £12.99

At an inter-disciplinary gathering of academics discussing the concept of time, I once heard a scientist tell the assembled humanities scholars that physics can now replace all their woolly notions of time with one that is unique, precise and true. Such scientism is rightly undermined by theoretical physicist Lee Smolin in Time Reborn, which shows that the scientific view of time is up for grabs more than ever before. The source of the disagreement could hardly be more fundamental: is time real or illusory? Until recently physics has drifted toward the latter view, but Smolin insists that many of the deepest puzzles about the universe might be solved by realigning physics with our everyday intuition that the passage of time is very real indeed.

Clocks tick; seasons change; we get older. How could science have ever asserted this is all an illusion? It begins, Smolin says, with the idea that nature is governed by eternal laws, such as Newton’s law of gravity: governing principles that stand outside of time. The dream of a ‘theory of everything’, which might explain all of history from the instant of the Big Bang, assumes a law that preceded time itself. And by making the clock’s tick relative – what happens simultaneously for one observer might seem sequential to another – Einstein’s theory of special relativity not only destroyed any notion of absolute time but made time equivalent to a dimension in space: the future is already out there waiting for us, we just can’t see it until we get there.
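
That relativity of simultaneity is easy to put into numbers. As a minimal sketch, in units where the speed of light is 1: two events that are simultaneous for one observer, but one light-second apart, are separated by three-quarters of a second for an observer passing at 60 per cent of light speed.

    import math

    c = 1.0    # units chosen so the speed of light is 1
    v = 0.6    # speed of the second observer, as a fraction of c
    gamma = 1.0 / math.sqrt(1.0 - v * v / (c * c))

    def t_prime(t, x):
        # Lorentz transformation of the time coordinate
        return gamma * (t - v * x / (c * c))

    # Two events at t = 0, one light-second apart:
    print(t_prime(0.0, 0.0), t_prime(0.0, 1.0))   # 0.0 and -0.75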

This view is a logical and metaphysical dead end, says Smolin. Even if there were a theory of everything (which looks unlikely), we’d be left asking “why this theory?” Or equivalently, why this universe, and not one of the infinite others that seem possible? Most of all, why one in which life can exist? A favourite trick of cosmologists is to beg the question by arguing that it only gets asked in universes where life is possible – the so-called anthropic principle. Smolin will have none of that. He argues that because life-supporting universes are generally also ones in which black holes can form, and because black holes can spawn new universes, a form of cosmic natural selection can make a succession of universes evolve towards ones like ours.

In this scenario, not only is time real, but the laws of physics must themselves change over time. So there’s constant novelty, and no future until it becomes the present. The possible price you pay is that then space, not time, becomes illusory. That might seem an empty bargain, but Smolin asserts that not only could it solve many problems in fundamental physics and cosmology but it is also more amenable to testing than current ‘timeless’ theories.

That attribute might endear Smolin’s speculative ideas to physicist-turned-writer Jim Baggott. Smolin caused grumbling among his colleagues with his 2006 assault on string theory, The Trouble With Physics. In Farewell to Reality, Baggott now castigates theoretical physicists for indulging a whole industry of “fairy-tale physics” – strings, supersymmetry, brane worlds, M-theory, the anthropic principle – that not only piles one unwarranted assumption on another but lies beyond the reach of experimental tests for the foreseeable future. He recalls the acerbic comment attributed to Richard Feynman: “string theorists don’t make predictions, they make excuses”.

Baggott has a point, and he makes it well, although his target is as much the way this science is marketed as what it contains. But such criticisms need to be handled with care. Imaginative speculation is the wellspring of science, as Baggott’s hero Einstein demonstrated. In one of my favourite passages of Time Reborn, Smolin sits in a café and dreams up a truly outré idea (that fundamental particles follow a principle of precedent rather than timeless laws), and then sees where the idea takes him. In creative minds, such conjecture injects vitality into science. The basic problem – that the institutional, professional and social structures of science can inflate such dreams into entire faddish disciplines before asking if nature agrees with them – is one that Baggott doesn’t quite get to.

Thursday, June 06, 2013

The legacy of On Growth and Form

The special issue of Interdisciplinary Science Reviews on the work and influence of D’Arcy Thompson that I have coedited with Matthew Jarron is finally published. It makes a nice collection – not quite as broad as originally hoped, due to some dropouts, but still a satisfying slice of the way Thompson’s ideas have been received in art and science. It won’t be freely available online, sadly, but the contents are as follows:

Matthew Jarron: Editorial
Stephen Hyde: D’Arcy Thompson’s Legacy in Contemporary Studies of Patterns and Morphology
Edward Juler: A Bridge Between Science and Art? The Artistic Reception of On Growth and Form in Interwar Britain, c.1930-42
Matthew Jarron: Portrait of a Polymath – A Visual Portrait of D’Arcy Thompson by Will Maclean
Peter Randall-Page: On Theme and Variation
Assimina Kaniari: D’Arcy Thompson’s On Growth and Form and the Concept of Dynamic Form in Postwar Avant-Garde Art Theory
Philip Ball: Hits, Misses and Close Calls: An Image Essay on Pattern Formation in On Growth and Form

I’ve put a version of my contribution up on my website, under “Patterns”.

Tuesday, June 04, 2013

The art of repair


Here’s the more or less original version of an article I’ve written for Aeon magazine. It carries a heavy debt to the wonderful catalogue of an exhibition entitled Flickwerk: The Aesthetics of Mended Japanese Ceramics (Herbert F. Johnson Museum of Art, Cornell University, 2008). It also informed my recent “60-second idea” on the BBC World Service’s Forum programme, broadcast this week, which was otherwise focused (loosely) on the topic of curiosity. Here I met cosmologist Lee Smolin, whose book Time Reborn I have just reviewed for the Observer – I’ll post the review here once it’s published.

_________________________________________________________________

Is your toilet seat broken? I only ask because it is damned hard to get things like that fixed. Are your shoes splitting? Good luck finding a cobbler these days. Is the insulation on your MacBook mains lead abraded and splitting at the power brick? They all do that, and they’re not cheap to replace.

There’s an answer to all these little repair jobs. It’s called Sugru: an adhesive, putty-like silicone polymer that you can hand-mould to shape and then leave overnight to set into a tough, flexible seal. Devised by Jane Ní Dhulchaointigh, an Irish design graduate at the Royal College of Art in London, in collaboration with retired industrial chemists, Sugru is an all-purpose mending material with an avid following of ‘hackers’ who relish its potential not just to repair but to modify off-the-shelf products. When it was pronounced a top invention of 2010 by Time magazine, it acquired international cult status.

Sugru doesn’t, however, do its job subtly. You can get it in modest white, but Sugru-fixers tend to prefer the bright primary colours, giving their repairs maximal visibility. They seem determined to present mending not as an unfortunate necessity to be carried out as quietly as possible but as an act worth celebrating.

That’s an attitude also found in the burgeoning world of ‘radical knitting’, where artists are bringing a punk sensibility to the Women’s Institute. Take textiles artist Celia Pym, who darns people’s clothes as a way of “briefly making contact with strangers”. There are no ‘invisible mends’ here: Pym introduces bold new colours and patterns, transforming rather than merely repairing the garments.

What Pym and the Sugru crew are asserting is that mending has an aesthetic as well as a practical function. They say that if you’re going to mend, you might as well do it openly and beautifully.

If that sounds like a new idea in the pragmatic West, it has a long tradition in the East. Pym’s artful recovery of damaged clothing is anticipated by more than three centuries in the boro garments of the Japanese peasant and artisan classes, which were stitched together from scraps of cloth in a time when nothing went to waste. In boro clothing the mends become the object, much like Austrian philosopher Otto Neurath’s celebrated hypothetical boat, repaired a plank at a time until nothing of the original remains. Some boro garments might in similar fashion be colonized and eventually overwhelmed by patches; others were assembled from scraps at the outset. Today boro’s shabby chic risks becoming merely an ethnic pose in trendy Tokyo markets, belying the necessity from which it arose. But boro was always an aesthetic idea as much as an imposition of hardship. It draws on the Japanese tradition of wabi-sabi, a world view that acknowledges transience and imperfection.

I have been patching clothes for years into a kind of makeshift, barely competent boro. Trousers in particular get colonized by patches that start at the knees and the holes poked by keys around the pockets, spreading steadily across thighs with increasing disregard for colour matching. Only when patches need patches does the recycling bin beckon. At first I did this as a hangover from student privation. Later it became a token of ecological sensibility. Those changing motives carry implications for appearance: the more defiantly visible the mend, the less it risks looking like mere penny-pinching. That’s a foolishly self-conscious consideration, of course, which is precisely why the Japanese aesthetic of repair is potentially so liberating: there is nothing defensive about it. That’s even more explicit in the tradition of ceramic mending.

In old Japan, when a treasured bowl fell to the floor one didn’t just reach for the glue. The old item was gone, but its fracture created the opportunity for a new one to be made. Such accidents held lessons worth heeding, being both respected and remedied by creating from the shards something even more elegant. Smashed ceramics would be stuck back together with a strong adhesive made from lacquer and rice glue – but then the web of cracks would actually be emphasized by tracing it out in coloured lacquer, sometimes mixed or sprinkled with powdered silver or gold and polished with silk so that the joins gleamed.

A bowl or container repaired in this way would typically be valued even more highly, aesthetically and financially, than the original. The sixteenth-century Japanese tea master Sen no Rikyu is said once to have ignored his host’s fine Song Dynasty Chinese tea jar until it was mended after the owner smashed it in despair at Rikyu’s indifference. “Now the piece is magnificent”, he pronounced of the shards painstakingly reassembled by the man’s friends. According to contemporary tea master Christy Bartlett, it was “the gap between the vanity of pristine appearance and the fractured manifestation of mortal fate which deepen[ed] its appeal”. The repair, like that of an old teddy bear, is testament to the affection in which the object is held: what is valued is not a literally superficial perfection but something deeper. The mended object is special precisely because it was worth mending. In the Japanese tea ceremony, says cultural anthropologist James-Henry Holland, “a newly-mended utensil proclaims the owner’s personal endorsement, and visually apparent repairs call attention to this honor.”

To mend such objects requires an acceptance of whatever the fracture gives: a relinquishment of the determination to impose one’s will on the world, in accord with the Japanese concept of mushin. Meaning literally “no mind”, this expresses a detachment sought by many artists and warriors. “Accidental fractures set in motion acts of repair that accept given circumstances and work within them to lead to an ultimately more profound appearance,” says Bartlett. Mended ceramics displayed their history – the pattern of fracture documented the specific forces and events that caused it. This fact has recently been recognized by a team of French physicists, who have shown that the starlike cracks in broken glass plates capture a forensic record of the mechanics of the impact. By reassembling the pieces, that moment becomes preserved. The stories of how mended Japanese ceramics were broken – like that of the jar initially spurned by Sen no Rikyu – would be perpetuated by constant retelling. In the tea ceremony these histories of the utensils provide raw materials for the stylized conversational puzzles that the host sets the guests, a function for which undamaged appearance was irrelevant.

Sugru users will appreciate another of the aesthetic considerations of Japanese ceramic repairs: the notion of asobi, which refers to a kind of playful creativity and was introduced by the sixteenth-century tea master Furuta Oribe. Repairs that embody this principle tended to be more extrovert, even crude in their lively energy. When larger areas of damage were patched using pieces from another broken object, fragments with a strikingly different appearance might be selected to express the asobi ideal, just as clothes today might be patched with exuberant contrasting colours or patterns. Of course, one can now buy new clothes patched this way – a seemingly mannered gesture, perhaps, yet anticipated in the way Oribe would sometimes purposely damage utensils so that they were not “too perfect”. This was less a Zen-like expression of impermanence and more an exuberant relish of variety.

Such modern fashion statements aside, repair in the West has tended to be more a matter of grumbling and making do. But occasionally the aesthetic questions have been impossible to avoid. When the painting of an Old Master starts cracking and flaking off, what is the best way to make it good? Should we reverently pick up the flakes of paint and surreptitiously glue them back on again? Is it honest to display a Raphael held together with PVA? When Renaissance paint fades or discolours, should we touch it up to retain at least a semblance of what the artist intended, or surrender to wabi-sabi? It’s safe to assume that no conservator would ever have countenanced the recent ‘repair’ of Elias Garcia Martinez’s crumbling nineteenth-century fresco of Jesus in Zaragoza by an elderly woman with the artistic skills of Mr Bean. But does even a skilled ‘retouching’ risk much the same hubris?

These questions are difficult because aesthetic considerations pull in the opposite direction from concerns of authenticity. Who wants to look at a fresco anyway if only half of it is still on the wall? Victorian conservators were rather cavalier in their solutions, often deciding it was better to have a retouched Old Master than none at all. In an age that would happily render Titian’s tones more ‘acceptable’ with muddy brown varnish, that was hardly surprising. But today’s conservators mostly recoil at the idea of painting over damage in old works, although they will permit some delicate ‘inpainting’ that fills cracks without covering any of the original paint: Cosimo Tura’s Allegorical Figure in London’s National Gallery was repaired this way in the 1980s. Where damage is extensive, standard practice now is to apply treatments that prevent further decay but leave the existing damage visible.

Such rarefied instances aside, the prejudice against repair as an embarrassing sign of poverty or thrift is surely a product of the age of consumerism. Mending clothes was once routine for every stratum of society. The aristocracy were unabashed at their elbow patches – in truth more prevention than cure, since they protected shooting jackets from wear caused by the shotgun butt. Everything got mended, and mending was a trade.

But what sort of trade? Highly skilled, perhaps, but manual, consigning it to a low status in a culture that has always been shaped (this is one way in which the West differs from the East) by the ancient Greek preference for thinking over doing. Just as, over the course of the nineteenth century, the ‘pure’ scientist gained ascendancy over the ‘applied’ (or worse still, the engineer), so too the professional engineer could at least pull rank on the maintenance man: he was a creator and innovator, not a chap with oily rag and tools. “Although central to our relationship with things”, writes historian of technology David Edgerton, “maintenance and repair are matters we would rather not think about.” Indeed, they are increasingly matters we’d rather not even do.

Edgerton explains that until the mid-twentieth century repair was a permanent state of affairs, especially for expensive items like vehicles, which “lived in constant interaction with a workshop.” It wasn’t so much that things stopped working and then got repaired, but that repair was the means by which they worked at all. Neurath’s boat probably sailed for real: “ships were often radically changed, often more than once, in the course of their lives,” says Edgerton. Repair might even spawn primary manufacturing industries: many early Japanese bicycles were assembled from the spare parts imported to repair foreign (mostly British) models.

It’s not hard to understand a certain wariness about repair: what broke once might break again. But its neglect in recent times surely owes something also to an under-developed repair aesthetic, an insistence on perfection of appearance: on the semblance of newness even in the old, a visual illusion now increasingly applied to our own bodies, repair of which is supposed to (but rarely does) make the wear and tear invisible.

Equally detrimental to a culture of repair is the ever more hermetic nature of technology, whereby DIY mending becomes impossible either physically (the unit, like your MacBook lead, is sealed) or technically (you wouldn’t know where to start). Either way, your warranty is void the moment you start tinkering. Couple that to a climate in which you pay for the service or accessories rather than the item – inks pricier than printers, mobile phones free when you subscribe to a network – and repair lacks feasibility, infrastructure or economic motivation. I gave up on repair of computer peripherals years ago when the only person I could find to fix a printer was a crook who lacked the skills for the job but charged me the price of a new one anyway. Breakers’ yards, which used to seem like places of wonder, have all but vanished as car repair has become both unfashionable and impractical.

Some feel this is going to change, whether because of the exigencies of austerity or increasing ecological concerns about waste and consumption. Martin Conreen, a design lecturer at Goldsmiths College in London, believes that TV cookery programmes will soon be replaced by ‘how to’ DIY shows, in which repair would surely feature heavily. The hacker culture is nurturing an underground movement of making and modifying that is merging with the crowdsourcing of fixes and bodges – for example, on websites such as ifixit.com, which offers free service manuals and advice for technical devices such as computers, cameras, vehicles and domestic appliances, and fixperts.org, set up by design lecturer Daniel Charny and Sugru cofounder James Carrigan, which documents fixes on film. Mending has also taken to the streets in the international Repair Café movement, where you can go to get free tools, materials, advice and assistance for mending anything from phones to jumpers. As 3D printers, which can build one-off objects layer by layer from cured resins or granular ‘inks’, become more accessible, it may become possible to make your own spare parts rather than having to source them, often at some cost, from suppliers (only to discover your model is obsolete). And as fixing becomes cool, there’s good reason to hope it will acquire an aesthetic that owes less to a “make do and mend” mentality of soldiering on, and more to mushin and asobi.