Monday, January 26, 2015

Secrets of exploding sodium revealed


Here’s the longer version of my latest news story for Nature. I love this stuff. I saw the experiments being done by Phil M when I visited Pavel a couple of years ago, and have been waiting for the work to come together ever since. Could you possibly need any more evidence that chemistry rocks?

____________________________________________________________________________

There’s more than exploding hydrogen in the violence of the reaction of alkali metals with water.

It’s the classic piece of chemical tomfoolery: take a lump of sodium or potassium metal, toss it into water, and watch the explosion. Yet a paper in Nature Chemistry reveals that this familiar piece of pyrotechnics has not previously been understood [1].

The explosion, say Pavel Jungwirth and his collaborators at the Czech Academy of Sciences in Prague, is not merely a consequence of the ignition of the hydrogen gas that the alkali metals release from water. That may happen eventually, but it begins as something far stranger: a rapid exodus of electrons followed by explosion of the metal driven by electrical repulsion.

Neurologist and chemical enthusiast Oliver Sacks offers a vivid account of how, as a boy, he and his friends carried out the reaction on Highgate Ponds in North London with a lump of sodium bought from the local chemicals supplier [2]: “It took fire instantly and sped around and around on the surface like a demented meteor, with a huge sheet of yellow flame above it. We all exulted – this was chemistry with a vengeance.”

Highly reactive sodium and potassium react with water to form sodium hydroxide and hydrogen, and the reaction liberates so much heat that the hydrogen may ignite spontaneously. The process seems so straightforward and understandable that no one previously seems to have felt there was anything else to explain.

But as Jungwirth says, there is a fundamental problem with the conventional explanation. “In order to have a runaway explosive behaviour of a chemical reaction, very good mixing of the reactants needs to be ensured,” he says. But the hydrogen gas and steam released at the surface of the metal should impede the further access of water and quench the reaction. Why doesn’t it?

This, Jungwirth admits, was only a part of the original motivation for looking more deeply into the reaction. The experiments were conducted by his colleague Philip Mason, and he says that “an equally important part is Phil's love for exciting experimentation and the easy availability of our balcony, where the first experiments were carried out.” There Mason set up a high-speed video camera to film the process, although the final movies were shot in the lab of coauthor Sigurd Bauerecker at the Technical University of Braunschweig in Germany.

Despite its notoriously explosive nature, the reaction of sodium with water is in fact extremely erratic: sometimes it explodes and sometimes it doesn’t, largely because of surface oxidation of the metal. “The basic trick Phil came up with is to use liquid metal – a sodium/potassium alloy that is liquid at room temperature”, says Jungwirth. But getting a reliable explosion has its hazards. “A face shield is a must”, he adds. “Phil took it off once to blow out a small fire and a tiny piece of metal exploded into his face: luckily the lower part of it, so he only had a few scratches on his cheek.”

The movies revealed a vital clue to what was fuelling the violent reaction in the early stages. The reaction starts less than a millisecond after the metal droplet, released from a syringe, enters the water. After just 0.4 ms, “spikes” of metal shoot out from the droplet, too fast to be expelled by heating.

What’s more, between 0.3 and 0.5 ms, this “spiking” droplet becomes surrounded by a dark blue/purple colour in the solution. The reason for these two observations became clear when Jungwirth’s postgraduate student Frank Uhlig carried out quantum-mechanical computer simulations of the process with clusters of just 19 sodium atoms. He found that each of the atoms at the surface of the cluster loses an electron within just a few picoseconds (10^-12 s), and these electrons enter the surrounding water, where they are solvated (surrounded by water molecules) [3].

Solvated electrons in water are known to have the deep blue colour observed transiently in the videos – although they are highly reactive, quickly decomposing water to hydrogen gas and hydroxide ions. What’s more, their departure leaves the metal cluster full of positively charged ions, which repel each other. The result is a “Coulomb explosion” in which the cluster bursts apart due to its own electrostatic repulsion, a process first explained by Lord Rayleigh in the late nineteenth century.

This explosion creates the spikes known as Taylor cones, the researchers say. They support that idea with less detailed simulations involving clusters of 4,000 sodium atoms, which also break up with spike-like instabilities at the surface.

“Four thousand sodium atoms is still a very tiny piece of matter, and I do not think we see proper Taylor cones in the simulations”, says Jungwirth. “At best, we see a microscopic version.”

Inorganic chemist James Dye of Michigan State University, a specialist on solvated electrons, is full of praise for the work. “I have done the demonstration dozens of times and wondered why sodium globules often danced on the surface, while potassium leads to explosive behaviour”, he says. “The paper gives a complete and interesting account of the early stages of the reaction.”

References
1. Mason, P. E. et al., Nat. Chem. http://dx.doi.org/10.1038/nchem.2161 (2015).
2. Sacks, O. Uncle Tungsten, p.123. Picador, London, 2001.
3. Young, R. M. & Neumark, D. M., Chem. Rev. 112, 5553-5577 (2012).

Friday, January 23, 2015

Are you ready? Then I'll begin...

The beginning of a play or book is so hard. I was reminded of this last night while watching the RSC’s new production in Stratford-upon-Avon, Oppenheimer. It’s a pretty good play, as I’ll say in my review in Nature soon. But I had first to get over the hump of the opening lines, where Oppenheimer reads from Niels Bohr’s 1934 book Atomic Theory and the Description of Nature: “The task of science is both to extend the range of our experience and to reduce it to order.” It seems an unobjectionable claim, even a rather good one. But as spoken by an actor dressed in period style as Oppenheimer, it seemed a terribly stagey and self-conscious opening. It was as if he were saying “The play’s starting now, and it’s about science, and now you have to believe that I’m Oppenheimer, OK?”

I had the same feeling at the start of Michael Frayn’s Copenhagen when I first saw it years ago. As I recall, the actress playing Margrethe Bohr marched on stage, struck a pose and said “But why?” And I thought “Yeah, yeah, so we are supposed to allow that the play is starting in mid-conversation and to ask ourselves, Why what?” But Copenhagen is brilliant, and so is Frayn, so what’s my problem here?

It’s all about that transition to another reality, and how to make us believe in it. Once Oppenheimer was underway, there was no problem – there was still the odd stagey moment in that production, but on the whole we could get inside the narrative quite comfortably once we were acclimatized. But how do you avoid that awkward instant at the start, where the actors have to say “We’ve started pretending now”?

This matters to me even more with books. I won’t say that I judge them by their first line, but that first line is certainly a hurdle that they have to clear. If it feels as though it has been worked on, burnished, set in place like a jewel for us to admire, then I am off to a bad start. New writers seem to be told that first lines matter a lot, and in a sense they do – but this doesn’t mean that a first line has to strive to be brilliant and lapidary, to compete with the astonishingly over-rated opening lines of Pride and Prejudice or War and Peace. Getting it right with a memorable first line, like Camus in L’Étranger or Dickens in A Christmas Carol, is far more difficult than is generally acknowledged, and more often these attempts just come across as contrived and self-conscious. How much better it is to go for the effortlessly mundane: “Stately, plump Buck Mulligan came from the stairhead, bearing a bowl of lather on which a mirror and a razor lay crossed.” Surely what matters far more is that the opening page or so is captivating. If you can create one as jaw-dropping as Dickens in Bleak House, it doesn’t matter what the heck your very first line is.

But theatre: that’s another challenge. Here you’ve got the added problem that there are real people standing in front of you pretending to be different real people, and you know that and they know you know that. So how to start weaving the illusion without a jolt?

One of the best answers I ever saw was in Theatre de Complicité’s Mnemonic, when Simon McBurney just began by talking to us, as the audience. It seemed like a preamble to the start of the play, but gradually we realized that this actually was the play. Arguably that was a trick or gimmick, but it contained a more general solution: don’t try too hard. A Brechtian approach won’t work for every play, but at the very least it seems a good idea to relax and not to feel you have to ensnare the audience from the very first utterance. At that point at least, there’s really no risk we will be bored.

Wednesday, January 21, 2015

Beyond relativity

Sorry folks: Prospect has asked that my latest piece in the February issue, a survey of the centenary year of general relativity, remains "premium content" - which means I can offer only a teaser here. (Cue debate about paywalls and blogs - but we've all got to survive...) I'll be putting up some more on this topic soon, though.

____________________________________________________________________

One hundred years ago, Albert Einstein presented a paper to the Prussian Academy of Sciences that explained gravity. It is one of the four fundamental forces in the universe, although in 1915 only one of the others – the electromagnetic force – was known. (The other two act inside the atomic nucleus.) But Einstein’s paper offered a radically different way of thinking about gravity. Rather than being an invisible force between two massive objects, he described it as a distortion induced by the masses in the very fabric of time and space (spacetime). This warping dictates the paths that objects take under gravity’s influence: Newton’s apple fell to earth because it was, in effect, slipping down the slope of bent spacetime. In the curved space around the sun, the planets execute orbits rather like marbles running around the rim of a bowl.

This geometric interpretation of gravity is the central idea of Einstein’s theory of general relativity. It is widely considered to be not only his greatest intellectual achievement but also the epitome of a beautiful theory. Ernest Rutherford said that the theory “cannot but be regarded as a magnificent work of art”, and Einstein was not shy of advertising its virtues himself: “Scarcely anyone who fully understands this theory can escape from its magic”, he wrote.

But the centenary celebrations for general relativity will not simply be looking back. For 2015 will be a banner year for some big, ambitious experiments that aim to probe the theory. They are looking for one of the most spectacular of the theory’s predictions: ripples in spacetime called gravitational waves...

[The rest will be on Prospect's site very shortly.]

Thursday, January 08, 2015

Computer becomes "unbeatable" at poker

Here's a longer version of my story on Nature News on the new poker-playing computer program.

_________________________________________________________________

A computer algorithm has perfected the art of playing a popular version of the gambling card game

A new computer algorithm can play poker, in one of its most popular variants, essentially perfectly. Its creators say that it is virtually “incapable of losing against any opponent in a fair game.”

This is a step beyond a computer program that can beat top human players, as Deep Blue famously did against Garry Kasparov in chess in 1997. The poker program devised by Michael Bowling and colleagues at the University of Alberta in Edmonton, Canada, along with Finnish software developer Oskari Tammelin, plays perfectly, to all intents and purposes. This means that this particular variant of poker, called Heads-up Limit Hold’em (HULHE), can be considered “solved”. The algorithm is described in a paper in Science [1].

The strategy they have computed, says computer poker researcher Eric Jackson in Menlo Park, California, is so close to perfect “as to render pointless further work on this game.”

“I think that it will come as a surprise to experts that a game this big has been solved”, says Jackson. “I follow the work on computer poker closely and I did not expect that heads-up limit was going to be solved this soon.”

A few other popular games have been solved before, including checkers, which a team from the same computer science department at Alberta (including Neil Burch, coauthor of the new study) cracked in 2007 [2].

But poker is harder to solve than checkers. Chess and checkers are examples of perfect-information games, where players can have perfect knowledge of all past events in a game. In poker, there are some things a player doesn’t know: most crucially, which cards the other player has been dealt.

Devising a strategy to deal perfectly with that uncertainty is very hard. This hidden information is what gives a poker player the opportunity to bluff: to face the other player down with a relatively weak hand.

But although bluffing looks like a very human, psychological element of the game, it’s not. You can calculate how to bluff optimally. “Bluffing falls out of the mathematics of the game”, says Bowling. If you’re dealt a jack, say, it is possible to figure out how often you should ideally bluff with it. Some of the early pioneers of game theory, such as John von Neumann, aimed to develop mathematical strategies for bluffing.

The real challenge for a poker algorithm is dealing with the immense number of possible ways the game can be played. Bowling and colleagues have looked at one of the most popular forms, called Texas Hold’em, in which the dealer deals two cards to each player (face down) and also a set of face-up “community cards”. Players can bet, raise or fold after each deal. With just two players, the game becomes Heads-up, and it is a “limit” game when it has fixed bet sizes and a fixed number of raises. There are 3.16×10^17 states that HULHE can reach, and 3.19×10^14 possible points where a player must make a decision.

The new algorithm involves calculating all possible decisions in advance, so that they can just be looked up as a game proceeds. This is done in a learning process: the strategy begins by making decisions randomly, and is then updated by experience as the algorithm attaches a “regret” value to each decision depending on how poorly it fared. It takes a little more than 1500 training rounds to make the program essentially invincible.

This “counterfactual regret minimization” (CFR) procedure has been widely adopted in the Annual Computer Poker Competition, which has run since 2006. But Bowling and colleagues have now improved the procedure by allowing it to re-evaluate decisions considered to be poor in earlier training rounds.
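For the curious, the regret-matching update at the heart of CFR can be sketched in a few lines. The toy Python script below is my own illustration, not the researchers’ code: it trains two regret-matching players against each other at rock-paper-scissors, whose average strategies converge on the game’s optimal mixture. Real CFR applies the same kind of update at each of HULHE’s roughly 3×10^14 decision points.

    import random

    # PAYOFF[i][j]: what choosing action i earns against an opponent choosing j
    PAYOFF = [[0, -1, 1],    # rock
              [1, 0, -1],    # paper
              [-1, 1, 0]]    # scissors

    def strategy_from_regret(regret):
        # Regret matching: play each action in proportion to its positive regret
        pos = [max(r, 0.0) for r in regret]
        total = sum(pos)
        return [p / total for p in pos] if total > 0 else [1/3, 1/3, 1/3]

    def train(rounds=200000):
        regret = [[0.0] * 3 for _ in range(2)]     # cumulative regret per player
        strat_sum = [[0.0] * 3 for _ in range(2)]  # accumulator for average strategy
        for _ in range(rounds):
            strats = [strategy_from_regret(r) for r in regret]
            moves = [random.choices(range(3), weights=s)[0] for s in strats]
            for p in (0, 1):
                me, opp = moves[p], moves[1 - p]
                for a in range(3):
                    # regret: what action a would have earned minus what we actually got
                    regret[p][a] += PAYOFF[a][opp] - PAYOFF[me][opp]
                    strat_sum[p][a] += strats[p][a]
        return [[s / rounds for s in strat_sum[p]] for p in (0, 1)]

    print(train())  # both players approach (1/3, 1/3, 1/3), the unexploitable mixture

The “+” refinement that Bowling’s team used (CFR+) differs in its details – for instance in how negative regrets are handled, which is what lets poor early decisions be re-evaluated – but the principle of learning from accumulated regret is the same.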

The other crucial innovation was the handling of the vast amount of information that must be stored to develop and use the strategy – of the order of 262 terabytes. This volume of data demands disk storage, which is slow to access. The researchers devised a data-compression method that reduces the volume to a more manageable 11 TB, and that limits the computational overhead of relying on disk storage to just 5%.

“I think the counterfactual regret algorithm is the major advance”, says computer scientist Jonathan Shapiro of the University of Manchester. “But they have done several other very clever things to make this problem computationally feasible.”

Although the new algorithm plays “perfectly”, it is not necessarily unbeatable, since there is a large chance element in poker: it depends on the hand you’re dealt. The algorithm “may in fact lose after any finite number of hands, if it were unlucky”, says Bowling. But it always wins in the long run.

What’s more, there are two other limitations. It only “weakly” solves HULHE, meaning that it plays perfectly only if the strategy has been played throughout, and not if the computer player is suddenly dropped into the middle of a game (a situation that is in fact not really clearly defined for poker, as it is, say, for checkers).

And it is only what the researchers call “essentially solved”, meaning that it is not strictly unbeatable: there is an extremely small margin by which, in theory, it might be beaten by skill rather than chance. But this margin is negligible in practice. “Even if a human could identify the perfect counter-strategy to exploit our solution”, says Bowling, “and even if they could play this counter-strategy without error, and even if they spent 70 years only playing poker (over 60 million hands), they still couldn’t be statistically confident that they are winning [by superior play rather than chance].”

There has been heated debate about the use of computers (poker bots) for online poker games. “For quite a while now there has been a struggle between people who write bots and try to get them to play surreptitiously on online sites, and the sites who try to detect them and ban them”, says Jackson. But Bowling says that their algorithm will have little direct effect on this, because the popularity of online HULHE has declined as human players got better. “While it may dry up the last vestige of online HULHE play just due to the perception that humans can query a perfect strategy, this impact is going to be minimal”, he says. “But it may respark a philosophical discussion about bots in online poker.”

“I definitely think the ideas in this paper will be fruitfully applied to other forms of poker, such as no-limit”, says Jackson. “And more generally to other games, whether with dice or cards or whatever, that have imperfect information.”

But the stakes might be higher still for imperfect-information “games” beyond mere play and gambling. The stock market seems an obvious candidate, but Bowling explains that there the unknowns are too great. “The deck of cards in the stock market, which define the distribution at chance events, is not common knowledge and not even really knowable.”

He says that it might be useful, however, for portfolio management. “We’ve been investigating robust decision-making where the goal is to optimize a particular risk measure (such as value-at-risk)”, he says. “Such robust decision-making scenarios can often be cast as a game having an almost identical form to some of our poker games, with our solution techniques being immediately applicable.” The team’s current explorations beyond poker are, however, focused on supporting medical decision-making, in collaboration with diabetes specialists.

1. Bowling, M., Burch, N., Johanson, M. & Tammelin, O. Science 347, 145-149 (2015).
2. Schaeffer, J. et al., Science 317, 1518-1522 (2007).

Tuesday, January 06, 2015

The birth of the scientific journal

This is an extended version of a piece written for Research Fortnight. To celebrate its 350th anniversary, Phil. Trans. is also soon to publish a special issue containing some of its “greatest hits”, along with accompanying commentaries explaining their significance and impact. I have written a piece for it on Alan Turing’s classic 1952 paper on morphogenesis, which I’ll put up here when the time comes. The exhibition described below is small but fun, and if you’re in the neighbourhood of the Royal Society, well worth a look.

_______________________________________________________________________

The scientific journal is 350 years old this year. As the first real scientific journal, the Philosophical Transactions of the Royal Society, which published its first issue in January 1665, can claim to have set the scene for the entire scientific literature of today, which now counts its titles in the tens of thousands.

This history is explored in a new exhibition at the Royal Society in London to mark the anniversary. It has been put together by a team at the University of St Andrews in Scotland, led by historian Aileen Fyfe, that has studied the development of the journal. The exhibition includes a copy of the first issue, the referee’s report on Charles Darwin’s sole publication in the journal (a minor work of 1839 about Scottish roads) and the handwritten manuscript submitted by James Clerk Maxwell in 1865 in which he proposes that light is an electromagnetic wave.

“Phil. Trans. was central to the whole idea of a scientific journal”, Fyfe says. Yet you need only glance at the first page of the first issue to see that the resemblances with the modern scientific journal were at that stage remote. Among this ‘accompt of the present Undertakings, Studies, and Labours of the Ingenious in many considerable parts of the World’, one could find the following:
"An account of the Improvement of Optick Glasses at Rome. Of the Observation made in England, of a Spot in one of the Belts of the Planet Jupiter. Of the motion of the late Comet predicted… An Experimental History of Cold… A Relation of a very odd Monstrous Calf. Of a peculiar lead-ore in Germany… Of the New American Whale-fishing about the Bermudas. A Narrative concerning the success of the Pendulum-watches at Sea for the Longitudes… A Catalogue of the Philosophical Books publisht by Monsieur de Fermat, Counsellour at Tholouse, lately dead."

In its early days Phil. Trans. published all kinds of strange, curious and often fanciful accounts of phenomena related to the Royal Society by its network of “virtuosi”: men (almost without exception) interested in the natural world, inventors, travellers, dilettantes and armchair philosophers. The selection of what to include and exclude was made solely by the Royal Society’s energetic secretary, the German natural philosopher Henry Oldenburg.

Oldenburg was the original networker, a multi-linguist who cultivated connections with all the “experimental philosophers” of seventeenth-century Europe. His approach is exemplified in a letter he sent in 1667 to the Italian naturalist Marcello Malpighi in Sicily:
"We earnestly beg you to be so good as to let us know of all that is noteworthy – of which there is so much in your island – concerning plants, or minerals, or animals and insects, especially the silkworm and its productions, and finally concerning meteorology and earthquakes, known to you or to other ingenious men."

The avowed intention of the Royal Society was to collect facts without rushing to formulate theories about them – witness Isaac Newton’s famous (and somewhat disingenuous) “hypotheses non fingo”. Yet Oldenburg’s choices reflected the spirit of his times, in which wealthy collectors and antiquaries stocked their cabinets with “curiosities” – strange and bizarre objects from around the world. Like them, the collectors of ‘facts’ at the Royal Society were often drawn to reports that were entertaining, amazing or strange rather than necessarily informative.

The editorial power wielded by Oldenburg – his contemporary Robert Hooke, demonstrator for the Royal Society, called him a “dog” for perceived biases in his record-taking – was inherited by his successors as secretary. So, however, was the considerable financial burden of producing the Transactions. But when in 1752 the Royal Society first took official responsibility for the journal (it had previously been something more akin to a news-sheet), the organization felt that it needed to think about its reputation. In the face of complaints about the poor quality of some of the content, which placed sensation before reliability, the council members figured that they needed a mechanism for making editorial decisions that were seen to be fair and well grounded.

At this time (and for many years subsequently), papers in Phil. Trans. were first read at the Society’s meetings before being published. So it was decided that Fellows would hold a secret ballot on whether, after hearing a contribution, it should be included for publication, with or without modifications. (Fyfe doubts whether this procedure was always followed to the letter.) It was a kind of peer review, after a fashion – albeit one conducted by what amounted to show of hands among a tiny clique.

In 1832 that procedure for collective editorial decision-making was extended when the Society began to solicit written reports from two reviewers – the first real instance of what we would now recognize as peer review. Fyfe notes that some other societies were starting to introduce this system for their journals at around the same time – though Phil. Trans. was certainly the most prestigious title to do so – and that the use of two referees was standard practice by the mid-nineteenth century. “Phil. Trans. became a modern scientific journal in the nineteenth century”, she says – indeed, it more or less created the template for what that meant.

The secretary for most of that century’s second half, the physicist George Stokes, was instrumental in this increasing professionalization of the publication process. His role, and that of his successors, was now becoming something like that of a journal editor as we know it today. It was during this period that commercial scientific journals began to flourish, such as the idiosyncratic Chemical News edited and published by the equally idiosyncratic William Crookes, and most famously Nature, started in 1869 by the astronomer Norman Lockyer. While these commercial ventures could publish what they liked – the peer review system at Nature was still very informal in the 1960s – learned journals such as Phil. Trans. were concerned to show their objectivity and impartiality: attributes that any modern scholarly journal now likes to claim.

This, however, was all relative. For one thing, until the 1970s, if you wanted to submit to Phil. Trans. but were not a Fellow of the Royal Society then you needed the blessing of someone who was. This meant that you needed to be plugged into the right networks, and it encouraged systems of patronage, even nepotism: Lord Kelvin was particularly active as a sponsor of submissions, often those of his former students. Schemes of this kind persisted until recently. Notably, the Proceedings of the National Academy of Sciences USA began to accept regular submissions from non-members of the Academy, without the need for an NAS sponsor, only in 1995; and not until 2010, after much criticism, did the journal do away with the principle of “communicating” submissions via Academy members, which almost guaranteed publication.

What’s more, the Phil. Trans. referees came from a limited pool. In the mid-nineteenth century, about half of them were members of the Royal Society council, and all the others had to be Fellows.

The interesting question is how much these developments changed the nature of what was published. Pre-selection procedures by the Fellows doubtless excluded a lot of bad material, so Fyfe thinks that one of the main consequences of formal peer review was not that it raised the quality of published research so much as that it encouraged authors to develop a particular literary style to improve their chances: to reduce speculation and observe the brevity, sobriety and even blandness that some would say afflicts the scientific literature today. (Darwin’s paper was criticized by geologist Adam Sedgwick for its loquaciousness.)

With alternatives to the “standard model” of peer review now proliferating, from the “techniques-only” assessments of PLoS ONE and Scientific Reports to the increasing acceptance of preprint servers as venues of de facto publication, it seems particularly timely to consider how science publication evolved and acquired its customs and habits. Perhaps peer review has become something of a shibboleth. Certainly it seems sometimes to have mutated from a routine check and trash filter to a dictatorial, almost paranoid gatekeeper: biologists complain that no referee seems to consider they have done their job unless they have suggested half a dozen additional experiments. There is surely something in the famous suggestion that Watson and Crick’s 1953 paper would not have found favour with Nature’s reviewers today. And the broadening of reviewing networks, while surely beneficial in many ways, hasn’t eliminated accusations (some well founded) of favouritism, discrimination and bias towards big-name labs. There is a fine balance to be found between rigour and permissiveness, one that can fall foul of conservatism and petty box-checking as much as caprice. The story of Phil. Trans. opens a lively window on that discussion.

Monday, January 05, 2015

The First Emperor's rivers of mercury

This is a slightly extended version of my feature article in the latest issue of Chemistry World.

_________________________________________________________________________

The Chinese emperor had done all he could to become immortal, but in vain. His physicians had prepared herbal and alchemical elixirs, but none could stave off his decline. He had sent a minister on a voyage far over the eastern seas in search of a mythical potion of eternal life. But that expedition never returned, and now the quest seemed hopeless. So Qin Shi Huangdi, the first emperor of a unified China in the third century BC, had begun preparations for the next best thing to an endless life on earth. He would continue his cosmic rule from the spirit world, and his underground tomb would be a kind of palace for the afterlife, complete with its own army of life-size clay soldiers.

Those terracotta warriors lay hidden for two millennia beneath several metres of wind-deposited sandy soil a mile from the First Emperor’s burial mound at Mount Li (Lishan), to the northeast of the city of Xi’an in Shaanxi province of north-central China. They were rediscovered in 1974 by farmers digging a well, and Chinese archaeologists were astonished to find over the next decade that there were at least 8,000 of them, once brightly painted and equipped with clay horses and wooden chariots. As further excavation revealed the extent of the emperor’s mausoleum, with offices, stables and halls along with clay figures of officials, acrobats and labourers and life-size bronze animals, it became clear that the Han-dynasty historian Sima Qian, writing in the second century BC, hadn’t been exaggerating after all. Sima Qian claimed that 700,000 men had worked on the emperor’s tomb, constructing entire palaces, towers and scenic landscapes through which the emperor’s spirit might roam.


The Terracotta Army was created to serve and protect China’s first emperor.

No one knows what other wonders the mausoleum might house, for the main burial chamber – a football-pitch-sized hall beneath a great mound of earth – remains sealed. Most enticing of all is a detail related by Sima Qian: “Mercury was used to fashion the hundred rivers, the Yellow River and the Long River [Yangtze], and the seas in such a way that they flowed.” This idea that the main chamber contains a kind of microcosm of all of China (as it was then recognized) with rivers, lakes and seas of shimmering mercury had long seemed too fantastic for modern historians to grant it credence. But if Sima Qian had not been inventing stories about other elaborate features of the mausoleum site, might this account of the tomb chamber be reliable too?

In the 1980s Chinese researchers found that the soil in the burial mound above the tomb indeed contains concentrations of mercury far above those in the soil elsewhere in this region. Now some archaeologists working on the site are quite ready to believe that the body of Qin Shi Huangdi may indeed lie amidst vast puddles of the liquid metal.

Yet it seems unlikely that anyone will gaze on such a sight in the foreseeable future. “We have no current plan to open the chambers”, says archaeologist Qingbo Duan of Northwest University in Xi’an, who led the mausoleum excavations from 1998 to 2008. “We have no mature technologies and effective measures to protect the relics.” So can we ever know the truth about the First Emperor’s rivers of mercury?

A harsh legacy

The construction of this immense mausoleum started fully 36 years before Qin Shi Huangdi’s death in 210 BC, when he was merely King Zheng of the kingdom of Qin – a realm occupying the valley of the Wei, a major tributary of the Yellow River, now in Shaanxi. Qin was one of seven states within China at that time, all of which had been vying for supremacy since the fifth century BC in what is known as the Warring States period. By finally defeating the last of the rival states, Qi in modern Shandong, in 221 BC, Zheng became Qin Shi Huangdi (“the First Qin Emperor”), ruler of all China.


Qin Shi Huangdi was China’s first emperor, and he hoped to use alchemical elixirs and medicines to sustain his life indefinitely.

Some etymologies trace the name “China” itself to the Qin dynasty (pronounced “Chin”), and so you might imagine that it would have a very special status in Chinese history. But the unified state barely outlasted the death of Qin Shi Huangdi himself – four years later it succumbed to a rebellion that became the much more durable Han dynasty (206 BC – 220 AD) – and it is regarded with little fondness in China today, for the First Emperor was a tyrant who ruled with brutal force. He compelled his subjects to achieve marvellous feats of engineering – he constructed the Great Wall from existing fragmentary defences on the northern frontier, as well as the Lingqu Canal connecting the Yangtze to the Pearl River delta in the south, not to mention his own mausoleum. The First Emperor also introduced standardization of weights and measures and of the Chinese writing system. But in an attempt to expunge all previous histories and ideas, he ordered the burning of many precious documents and works of philosophy and poetry: a treasury of learning that was lost forever. The Qin rulers followed a philosophical tradition called Legalism, which advocated the ruthless suppression of all criticism and opposition.

Since much of what we know about the Qin era comes from Sima Qian, who was writing to justify the Han ascendancy over the previous rulers, it’s possible that Qin Shi Huangdi gets something of a raw deal. But there’s reluctant admiration in the way Sima Qian describes the magnificence of Qin Shi Huangdi’s tomb, which was unlike anything that had been attempted before. The Emperor didn’t just see himself as a worldly ruler – he considered his empire to be blessed by heaven, and he placed himself in the line of “sage-kings” going back to China’s mythical origins. Like all Chinese at that time, he believed that after death people’s spirits didn’t travel to some heavenly place removed from the physical world, but that the spiritual and mundane worlds coexisted, so that in some sense his rule would continue on earth after death. There was, then, nothing symbolic about all the trappings of power that would surround him in his tomb – they would be useful in the times to come.

“In ancient China, people believed the souls of the dead would live forever underground, so they would prepare almost everything from real life to bury for use in the afterlife”, says Yinglan Zhang, an archaeologist at the Shaanxi History Museum in Xi’an and deputy director of the mausoleum excavations from 1998 to 2007. Given what has already been unearthed, he says “there should be many other cultural artifacts or relics still buried in the tomb chamber or other burial pits around the tomb – maybe things beyond our imagination.”

The pits housing the Terracotta Army lie outside the 2 by 0.8 km boundary wall of the burial mound. Inside this wall are ritual buildings once containing food and other items that the emperor would need to sustain him. There are chambers full of stone armour that could protect against evil spirits, and it is possible that the emperor himself might not have been interred alone in the main chamber: Sima Qian says that officials were buried there with him, and it’s not clear if they were alive or dead at the time.

The mound itself was originally about 0.5 by 0.5 km (erosion has shrunk it a little), and the burial chamber lies about 30-40 m below the original ground surface. Its shape has been mapped out by measuring gravity anomalies in the ground – an indication of hollow or less dense structures – and by looking for changes in the electrical resistivity of the soil, which result from buried structures or cavities. In this way, Chinese archaeologists have figured out the basic layout of the tomb over the past several decades. The chamber is about 80 m east-west by 50 m north-south, surrounded by a wall of closely packed earth and – to judge from other ancient Chinese tombs – perhaps waterproofed with stone covered in red lacquer. In 2000 researchers discovered that towards the edge of the mound a drainage dam helps to keep water away from the chamber. So there’s some reason to believe that the tomb itself might be relatively intact: neither wholly collapsed nor water-filled.


The burial mound of China’s first emperor, near Xi’an in Shaanxi province.

Measurements of the soil resistivity in the region of the chamber have also revealed another intriguing feature. They show a so-called phase anomaly, which is produced when an electrical current is reflected from a conducting surface, such as a metal. Could this be a sign of pools and streams of mercury?

The first detailed study of mercury levels in the mound was conducted in the early 1980s, when researchers from the Institute of Geophysical and Geochemical Exploration of the China Institute of Geo-Environment Monitoring sank small boreholes into the soil over an area of 12,000 m2 in the centre of the mound and extracted soil samples for analysis. Whereas soils outside this central region contained an average of 30 parts per billion of mercury, the average above the chamber was 250 ppb, and in some places rose to 1500 ppb. A second survey in 2003 found much the same: unusually high concentrations of mercury both in the soil itself and in the interstitial vapours between grains.

The grid of borehole samples allowed the Chinese researchers to make a rough map of how the high levels of mercury are distributed. “There is no unusual amount of mercury in the northwest corner of the tomb”, says Duan, “while the mercury level is highest in the northeast and second highest in the south.” If you squint at this distribution, you can persuade yourself that it matches the locations of the two great rivers of China – the Yellow and Yangtze – as seen from the ancient Qin capital of Xianyang, close to modern Xi’an. “The distribution of mercury level corresponds to the location of waterways in the Qin empire”, Duan asserts. In other words, the tomb might indeed contain a facsimile of the empire, watered by mercury.



The mercury levels in soils above the tomb chamber (top), and a map of China from the eleventh century AD (bottom) showing the rivers, especially the Yellow (north) and Yangtze (south). In Qin times the knowledge of China’s topography would have been much more rudimentary, but the locations of the main rivers would have been known roughly.

Zhang isn’t so sure that one can conclude much from the present-day mercury distribution, however. He thinks that the tomb chamber must have collapsed thousands of years ago, just like the pits containing the Terracotta Army. “The mercury will have volatilized into nearby soils during this long time, so it would be impossible to show up detailed information that we can connect with particular rivers or lakes”, he says.

Silver water

In any case, just because the mausoleum apparently contains a lot of mercury doesn’t in itself verify Sima Qian’s account. Mercury had other uses too, particularly in alchemy, which has some of its oldest roots in China. In the West this art was commonly associated with attempts to make gold from other metals, and some Chinese alchemists tried that too – in 144 BC the Han Emperor Jingdi decreed that anyone caught trying to make counterfeit gold should be executed. But Chinese alchemy was more oriented towards medicinal uses, in particular elixirs of immortality. Some believed that alchemical gold could have this effect: the Han emperor Xuandi in 60 BC appointed the scholar Liu Xiang to make alchemical gold to prolong his life.

Others thought that the elixir of life lay elsewhere – and perhaps mercury (in Chinese shui yin, literally “water silver”) was the key. Chinese legend tells of one Huang An, who prolonged his life for at least 10,000 years by eating mercury sulphide (the mineral cinnabar). Qin Shi Huangdi was said to have consumed wine and honey laden with cinnabar thinking it would prolong his life, and some have speculated that he might have hastened his death with these “medicines”. During the Warring States period, mercury was a common ingredient of medicines, being used to treat infected sores, scabies, ringworm and (even more alarmingly) as a sedative for mania and insomnia.

It had other uses too. Cinnabar itself is red, and it was long used in China for art and decoration – its artificial form, produced in the West since the Roman era, became known as the pigment vermilion. The mineral has been found on the “oracle bones” used for divination during the Shang Dynasty of Bronze Age China (second millennium BC).


Cinnabar (HgS) was widely used in ancient China for decoration, medicine and alchemy.

One of the most important uses of mercury at this time has a particularly alchemical tinge. Gold and silver dissolve in mercury to form amalgams, and such mixtures were used for gilt plating. The amalgam was rubbed on and heated to evaporate the mercury and leave behind a gleaming coat of precious metal. Such mixtures also featured in alchemical elixirs: the Daoist concept of yin and yang, the two fundamental and complementary principles of life, encouraged an idea that cold, watery (yin) mercury and bright, fiery (yang) gold might be blended in ideal proportions to sustain vitality. Such ideas, says Duan, “led astray the ancient scientific aspects of mercury use until a re-awakening in the Song dynasty” (10th-13th centuries AD).

Throughout antiquity cinnabar was the source of all mercury metal, which can be extracted simply by heating. There was a lot of cinnabar in China, particularly in the western regions such as Sichuan. Shaanxi alone contains almost a fifth of all the cinnabar reserves in the country, and there are very ancient mines in Xunyang county in the south of the province that are a good candidate source of the mercury apparently in the First Emperor’s tomb.

To extract mercury from cinnabar one need only roast it in air, converting the sulphur to SO2 while the mercury is released as vapour that can then be condensed. Since mercury boils at 357 °C, this process needs temperatures of little more than 350 °C, well within the capabilities of Qin-era kilns. Of course, anyone trying this method in an unsealed container – closed chambers weren’t used until the Han period – risked serious harm.
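In chemical shorthand (a standard textbook reaction, nothing specific to the Qin excavations), the roasting step is simply:

$$\mathrm{HgS} + \mathrm{O_2} \;\xrightarrow{\;\sim 350\,^{\circ}\mathrm{C}\;}\; \mathrm{Hg}\,(\mathrm{vapour}) + \mathrm{SO_2}$$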

But despite there being a mature mercury-refining technology by the time of the Qin, and although Zhang attests that “the people of the Qin Dynasty had some basic chemical knowledge”, Duan argues that Chinese alchemy was still in its infancy in that period. In particular, he says, there is no good reason to think that the practice of soaking dead bodies in mercury to prevent their decay, common during the Song dynasty in the 10th-13th centuries AD, was used as early as the Qin dynasty. So even though mercury, either as cinnabar or as the elemental metal, has been found in tombs dating back as far as the second millennium BC, it’s not clear why it was put there. Might its toxicity have acted as a deterrent to grave-looters? Probably not – the dangers of mercury fumes were not recognized until Han times. So if, as it seems, there’s a lot of mercury in Qin Shi Huangdi’s burial chamber, it’s unlikely to be either a preservative or an anti-theft device. (Sima Qian says that the First Emperor’s tomb was, however, booby-trapped with crossbows “rigged so that they would immediately shoot down anyone attempting to break in”, suggesting that if archaeologists were ever to try opening it up, they might face Indiana Jones-style hazards.)

Yet even if this mercury was indeed used for fantastical landscaping, Duan doubts that there can have been much of it. Based on estimates of mercury production from the Song era and allowing for the imperfections of the earlier refinement process, he thinks the chamber might have contained at most 100 tons of the liquid metal: around 7 m3.
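That conversion is simple arithmetic: mercury’s density is about 13.5 tonnes per cubic metre, so

$$V = \frac{m}{\rho} \approx \frac{1.0\times10^{5}~\mathrm{kg}}{1.35\times10^{4}~\mathrm{kg~m^{-3}}} \approx 7.4~\mathrm{m^{3}}$$

– rivers and seas on a rather modest scale.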

We might never be able to check that. “Right now, our archaeological work is focused on deducing the basic layout” of the tomb, says Duan. Because even a small breach in the seal could admit water or air that might damage whatever lies within, even robot-based exploration of the interior is ruled out. “If the chamber was opened even using a robot or drilling, the balance of the situation would be broken and the buried objects would deteriorate quickly”, says Zhang.

So if we’re ever going to peek inside, it will have to be with better scientific techniques than are currently available. “I dream of a day when technology will shed light on all that is buried there, without disturbing the sleeping emperor and his two-thousand-year-old underground empire”, says Yongqi Wu, curator of the Qin Shi Huang Mausoleum Museum at the Lishan site. Maybe these concerns to preserve the unknown heritage will guarantee Qin Shi Huangdi a kind of immortality after all.

Friday, January 02, 2015

There goes the neighbourhood



How would you like to live on Heinrich Himmlerstraat? Maybe not? Don’t worry – to my knowledge no street named after the SS Reichsführer exists. But if you do fancy living on a street dedicated to an ardent Nazi, the one pictured above will do you: it is Lenardstraat in Nijmegen. A Google search for this location will take you to the Nijmegen city website, where we’re told that “Philipp Lenard was a German physicist, and received the 1905 Nobel Prize in physics for his research on cathode rays and their properties.” Sounds worthy enough, huh? They forgot to mention that Lenard was also a fervent supporter of the Nazis who hosted Hitler in his home after the Führer-to-be was released from a Bavarian prison following the failed Beer Hall Putsch of 1923. Lenard was one of the main progenitors of “Aryan physics”, which he proclaimed as the only true physics, in distinction from the pernicious “Jewish physics” promulgated by Einstein and his acolytes. In 1924 Lenard and his associate Johannes Stark published an article called “The Hitler spirit and science”, in which they said that Hitler and his comrades “appear to us as God’s gifts from times of old when races were purer, people were greater, and minds were less deluded.” According to Lenard, all of the great scientists of former times (including Galileo!) were of Nordic-Aryan stock.

Should we hold all this against Lenard the physicist? Well, certainly it does not invalidate his important work on cathode rays and the photoelectric effect, any more than Stark’s adulation of Hitler should prevent us from recognizing his discovery of the Stark effect (which won him a Nobel too). But acknowledging scientific achievement and precedence is one thing; celebrating committed Nazis and anti-Semites by naming streets after them is another. (The University of Heidelberg quietly ditched the Philipp Lenard Institute – which once boasted of being “Jew-free” – after his death.)

You might think that the matter would be especially sensitive in the Netherlands, which suffered so greatly under the Nazis. Yet the Dutch academic who pointed out this road to me tells me that his efforts to raise the issue have got nowhere. “After seeking support from Dutch historians of science and sending a copy of my letter to the local press”, he says, “I received a letter from the mayor that a name change will not be considered since 80% of the street inhabitants resisted a change and nowadays nobody knows who he was.”

Well, this is who Lenard was: one of the most unpleasant of the distinguished scientists of the twentieth century. It’s of course a valid and difficult question at what stage (if ever) we draw a veil over the dubious character of a historical figure and simply recognize their achievements. But given the furious debate that still rages today in the Netherlands over how to think about wartime conduct, memories in this case seem disturbingly short.

Thursday, December 18, 2014

The Future of the Brain

I’ve a short review online with Prospect of The Future of the Brain (Princeton University Press), edited by Gary Marcus & Jeremy Freeman. Here’s a slightly longer version. But I will say more about the book and topic in my Prospect blog soon.

_________________________________________________________________________

If you want a breezy, whistle-stop tour of the latest brain science, look elsewhere. But if you’re up for chunky, rather technical expositions by real experts, this book repays the effort. The message lies in the very (and sometimes bewildering) diversity of the contributions: despite its dazzling array of methods to study the brain, from fMRI to genetic techniques for labeling and activating individual neurons, this is still a primitive field largely devoid of conceptual and theoretical frameworks. As the editors put it, “Where some organs make sense almost immediately once we understand their constituent parts, the brain’s operating principles continue to elude us.”

Among the stimulating ideas on offer is neuroscientist Anthony Zador’s suggestion that the brain might lack unifying principles altogether, merely getting the job done with a makeshift “bag of tricks”. There’s fodder too for sociologists of science: several contributions evince the spirit of current projects that aim to amass dizzying amounts of data about how neurons are connected, seemingly in the blind hope that insight will fall out of the maps once they are detailed enough.

All the more reason, then, for the skeptical voices reminding us that “data analysis isn’t theory”, that current neuroscience is “a collection of facts rather than ideas”, and that we don’t even know what kind of computer the brain is. All the same, the “future” of the title might be astonishing: will “neural dust” scattered through the brain record all our thoughts? And would you want that uploaded to the Cloud?

Wednesday, December 17, 2014

The restoration of Chartres: second thoughts


Several people have asked me what I think about the “restoration” of Chartres Cathedral, in the light of the recent piece by Martin Filler for the New York Review of Books. (Here, in case you’re wondering, is why anyone should wish to solicit my humble opinion on the matter.) I have commented on this before here, but the more I hear about the work, the less sanguine I feel. Filler makes some good arguments against, the most salient, I think, being the fact that this contravenes normal conservation protocols: the usual approach now, especially for paintings, is to do what we can to rectify damage (such as reattaching flakes of paint) but otherwise to leave alone. In my earlier blog I mentioned the case of York Minster, where masons actively replace old, crumbling masonry with new – but this is a necessary affair to preserve the integrity of the building, whereas slapping on a lick of paint isn’t. And the faux marble on the columns looks particularly hideous and unnecessary. To judge from the photos, the restoration looks far more tacky than I had anticipated.

It is perhaps not ideal that visitors to Chartres come away thinking that the wonderful, stark gloom is what worshippers in the Middle Ages would have experienced too. But it seems unlikely that the new paint job is going to get anyone closer to an authentic experience. Worse, it’s the kind of thing that, once done, is very hard to undo. It’s good to recognize that the reverence with which we generally treat the fabric of old buildings now is very different from the attitudes of earlier times – bishops would demand that structures be knocked down when they looked too old-fashioned and replaced with something à la mode, and during the nineteenth-century Gothic revival architects like Eugène Viollet-le-Duc would take all kinds of liberties with their “restorations”. But this is no reason why we should act the same way. So while there is still a part of me that is intrigued by the thought of being able to see the interior of Chartres in something close to its original state, I have come round to thinking that the cathedral should have been left alone.

Monday, December 15, 2014

Beyond the crystal

Here’s my Material Witness column from the November issue of Nature Materials (13, 1003; 2014).

___________________________________________________________________________

The International Year of Crystallography has understandably been a celebration of order. From René-Just Haüy’s prescient drawings of stacked cubes to the convolutions of membrane proteins, Nature’s Milestones in Crystallography revealed a discipline able to tackle increasingly complex and subtle forms of atomic-scale regularity. But it seems fitting, as the year draws to a close, to recognize that the road ahead is far less tidy. Whether it is the introduction of defects to control semiconductor band structure [1], the nanoscale disorder that can improve the performance of thermoelectric materials [2], or the creation of nanoscale conduction pathways in graphene [3], the future of solid-state materials physics seems increasingly to depend on a delicate balance of crystallinity and its violation. In biology, the notion of “structure” has always been less congruent with periodicity, but ever since Schrödinger’s famous “aperiodic crystal” there has been a recognition that a deeper order may underpin the apparent molecular turmoil of life.

The decision to redefine crystallinity to encompass the not-quite-regularity of quasicrystals is, then, just the tip of the iceberg when it comes to widening the scope of crystallography. Even before quasicrystals were discovered, Ruelle asked if there might exist “turbulent crystals” without long-range order, exhibiting fuzzy diffraction peaks [4]. The goal of “generalizing” crystallography beyond its regular domain has been pursued most energetically by Mackay [5], who anticipated the link between quasicrystals and quasiperiodic tilings [6]. More recently, Cartwright and Mackay have suggested that structures such as crystals might be best characterized not by their degree of order as such but by the algorithmic complexity of the process by which they are made – making generalized crystallography an information science [7]. As Mackay proposed, “a crystal is a structure the description of which is much smaller than the structure itself, and this view leads to the consideration of structures as carriers of information and on to wider concerns with growth, form, morphogenesis, and life itself” [5].

These ideas have now been developed by Varn and Crutchfield to provide what they call an information-theoretic measure for describing materials structure [8]. Their aim is to devise a formal tool for characterizing the hitherto somewhat hazy notion of disorder in materials, thereby providing a framework that can encompass anything from perfect crystals to totally amorphous materials, all within a rubric of “chaotic crystallography”.

Their approach is again algorithmic. They introduce the concept of “ε-machines”, which are minimal operations that transform one state into another [9]: for example, one ε-machine can represent the appearance of a random growth fault. Varn and Crutchfield present nine ε-machines relevant to crystallography, and show how their operation to generate a particular structure is a kind of computation that can be assigned a Shannon entropy, like more familiar computations involving symbolic manipulations. Any particular structure or arrangement of components can then be specified in terms of an initially periodic arrangement of components and the amount of ε-machine computation needed to generate from it the structure in question. The authors demonstrate how, for a simple one-dimensional case, diffraction data can be inverted to reconstruct the ε-machine that describes the disordered material structure.
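To get a feel for the information-theoretic view, here is a deliberately crude Python sketch (my own toy illustration, far simpler than Varn and Crutchfield’s ε-machine reconstruction): it estimates the entropy rate of one-dimensional stacking sequences from block statistics. A perfect periodic crystal comes out at essentially zero bits per layer – its description is far smaller than the structure itself, just as Mackay says – while fully random stacking is incompressible.

    import math
    import random
    from collections import Counter

    def block_entropy(seq, L):
        # Shannon entropy (bits) of the distribution of length-L blocks in seq
        blocks = Counter(seq[i:i + L] for i in range(len(seq) - L + 1))
        n = sum(blocks.values())
        return -sum((c / n) * math.log2(c / n) for c in blocks.values())

    def entropy_rate(seq, L=4):
        # Bits per site, estimated as H(L) - H(L-1)
        return block_entropy(seq, L) - block_entropy(seq, L - 1)

    periodic = "ABC" * 1000                                       # ideal stacking
    faulted = "".join(random.choice("ABC") for _ in range(3000))  # random stacking

    print(entropy_rate(periodic))  # ~0 bits per layer
    print(entropy_rate(faulted))   # ~log2(3) ≈ 1.58 bits per layer

Partially faulted sequences fall between these extremes, which is the regime that “chaotic crystallography” sets out to quantify.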

Quite how this will play out in classifying and distinguishing real materials structures remains to be seen. But it surely underscores the point made by D’Arcy Thompson, the pioneer of morphogenesis, in 1917: “Everything is what it is because it got that way” [10].

1. Seebauer, E. G. & Noh, K. W. Mater. Sci. Eng. Rep. 70, 151-168 (2010).
2. Snyder, G. J. & Toberer, E. S. Nat. Mater. 7, 105-114 (2008).
3. Lahiri, J., Lin, Y., Bozkurt, P., Oleynik, I. I. & Batzill, M. Nat. Nanotech. 5, 326-329 (2010).
4. Ruelle, D. Physica A 113, 619-623 (1982).
5. Mackay, A. L. Struct. Chem. 13, 215-219 (2002).
6. Mackay, A. L. Physica A 114, 609-613 (1982).
7. Cartwright, J. H. E. & Mackay, A. L. Phil. Trans. R. Soc. A 370, 2807-2822 (2012).
8. Varn, D. P. & Crutchfield, J. P. Preprint at http://www.arxiv.org/abs/1409.5930 (2014).
9. Crutchfield, J. P. Nat. Phys. 8, 17-24 (2012).
10. Thompson, D’A. W. On Growth and Form. Cambridge University Press, Cambridge, 1917.

Friday, December 12, 2014

Why some junk DNA is selfish, but selfish genes are junk

“Horizontal gene transfer is more common than thought”: that's the message of a nice article in Aeon. I first came across it via a tweeted remark to the effect that this was the ultimate expression of the selfish gene. Why, genes are so selfish that they’ll even break the rules of inheritance by jumping right into the genomes of another species!

Now, that is some trick. I mean, the gene has to climb out of its native genome – and boy, those bonds are tough to break free from! – and then swim through the cytoplasm to the cell wall, wriggle through and then leap out fearlessly into the extracellular environment. There it has to live in hope of a passing cell before it gets degraded, and if it’s in luck then it takes out its diamond-tipped cutting tool and gets to work on…

Wait. You’re telling me the gene doesn’t do all this by itself? You’re saying that there is a host of genes in the donor cell that helps it happen, and a host of genes in the receiving cell to fix the new gene in place? But I thought the gene was being, you know, selfish? Instead, it’s as if it has sneaked into a house hoping to occupy it illegally, only to find a welcoming party offering it a cup of tea and a bed. Bah!

No, but look, I’m being far too literal about this selfishness, aren’t I? Well, aren’t I? Hmm, I wonder – because look, the Aeon page kindly directs me to another article by Itai Yanai and Martin Lercher that tells me what this selfishness business is all about.

And I think: have I wandered into 1976?

You see, this is what I find:
“Yet viewing our genome as an elegant and tidy blueprint for building humans misses a crucial fact: our genome does not exist to serve us humans at all. Instead, we exist to serve our genome, a collection of genes that have been surviving from time immemorial, skipping down the generations. These genes have evolved to build human ‘survival machines’, programmed as tools to make additional copies of the genes (by producing more humans who carry them in their genomes). From the cold-hearted view of biological reality, we exist only to ensure the survival of these travellers in our genomes... The selfish gene metaphor remains the single most relevant metaphor about our genome.”

Gosh, that really is cold-hearted, isn’t it? It makes me feel so sad. But what leads these chaps to this unsparing conclusion, I wonder?

This: “From the viewpoint of natural selection, each gene is a long-lived replicator, its essential property being its ability to spawn copies.”

Then evolution, it seems, isn’t doing its job very well. Because, you see, I just took a gene and put it in a beaker and fed it with nucleotides, and it didn’t make a single copy. It was a rubbish replicator. So I tried another gene. Same thing. The funny thing was, the only way I could get the genes to replicate was to give them the molecular machinery and ingredients. Like in a PCR machine, say – but that’s like putting them on life support, right? The only way they’d do it without any real intervention was if I put the gene in a genome in a cell. So it really looked to me as though cells, not genes, were the replicators. Am I doing something wrong? After all, I am reliably informed that the gene “is on its own as a ‘replicator’” – because “genes, but no other units in life’s hierarchy, make exact copies of themselves in a pool of such copies”. But genes no more “make exact copies of themselves in a pool of such copies” than printed pages (in a stack of other pages) make exact copies of themselves on the photocopier.

Oh, but silly me. Of course the genes don’t replicate by themselves! It is on its own as a replicator but doesn’t replicate on its own! (Got that?) No, you see, they can only do the job all together – ideally in a cell. “When looking at our genome”, say Yanai and Lercher, “we might take pride in how individual genes co-operate in order to build the human body in seemingly unselfish ways. But co-operation in making and maintaining a human body is just a highly successful strategy to make gene copies, perfectly consistent with selfishness.”

To be honest, I’ve never taken very much pride in what my genes do. But anyway: perfectly consistent with selfishness? Let me see. I pay my taxes, I obey the laws, I contribute to charities that campaign for equality, I try to be nice to people, and a fair bit of this I do because I feel it is a pretty good thing to be a part of a society that functions well. I figure that’s probably best for me in the long run. Aha! – so what I do is perfectly consistent with selfishness. Well yes, but look, you’re not going to call me selfish just because I want to live in a well-ordered society, are you? No, but then I have intentions and thoughts of the future, I have acquired moral codes and so on – genes don’t have any of these things. Hmm… so how exactly does that make the metaphor of “selfishness” work? Or more precisely, how does it make selfishness a better metaphor than cooperativeness? If “genes don’t care”, then neither metaphor is better than the other. It’s just stuff that happens when genes get together.

But no, wait, maybe I’m missing the point. Genes are selfish because they compete with each other for limited resources, and only the best replicators – well no, only those housed in the cells or organisms that are themselves the best at replicating their genomes – survive. See, it says here: “Those genes that fail at replicating are no longer around, while even those that are good face stiff competition from other replicators. Only the best can secure the resources needed to reproduce themselves.”

This is the bullshit at the heart of the issue. “Good genes” face stiff competition from who exactly? Other replicators? So a phosphatase gene is competing with a dehydrogenase gene? (Yeah, who would win that fight?) No. No, no, no. This, folks, this is what I would bet countless people believe because of the bad metaphor of selfishness. Yet the phosphatase gene might well be doomed without the dehydrogenase gene. They need each other. They are really good friends. (These personification metaphors are great, aren’t they?) If the dehydrogenase gene gets better at its job, the phosphatase gene further down the genome just loves it, because she gets the benefit too! She just loves that better dehydrogenase. She goes round to his place and…

Hmm, these metaphors can get out of hand, can’t they?

No, if the dehydrogenase gene is competing with anyone, it’s with other alleles of the dehydrogenase gene. Genes aren’t in competition, alleles are.

(Actually even that doesn’t seem quite right. Organisms compete, and their genetic alleles affect their ability to compete. But this gives a sufficient appearance of competition among alleles that I can accept the use of the word.)

So genes only get replicated (by and large) in a genome. So if a gene is “improved” by natural selection, the whole genome benefits. But that’s just a side result – the gene doesn’t care about the others! Yet this is precisely the point. Because the gene “doesn’t care”, all you can talk about is what you see, not what you want to ascribe, metaphorically or otherwise, to a gene. An advantageous gene mutation helps the whole genome replicate. It’s not a question of who cares or who doesn’t, or what the gene “really” wants or doesn’t want. That is the outcome. “Selfishness” doesn’t help to elucidate that outcome – it confuses it.

“So why are we fooled into believing that humans (and animals and plants) rather than genes are what counts in biology?” the authors ask. They give an answer, but it’s not the right one. Higher organisms are a special case, of course, especially ones that reproduce sexually – it’s really cells that count. We’re “fooled” because cells can reproduce autonomously, but genes can’t.

So cells are where it’s at? Yes, and that’s why this article by Mike Lynch et al. calling for a meeting of evolutionary theory and cell biology is long overdue (PNAS 111, 16990; 2014). For one thing, it might temper erroneous statements like this one that Yanai and Lercher make: “Darwin showed that one simple logical principle [natural selection] could lead to all of the spectacular living design around us.” As Lynch and colleagues point out, there is abundant evidence that natural selection is just one of several evolutionary processes that have shaped cells and simple organisms: “A commonly held but incorrect stance is that essentially all of evolution is a simple consequence of natural selection.” They point out, for example, that many pathways to greater complexity of both genomes and cells don’t confer any selective advantage.

The authors end with a tired Dawkinsesque flourish: “we exist to serve our genome”. This statement has nothing to do with science – it is akin to the statement that “we are at the pinnacle of evolution”, but looking in the other direction. It is a little like saying that we exist to serve our minds – or that water falls on mountains in order to run downhill. It is not even wrong. We exist because of evolution, but not in order to do anything. Isn’t it strange how some who preen themselves on facing up to life’s lack of purpose then go right ahead and give it back a purpose?

The sad thing is that that Aeon article is actually all about junk DNA and what ENCODE has to say about it. It makes some fair criticisms of ENCODE’s dodgy definition of “function” for DNA. But it does so by examining the so-called LINE-1 elements in genomes, which are non-coding but just make copies of themselves. There used to be a word for this kind of DNA. Do you know what that word was? Selfish.

In the 1980s, geneticists and molecular biologists such as Francis Crick, Leslie Orgel and Ford Doolittle used “selfish DNA” in a strict sense, to refer to DNA sequences that just accumulated in genomes by making copies of themselves – and which did not themselves affect the phenotype (W. F. Doolittle & C. Sapienza, Nature 284, 601, and L. E. Orgel & F. H. C. Crick, Nature 284, 604; 1980). This stuff not only had no function, it messed things up if it got too rife: it could eventually be deleterious to the genome that it infected. Now that’s what I call selfish! – something that acts in a way that is to its own benefit in the short term while benefitting nothing else, and which ultimately harms everything.

So you see, I’m not against the use of the selfish metaphor. I think that in its original sense it was just perfect. Its appropriation to describe the entire genome – as an attribute of all genes – wasn’t just misleading, it also devalued a perfectly good use of the term.

But all that seems to have been forgotten now. Could this be the result of some kind of meme, perhaps?

Monday, December 08, 2014

Chemistry for the kids - a view from the vaults


At some point this is all going to become a more coherently thought-out piece, but right now I just want to show you some of the Chemical Heritage Foundation’s fabulous collection of chemistry kits through the ages. It is going to form the basis of an exhibition at some point in the future, so consider this a preview.

There is an entire social history to be told through these boxes of chemistry for kids.



Here's one of the earliest examples, in which the chemicals come in rather fetching little wooden bottles. That’s the spirit, old chap!



I like the warning on this one: if you’re too little or too dumb to read the instructions, keep your hands off.



Sartorial tips here for the young chemist, sadly unheeded today. Tuck those ties in, mind – you don’t want them dipping in the acid. Lots of the US kits, like this one, were made by A. C. Gilbert Co. of New Haven, Connecticut, which became one of the biggest toy manufacturers in the world. The intriguing thing is that the company began in 1909 as a supplier of materials for magic shows – Alfred Gilbert was a magician. So even at this time, the link between stage magic and chemical demonstrations, which had been established in the nineteenth century, was still evident.



Girls, as you know, cannot grow up to be scientists. But if they apply themselves, they might be able to adapt their natural domestic skills to become lab technicians. Of course, they’ll only want this set if it is in pink.



But if that makes you cringe, it got far worse. Some chemistry sets were still marketed as magic shows even in the 1940s and 50s. Of course, this required that you dress up as some exotic Eastern fellow, like a “Hindu prince or Rajah”. And he needs an assistant, who should be “made up as an Ethiopian slave”. “His face and arms should be blackened with burned cork… By all means assign him a fantastic name such as Allah, Kola, Rota or any foreign-sounding word.” Remember now, these kits were probably being given to the fine young boys who would have been formulating US foreign policy in the 1970s and 80s (or, God help us, even now).



OK, so boys and girls can both do it in this British kit, provided that they have this rather weird amalgam of kitchen and lab.



Don’t look too closely, though, at the Periodic Tables pinned to the walls on either side. With apologies for the rubbish image from my phone camera, I think you can get the idea here.



This is one of my favourites. It includes “Safe experiments in atomic energy”, which you can conduct with a bit of uranium ore. Apparently, some of the Gilbert kits also included a Geiger counter. Make sure an adult helps you, kids!



Here are the manuals for it – part magic, part nuclear.



But we are not so reckless today. Oh no. Instead, you get 35 “fun activities”… with “no chemicals”. Well, I should jolly well hope not!



This one speaks volumes about its times, which you can see at a glance was the 1970s. It is not exactly a chemistry kit in the usual sense, because for once the kids are doing their experiments outside. Now they are not making chemicals, but testing for them: looking for signs of pollution and contamination in the air and the waters. Johnny Horizon is here to save the world from the silent spring.



There is still a whiff of the old connection with magic here, and with the alchemical books of secrets (which are the subject of the CHF exhibition that brought me here).



But here we are now. This looks a little more like it.



What a contrast this is from the clean, shiny brave new world of yesteryear.



Many thanks to the CHF folks for dragging these things from their vaults.

Wednesday, December 03, 2014

Pushing the boundaries


Here is my latest Music Instinct column for Sapere magazine. I collected some wonderful examples of absurdly complicated scores while working on this, but none with quite the same self-mocking wit as the one above.

___________________________________________________________________

There’s no accounting for taste, as people say, usually when they are deploring someone else’s. But there is. Psychological tests since the 1960s have suggested that what people tend to look for in music, as in visual art, is an optimal level of complexity: not too much, not too little. The graph of preference versus complexity looks like an inverted U [1].

Thus we soon grow weary of nursery rhymes, but complex experimental jazz is a decidedly niche phenomenon. So what is the ideal level of musical complexity? By some estimates, fairly low. The Beatles’ music has been shown to get generally more complex from 1962 to 1970, based on analysis of the rhythmic patterns, statistics of melodic pitch-change sequences and other factors [2]. And as complexity increased, sales declined. Of course, there’s surely more to the trend than that, but it implies that “All My Loving” is about as far as most of us will go.
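
To give a feel for what “statistics of melodic pitch-change sequences” might mean, here is a minimal sketch in Python – not Eerola and North’s actual model, and the toy melodies are hypothetical – that scores a melody by the Shannon entropy of its pitch-interval distribution. A tune that recycles a few small intervals scores low; an angular, unpredictable line scores high.

```python
from collections import Counter
from math import log2

def interval_entropy(midi_pitches):
    """Shannon entropy (bits per interval) of a melody's distribution of
    pitch changes: a crude proxy for melodic complexity."""
    intervals = [b - a for a, b in zip(midi_pitches, midi_pitches[1:])]
    counts = Counter(intervals)
    total = len(intervals)
    return -sum(c / total * log2(c / total) for c in counts.values())

# Toy melodies as MIDI note numbers (60 = middle C):
nursery = [60, 60, 67, 67, 69, 69, 67]  # "Twinkle, Twinkle" opening
angular = [60, 63, 59, 66, 61, 70, 64]  # wide-leaping chromatic line

print(f"nursery rhyme: {interval_entropy(nursery):.2f} bits/interval")
print(f"angular line:  {interval_entropy(angular):.2f} bits/interval")
```

On this crude measure the nursery rhyme scores about 1.8 bits per interval and the angular line about 2.6; the inverted-U claim is simply that most listeners’ preferences peak somewhere between such extremes.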

But mightn’t there be value – artistic, if not commercial – in exploring the extremes of simplicity and complexity? Classical musicians evidently think so, whether it is the two-note drones of 1960s ultra-minimalist La Monte Young or the formidable, rapid-fire density of the “New Complexity” school of Brian Ferneyhough. Few listeners, it must be said, want to stay in these rather austere sonic landscapes for long.

But musical complexity needn’t be ideologically driven torture. J. S. Bach sometimes stacked up as many as six overlapping fugal voices, while the chromatic density of György Ligeti’s Atmosphères, with up to 56 string instruments all playing different notes, made perfect sense when used as the soundtrack to Stanley Kubrick’s 2001: A Space Odyssey.

The question is: what can the human mind handle? We have trouble following two conversations at once, but we seem able to handle musical polyphony without too much trouble. There are clearly limits to how much information we can process, but on the whole we probably sell ourselves short. Studies show that the more original and unpredictable music is, the more attentive we are to it – and often, relatively little exposure is needed before a move towards the complex end of the spectrum ceases to be tedious or confusing and becomes pleasurable. Acculturation can work wonders. Before pop music colonized Indonesia, gamelan was its pop music – and that, according to one measure of complexity rooted in information theory, is perhaps the most complex major musical style in the world.

1. Davies, J. B. The Psychology of Music. Hutchinson, London, 1978.
2. Eerola, T. & North, A. C. ‘Expectancy-based model of melodic complexity’, in Proceedings of the 6th International Conference of Music Perception and Cognition (Keele University, August 2000), eds Woods, C., Luck, G., Brochard, R., Seddon, F. & Sloboda, J. Keele University, 2000.

Wednesday, November 26, 2014

Hidden truths


I had meant to put up this piece – my October Crucible column for Chemistry World – some time back, so as to have the opportunity to show more of the amazing images of Liu Bolin. So anyway, here it is.

____________________________________________________________________

When I first set eyes on Liu Bolin, I didn’t see him at all. The Chinese artist has been dubbed the “Invisible Man”, because he uses extraordinarily precise body-painting to hide himself against all manner of backgrounds: shelves of magazines or tins in supermarkets, government propaganda posters, the Great Wall. What could easily seem in the West to be a clever gimmick becomes a deeply political statement in China, where invisibility serves as a metaphor for the way the state is perceived to ignore or “vanish” the ordinary citizen, while the rampant profiteering of the booming Chinese economy turns individuals into faceless consumers of meaningless products. In one of his most provocative works, Liu stands in front of the iconic portrait of Mao Zedong in Tiananmen Square, painted so that just his head and shoulders seem to be superimposed over those of the Great Helmsman. In another, a policeman grasps what appears to be a totally transparent, almost invisible Liu, or places his hands over the artist’s invisible eyes.







More recently Liu has commented on the degradation of China’s environment and the chemical adulteration of its food and drink. Some of these images are displayed in a new exhibition, A Colorful World?, at the Klein Sun Gallery in New York, which opened on 11 September. The title refers to “the countless multicolored advertisements and consumer goods that cloud today’s understanding of oppression and injustice.” But it’s not just their vulgarity and waste that Liu wants to point out, for in China there has sometimes been far more to fear from foods than their garish packaging. “The bright and colorful packaging of these snack foods convey a lighthearted feeling of joy and happiness, but what they truly provide is hazardous to human health”, the exhibition’s press release suggests. It’s bad enough that the foods are laden with carcinogens and additives, but several recent food scandals in China have revealed the presence of highly dangerous compounds. In 2008, some leading brands of powdered milk and infant formula were found to contain melamine, a toxic compound added to boost the apparent protein content and so allow the milk to be diluted without failing quality standards. Melamine can cause kidney stones – several babies died from the resulting kidney damage, while many thousands were hospitalized.



There have been several other cases of foods treated with hormones and other hazardous cosmetic additives. The most recent involves the use of phthalate plasticizers in soft drinks as cheap replacements for palm oil. These compounds may be carcinogenic and are thought to disrupt the endocrine and reproductive systems by mimicking hormones. In his 2011 work Plasticizer, Liu commented on the use of such additives by “disappearing” in front of supermarket shelves of soft drinks.



So there is no knee-jerk chemophobia in these works, which represent a genuine attempt to highlight the malpractice within the food industry, and its lack of accountability – and not just in China. The same is true of Liu’s Cancer Village, part of his Hiding in the City series in which he and others are painted to vanish against Chinese landscapes. The series began as a protest against the demolition of an artists’ village near Beijing in 2005 – Liu vanished amongst the rubble – but Cancer Village illustrates the invisibility and official “non-existence” of ordinary citizens in the face of a much more grievous threat: chemical pollution from factories, which seems likely to be the cause of a massive increase in cancer incidence in the home village of the 23 people whom Liu and his assistants have merged into a field, behind which a chemical plant looms.



Such politically charged performance art walks a delicate line in China. Artists there have refined their approach to combine a lyrical, even playful obliqueness – less easily attacked by censors – with resonant symbolism. When in 1994 Wang Jin emptied 50 kg of red organic pigment into the Red Flag Canal in Henan Province for his work Battling the Flood, it was a sly comment not just on the uncritical “Red China” rhetoric of the Mao era (when the canal was dug) but also on the terrible bloodshed of that period.



It could equally have been a statement about the lamentable state of China’s waterways, where pollution, much of it from virtually unregulated chemicals plants, has rendered most of the river water fit only for industrial uses. That was certainly the concern highlighted by Yin Xiuzhen’s 1995 work Washing the River, in which the artist froze blocks of water from the polluted Funan River in Chengdu and stacked them on the banks. Passers-by were invited to “wash” the ice with clean water in an act that echoed the purifying ritual of baptism. This year Yin has recreated Washing the River on the polluted Derwent River near Hobart, Tasmania.



These are creative and sometimes moving responses to problems with both technological and political causes. They should be welcomed by scientists, who are good at spotting such problems but sometimes struggle to elicit a public reaction to them.