I wrote a couple of items for the Diary section of the October issue of Prospect. One was used in truncated form; the other wasn’t. Here are both of them.
____________________________________________________
Turkey’s prime minister Recep Tayyip Erdoğan recently outlined his vision for ‘Islamist-led democratic capitalism’: “Management of people, management of science and management of money.” It is becoming clear what ‘management’ means here. Erdoğan’s government has been steadily bringing various public bodies under direct state control, of which the latest are the Turkish Academy of Sciences (TÜBA) and the scientific funding agency TÜBITAK. The move has appalled many Turkish scientists, who consider independent scientific research a basic democratic freedom. The government has claimed that TÜBA was functioning poorly. But the absence of any prior consultation adds to the impression that this is essentially a political move, perhaps to muzzle an organization seen as too secular and left-leaning. “Academics will be increasingly careful about what they say, and what topics they teach and research”, says Erol Gelenbe, an electronic engineer and TÜBA member working at Imperial College in London. One obvious concern is whether ‘Islamist-led’ science will suppress Darwinism. Turkey already has the lowest public acceptance of the theory of evolution in all of Europe, and TÜBA drew criticism on this issue during the 2009 Darwin Year celebrations. Stem-cell research is also unlikely to find governmental favour. But Gelenbe believes that religious considerations will “affect all areas of the sciences, especially the human and social sciences”. He suspects it is only a matter of time before TÜBA begins appointing theologians.
******
Images of Americans boarding up in preparation for Hurricane Katia reminded Europeans of how little they need to fear extreme weather. The worst Katia could do was to rouse a blustery day in Scotland with a flick of her tail. But don’t count on it staying this way. Storms resembling the hurricanes of the tropical Atlantic and Pacific have occasionally been seen in the Mediterranean. These so-called Medicanes are predicted to multiply and intensify, possibly reaching full hurricane force, as global temperatures rise, since high sea surface temperatures are the engine of hurricanes. You might want to think twice before booking for Majorca in 2050.
Wednesday, September 21, 2011
Chemistry's Grand Challenges
I have an article in the latest (October) issue of Scientific American that looks at ten big challenges for chemistry in the coming decades. It’s presented by the Sci Am editors as “big mysteries”, though I’m not too sure quite how well that fits: these are not issues about which we’re totally in the dark, but rather, ones that seem to present either challenges to our fundamental understanding or our technological capability. The topics were decided in collaboration with the editors – I’m happy that all justify inclusion, though left to my own devices I’d probably have a slightly different list. The article grew to huge proportions in preparation, before being trimmed severely. So here is the full original text – or rather, an unholy hybrid of that and some of the changes made during the editing process. It's a big post for a blog, but hopefully of some value. And it includes an intro which was snipped out in toto.
______________________________________
Introduction
There aren’t many novels with chemistry in them, but one of the most famous has a Professor Waldman of the University of Ingolstadt say this: “Chemistry is that branch of natural philosophy in which the greatest improvements have been and may be made.” Waldman is the tutor of Victor Frankenstein in Mary Shelley’s classic from 1818, and he inspires his student to make the discovery that triggers the book’s dark tale.
This association imputes a Faustian aspect to chemistry. But that, like Waldman’s optimism, was transferred in the twentieth century first to physics and then to biology. Chemistry seemed to be left behind as a ‘finished’ science, now just a matter of engineering and devoid of the grand questions that Shelley – a devotee of Humphry Davy – seemed to glimpse in chemistry two hundred years ago. What happened?
Perhaps the answer is that chemistry became too versatile for its own good. It inveigled its way into so many areas of study and production, from semiconductor manufacturing to biomedicine, that we lost sight of it. The core of chemistry remains in making molecules and materials, but these are so diverse – drugs, paints, plastics, microscopic machines – that it is hard to see them as parts of a united discipline.
In this Year of Chemistry, it’s good to take stock – not just to remind ourselves why chemistry is central to our lives, but to consider where it is headed. Here are ten of the key challenges that chemistry faces today. Needless to say, there is no definitive list of this sort, and while all of these ten directions are important, their main value here is perhaps to illustrate that Waldman’s words remain true. Several of these challenges are concerned with practical applications, as befits chemistry’s role as the most applied and arguably the most useful of the central sciences. But there are also questions about foundations, for the popular idea that chemistry is now conceptually understood, and that all we have to do is use it, is false. It has been only in the past several decades, for example, that the centrality of the non-covalent bond in the chemistry of life has been appreciated, and this sort of ‘temporary stickiness’ of molecules has been recognized as a key aspect of many technological applications, from molecular machines and nanotechnology to the development of surface coatings. Chemistry retains deep intellectual as well as practical challenges.
The last word should also go to Shelley’s Professor Waldman, who tells Victor Frankenstein that “a man would make but a very sorry chemist if he attended to that department of human knowledge alone”. You could perhaps say the same for any branch of science, but it is particularly true for chemistry, which depends not just on understanding the world but on finding creative expressions of that knowledge. The creative opportunities for chemists lie everywhere: in making vehicles cleaner, producing artificial leaves, inventing new colours for artists, altering the fate of cells and comprehending the fate of stars. Chemistry is as limitless as art, because it is one.
1. The origins of life, and how life could be different on other planets.
The chemical origin of life used to be a rather parochial topic. That’s not to diminish the profundity, or the difficulty, of the question of how life began on Earth. But now that we have a better view of some of the strange and potentially fertile environments in our solar system – the occasional flows of water on Mars, the petrochemical seas of Saturn’s moon Titan and the cold, salty oceans that seem to lurk under the ice of Jupiter’s moons Europa and Ganymede – the origin of terrestrial life seems only a part of a grander question: under what circumstances can life arise, and how widely can its chemical basis vary? That issue is made even richer by the discovery over the past 16 years of more than 500 extrasolar planets orbiting other stars – worlds of bewildering variety, forcing us to broaden our imagination about the possible chemistries of life. For instance, while NASA has long pursued the view that liquid water is a prerequisite, now we’re not so sure. How about liquid ammonia, or formamide (HCONH2), or an oily solvent like liquid methane, or supercritical hydrogen on Jupiter? And why should life restrict itself to DNA and proteins – after all, several artificial chemical systems have now been made that exhibit a kind of replication from the component parts without relying on nucleic acids. All you need, it seems, is a molecular system that can serve as a template for making a copy, and then detach itself.
Fixating on terrestrial life is a hang-up, but if we don’t, it’s hard to know where to begin. Looking at life on Earth, says chemist Steven Benner of the University of Florida, “we have no way to decide whether the similarities [such as the use of DNA and proteins] reflect common ancestry or the needs of life universally.” But if we retreat into saying that we’ve got to stick with what we know, he says, “we have no fun.”
All the same, Earth is the only locus of life that we know of, and so it makes sense to start here in trying to understand how matter can come alive and, eventually, know itself. This process seems to have begun extremely quickly in geological terms: there are fossil signs of early life dating back almost to the time that the oceans first formed. On that basis, it looks easy – some suspect, even inevitable. The challenge is no longer to come up with vaguely plausible scenarios, for there are plenty – polymerization catalysed by minerals, chemical complexity fuelled by hydrothermal vents, the RNA world. No, the game is to figure out how to make these more than just suggestive reactions coddled in the test tube. Researchers have made conspicuous progress in recent years, showing for example that certain relatively simple chemicals can spontaneously react to form the more complex building blocks of living systems, such as amino acids and the nucleotides from which DNA and RNA are made. In 2009, a team led by John Sutherland, now at the MRC Laboratory of Molecular Biology in Cambridge, England, was able to demonstrate the formation of nucleotides from molecules likely to have existed in the primordial broth. Other researchers have focused on the ability of some RNA strands to act as enzymes, providing evidence in support of the RNA world hypothesis. Through such steps, scientists may progressively bridge the gap from inanimate matter to self-replicating, self-sustaining systems.
Perhaps the dawn of synthetic biology, which includes the construction of primitive lifelike entities from scratch, will help to bridge the gap between the geological formation of simple organic ingredients, as demonstrated by Harold Urey and Stanley Miller in their famous ‘spark’ experiments more than 50 years ago, and the earliest cells.
2. Understanding the nature of the chemical bond and modeling chemistry on the computer.
“The chemistry of the future”, wrote the zoologist D’Arcy Wentworth Thompson in 1917, “must deal with molecular mechanics by the methods and in the strict language of mathematics”. Just 10 years later that seemed possible: the physicists Walter Heitler and Fritz London showed how to describe a chemical bond using the equations of then nascent quantum theory, and the great American chemist Linus Pauling proposed that bonds form when the electron orbitals of different atoms can overlap in space. A competing theory by Robert Mulliken and Friedrich Hund suggested that bonds are the result of atomic orbitals merging into “molecular orbitals” that extend over more than one atom. Theoretical chemistry seemed about to become a branch of physics.
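For the simplest case, the hydrogen molecule H2, the two pictures can be written down side by side (a schematic sketch only, ignoring normalization and spin). In the valence-bond view of Heitler, London and Pauling, each electron stays associated with ‘its’ atom, A or B; in the molecular-orbital view of Mulliken and Hund, each electron occupies an orbital spread over both atoms:

\[ \Psi_{\mathrm{VB}}(1,2) \;\propto\; \phi_A(1)\,\phi_B(2) + \phi_B(1)\,\phi_A(2) \]

\[ \psi_{\mathrm{MO}} \;\propto\; \phi_A + \phi_B, \qquad \Psi_{\mathrm{MO}}(1,2) \;\propto\; \psi_{\mathrm{MO}}(1)\,\psi_{\mathrm{MO}}(2) \]

Multiplying out the molecular-orbital wavefunction shows that it contains the valence-bond terms plus ‘ionic’ terms with both electrons on the same atom – the same bond, captured by two different approximations, each right about some things and wrong about others.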
Nearly 100 years later the molecular-orbital picture has become the most common one, but there is still no consensus among chemists that it is always the best way to look at molecules. The reason is that this model of molecules, like all others, is based on simplifying assumptions and is thus an approximate, partial description. In reality, a molecule is a bunch of atomic nuclei in a cloud of electrons, with opposing electrostatic forces fighting a constant tug-of-war with one another, and all components constantly moving and reshuffling. Existing models of the molecule usually try to crystallize such a dynamic entity into a static one and may capture some of its salient properties but neglect others.
Quantum theory is unable to supply a unique definition of chemical bonds that accords with the intuition of chemists whose daily business it is to make and break them. There are now many ways of assigning bonds to the quantum description of molecules as electrons and nuclei. According to quantum chemist Dominik Marx of the University of Bochum in Germany, “some are useful in some cases but fail in others and vice versa”. As a result, he says, “there will always be a search, and thus controversy, for ‘the best method’”.
This is no obstacle to calculating the structures and properties of molecules from quantum first principles – something that can be done to great accuracy if the number of electrons is relatively small. “Computational chemistry can be pushed to the level of utmost realism and complexity”, says Marx. As a result, computer calculations can increasingly be regarded as a kind of virtual experiment that predicts the outcome of a reaction.
But the challenge is to extend these approaches to increasingly complex cases. On the one hand, that may mean simply modelling more molecules. Can a computer model capture the complicated environment inside cells, for example, where many molecules large and small interact, aggregate and react within the responsive, protean medium of salty water? At the moment, most descriptions of such processes use highly simplified descriptions of bonding in which atoms are little more than balls on springs. Can computational chemistry help us understand, say, the detailed workings of a vast biomolecular machine like the ribosome?
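To make the ‘balls on springs’ picture concrete, here is a minimal sketch in Python of how such simplified models assign an energy to a structure: every bond is treated as a harmonic spring, and the molecule’s energy is just the sum of the spring terms. The spring constants and bond lengths below are made-up illustrative numbers, not parameters from any real force field.

import numpy as np

def bond_energy(positions, bonds):
    """Sum of harmonic bond terms: E = sum of 0.5 * k * (r - r0)^2."""
    energy = 0.0
    for i, j, k_spring, r0 in bonds:
        r = np.linalg.norm(positions[i] - positions[j])  # current bond length
        energy += 0.5 * k_spring * (r - r0) ** 2
    return energy

# A toy three-atom molecule: one central atom bonded to two others.
positions = np.array([[0.00, 0.00, 0.0],
                      [0.96, 0.00, 0.0],
                      [-0.24, 0.93, 0.0]])
bonds = [(0, 1, 500.0, 0.96),   # (atom i, atom j, spring constant, ideal length)
         (0, 2, 500.0, 0.96)]
print(bond_energy(positions, bonds))

Real molecular-mechanics packages add terms for bond angles, torsions and non-bonded interactions in the same spirit, which is what makes them cheap enough to run on millions of atoms – and also what limits their accuracy when bonds actually break or form.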
On the other hand, can computational methods capture complex chemical processes and behavior, such as catalysis? Attempts to do so tend at the moment to rely on ways of bridging the calculations to intuitive expectations. One promising approach, being developed by Jörg Behler at Bochum, uses neural networks to deduce the energy surfaces on which these reactions happen. It also remains hard to predict subtle behaviour such as superconductivity. But already new materials have been discovered by computation – perhaps in times to come that will become the norm.
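To give a flavour of the neural-network idea – this is only a toy sketch in Python, not Behler’s actual method – a small network can be trained to reproduce the energy of a bond as a function of its length, using reference energies that would normally come from expensive quantum calculations (here faked with a simple Morse-like curve):

import numpy as np

rng = np.random.default_rng(0)
r = np.linspace(0.5, 3.0, 200).reshape(-1, 1)     # bond lengths (arbitrary units)
E = (1.0 - np.exp(-2.0 * (r - 1.0))) ** 2         # stand-in 'reference' energies

# One hidden layer of tanh units, trained by plain gradient descent on the
# mean-squared error between predicted and reference energies.
W1 = 0.5 * rng.normal(size=(1, 16)); b1 = np.zeros(16)
W2 = 0.5 * rng.normal(size=(16, 1)); b2 = np.zeros(1)
lr = 0.05
for step in range(20000):
    h = np.tanh(r @ W1 + b1)       # hidden-layer activations
    pred = h @ W2 + b2             # predicted energy for each bond length
    err = pred - E
    gW2 = h.T @ err / len(r); gb2 = err.mean(axis=0)
    gh = (err @ W2.T) * (1.0 - h ** 2)
    gW1 = r.T @ gh / len(r); gb1 = gh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

print("mean squared fit error:", float(np.mean((pred - E) ** 2)))

The real schemes do the same thing in many dimensions at once, learning the energy as a function of all the atomic positions, so that reactions can then be simulated at a fraction of the cost of solving the quantum mechanics at every step.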
3. Graphene and carbon nanotechnology: sculpting with carbon.
The discovery of fullerenes – hollow, cagelike molecules made entirely of carbon – in 1985 was literally the start of something much bigger. The polyhedral shells of these molecules showed how the flat sheets of carbon atoms that make up graphite – where they are joined into hexagonal rings tiled side by side, like chicken wire – can be curved by including some pentagonal rings. With precisely 12 pentagons, the structure curls up into a closed shell. Six years later tubes of graphite-like carbon just a few nanometers in diameter, called carbon nanotubes, fostered the idea that this sort of carbon can be moulded into all manner of curved nanoscale structures. Being hollow, extremely strong and stiff, and electrically conducting, carbon nanotubes promised applications ranging from high-strength carbon composites to tiny wires and electronic devices, miniature molecular capsules and water-filtration membranes.
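That ‘precisely 12’ is not an accident of carbon chemistry but a consequence of Euler’s formula for polyhedra. For a closed cage in which every carbon has three neighbours, built from p pentagons and h hexagons, the vertices, edges and faces satisfy

\[ V - E + F = 2, \qquad 3V = 2E = 5p + 6h, \qquad F = p + h, \]

and substituting the second and third relations into the first gives p = 12, whatever the value of h – which is why C60, C70 and all the other closed fullerenes contain exactly a dozen pentagonal rings.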
Now graphite itself has moved centre stage, thanks to the discovery that it can be separated into individual sheets, called graphene, that could supply the fabric for ultra-miniaturized, cheap and robust electronic circuitry. Graphene garnered the 2010 Nobel prize in physics, but the success of this and other forms of carbon nanotechnology might ultimately depend on chemistry. For one thing, ‘wet’ chemical methods may prove the cheapest and simplest for separating graphite into its component sheets. “Graphene can be patterned so that the interconnect and placement problems of carbon nanotubes are overcome”, says carbon specialist Walt de Heer of the Georgia Institute of Technology.
Some feel, however, that graphene has so far been over-hyped in a way that plays down the hurdles to making it a viable technology. “The hype is extreme”, says de Heer. “Many of the newly claimed superlative graphene properties are really graphite properties ‘under new management’ and were known and used for a very long time.” He believes graphitic electronics has not yet been shown to be viable. “The best that has been done to date is to show that ultrathin graphite (including graphene) can be gated [switched electronically, as in transistors]. But the gating is quite poor, since you cannot turn it completely off. Most people would not consider this to be even a starting point for electronics.” And he says that existing methods of graphene patterning are so crude that the edges undo any advantage that graphene nanoribbons have to offer. However, narrow ribbons and networks can be made to measure with atomic precision by using the techniques of organic chemistry to build them up from ‘polyaromatic’ molecules, in which several hexagonal carbon rings are linked together like little fragments of a graphene sheet. It seems quite possible that graphene technology will depend on clever chemistry.
[Watch this space: I’ve just written a piece on graphene for BBC’s pop-sci magazine Focus, which explores all these things in greater depth.]
4. Artificial photosynthesis.
Of all the sources of ‘clean energy’ available to us, sunlight seems the most tantalizing. With every sunrise comes a reminder of the vast resource of which we currently tap only a pitiful fraction. The main problem is cost: the expense of conventional photovoltaic panels made of silicon still restricts their use. But life on Earth, almost all of which is ultimately solar-powered by photosynthesis, shows that solar cells don’t have to be terribly efficient if, like leaves, they can be made abundantly and cheaply enough.
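The numbers behind that ‘pitiful fraction’ are rough but striking: sunlight arrives at the Earth at a rate of very roughly 10^5 terawatts, while humanity’s total rate of energy use is of the order of 15 terawatts – so, on a back-of-envelope reckoning, an hour or so of sunshine carries about as much energy as the world consumes in a year:

\[ \sim\!10^{5}\ \mathrm{TW} \times 3600\ \mathrm{s} \;\approx\; 4\times10^{20}\ \mathrm{J}, \qquad \text{compared with} \sim\!5\times10^{20}\ \mathrm{J} \text{ of global energy use per year.} \]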
Yet ‘artificial photosynthesis’ and the ‘artificial leaf’ are slippery concepts. Do they entail converting solar to chemical energy, just as the leaf uses absorbed sunlight to make the biological ‘energy molecule’ ATP? Or must the ‘artificial leaf’ mimic photosynthesis by splitting water to make hydrogen – a fuel – and oxygen?
“Artificial photosynthesis means different things to different people”, says photochemist Devens Gust of Arizona State University. “Some people call virtually any sort of solar energy conversion that involves electricity or fuels artificial photosynthesis.” Gust himself reserves the term for photochemical systems that make fuels using sunlight: “I like to define it as the use of the fundamental scientific principles underlying natural photosynthesis for the design of technological solar-energy conversion systems.”
“One of the holy grails of solar energy research is using sunlight to produce fuels”, Gust explains. “In order to make a fuel, we need not only energy from sunlight, but a source of electrons, and some material to reduce to a fuel with those electrons. The source of electrons has to be water, if the process is to be carried out on a scale anything like that of human energy usage. The easiest way to make a fuel from this is to use the electrons to reduce the protons to hydrogen gas.” Nathan S. Lewis and his collaborators at Caltech are developing an artificial leaf that would do just that using silicon nanowires.
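In bare chemical terms, the scheme Gust describes is water splitting, with sunlight supplying the energy to drive two half-reactions – one extracting electrons from water, the other using them to make the fuel:

\[ 2\,\mathrm{H_2O} \;\rightarrow\; \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- \qquad \text{(oxidation: the electron source)} \]

\[ 4\,\mathrm{H^+} + 4\,e^- \;\rightarrow\; 2\,\mathrm{H_2} \qquad \text{(reduction: the fuel)} \]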
MIT chemist Daniel Nocera and his coworkers have recently announced an ‘artificial leaf’: a device the size of a credit card in which silicon solar cells and a photocatalyst of metals such as nickel and cobalt split water into hydrogen and oxygen which can then be used to drive fuel cells. Nocera estimates that a gallon of water would provide enough fuel to power a home in developing countries for a day. “Our goal is to make each home its own power station”, he says. His start-up company Sun Catalytix aims to take the technology to a commercial level.
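Nocera’s claim is easy to check on the back of an envelope (my rough figures, ignoring all conversion losses): a gallon of water is about 3.8 litres, or roughly 210 moles, and recombining each mole of the hydrogen it yields with oxygen in a fuel cell can deliver at most about 237 kJ of useful work, so

\[ \sim 210\ \mathrm{mol} \times 237\ \mathrm{kJ\,mol^{-1}} \;\approx\; 5\times10^{4}\ \mathrm{kJ} \;\approx\; 14\ \mathrm{kWh}, \]

which is indeed around a day’s electricity for a modest household.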
But “water oxidation is not a solved problem, even at a fundamental level”, according to Gust. “Cobalt catalysts such as the one that Nocera uses, and newly-discovered catalysts based on other common metals are promising”, he says, but there is still no potentially inexpensive, ideal catalyst. “We don’t know how the natural photosynthetic catalyst, which is based on four manganese atoms and a calcium atom, works”, Gust adds.
Carbon-based fuels are easier than hydrogen to transport, store and integrate with current technologies. Photosynthesis makes carbon-based fuels (sugars, ATP) using sunlight. Gust and his colleagues have been working on making molecular assemblies for artificial photosynthesis that more closely mimic their biological inspiration. “We know how to make artificial antenna systems and photosynthetic reaction centers that work in the lab, but questions about stability remain, as they are usually based at least in part on organic molecules.” He admits that “we are not very close to a technologically useful catalyst for converting carbon dioxide to a useful liquid fuel.” On the other hand, he says, “the recent increase in funding worldwide for solar fuels has meant that many more researchers have gotten into the game.” If this funding can be preserved, he anticipates “really significant advances.” Let’s hope so, since as Gust says, “we desperately need a fuel or energy source that is abundant, inexpensive, environmentally benign, and readily available.”
5. Devising catalysts for making biofuels.
The demand for biofuels – fuels made by conversion of organic matter, primarily plants – isn’t driven just by concern for the environment. While it’s true that a biofuel economy is notionally sustainable – carbon emissions from burning the fuels are balanced by the carbon dioxide taken up to grow the fuel crops – the truth is that it’s increasingly hard to find any good alternatives. Organic liquids (oil and petroleum) remain the main energy source globally, and are forecast to do so at least until mid-century. But several estimates say that, at current production rates, we have only about 50 years’ worth of oil reserves left. What’s more, most of these are in politically unstable parts of the world. And currently soaring prices are expected to continue – the days of cheap oil are over.
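That 50-year figure is just arithmetic on round numbers, both of which are of course contested and constantly revised: proven reserves of very roughly 1.5 trillion barrels, set against production of roughly 90 million barrels a day, gives

\[ \frac{1.5\times10^{12}\ \text{barrels}}{9\times10^{7}\ \text{barrels/day} \times 365\ \text{days/year}} \;\approx\; 45\ \text{years}. \]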
There’s nothing new about biofuels: time was when there was only wood to burn in winter, or peat or dried animal dung. But that’s a very inefficient way to use the energy bound up in carbon-based molecules. Today’s biofuels are mostly ethanol made from fermenting corn, sugar-cane or switchgrass, or biodiesel, an ester made from the lipids in rapeseed or soybean oils. The case for biofuels seems easy to make – as well as being potentially greener and offering energy security, they can come from crops grown on land unsuitable for food agriculture, and can boost rural economies.
But the initial optimism about biofuels cooled quickly. For one thing, they threaten to displace food crops, particularly in developing countries where selling biofuels abroad can be more lucrative than feeding people at home. And the numbers are daunting: meeting current oil demand will mean requisitioning huge areas of arable land. But these figures depend crucially on how efficiently the carbon is used. Some parts of plants, particularly the resinous lignin, can’t easily be turned into biofuel, especially by biological fermentation. Finding new chemical catalysts to assist this process looks essential if biofuels are to fly.
One of the challenges of breaking down lignin – cracking open ‘aromatic C-O bonds’: benzene rings bridged by an oxygen – was recently met by John Hartwig and Alexey Sergeev of the University of Illinois, who found a nickel-based catalyst that will do the trick. Hartwig points out that, if biomass is to supply non-fossil-fuel chemical feedstocks as well as fuels, it will need to offer aromatic compounds – of which lignin is the only major potential source.
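Schematically, the transformation involved is hydrogenolysis of the aryl ether linkage – hydrogen, with the nickel catalyst’s help, snips the carbon–oxygen bond while leaving the aromatic rings intact:

\[ \mathrm{Ar\!-\!O\!-\!Ar'} \;+\; \mathrm{H_2} \;\xrightarrow{\ \text{Ni catalyst}\ }\; \mathrm{Ar\!-\!H} \;+\; \mathrm{Ar'\!-\!OH} \]

so that, for instance, diphenyl ether and hydrogen give benzene and phenol. Preserving the rings is the whole point if lignin is to serve as a source of aromatic feedstocks rather than just fuel.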
It’s a small part of a huge list of challenges: “There are issues at every level”, says Hartwig. Some of these are political – a carbon tax, for example, could decide the economic viability of biofuels. But many are chemical. The changes in infrastructure and engineering needed for an entirely new liquid fuel (more or less pure alcohol) are so vast that it seems likely that biofuels will need to be compatible with existing technology – in other words, to be hydrocarbons. That means converting the oxidized compounds in plant matter to reduced ones. Not only does this require catalysts, but it also demands a source of hydrogen – either from fossil fuels or ideally, but dauntingly, from splitting of water.
And fuels will need to be liquid for easy transportation along pipelines. But biomass is primarily solid. Liquefaction would need to happen on site where the plant is harvested. And one of the difficulties for catalytic conversion is the extreme impurity of the reagent – classical chemical synthesis does not tend to allow for reagents such as ‘wood’. “There’s no consensus on how all this will be done in the end”, says Hartwig. But an awful lot of any solution lies with the chemistry, especially with finding the right catalysts. “Almost every industrial reaction on a large scale has a catalyst associated”, Hartwig points out.
6. Understanding the chemical basis of thought and memory.
The brain is a chemical computer. Interactions between the neurons that form its circuitry are mediated by molecules: neurotransmitters that pass across the synaptic spaces where one neural cell wires up to another. This chemistry of the mind is perhaps at its most impressive in the operation of memory, in which abstract principles and concepts – a telephone number, say – are imprinted in states of the neural network by sustained chemical signals. How does chemistry create a memory that is at the same time both persistent and dynamic: susceptible to recall, revision and forgetting?
We now know that a cascade of biochemical processes, leading to a change in production of neurotransmitter molecules at the synapse, triggers ‘learning’ for habitual reflexes. But even this ‘simple’ aspect of learning has short- and long-term stages. Meanwhile, more complex so-called ‘declarative’ memory (of people, places and so on) has a different mechanism and location in the brain, involving the activation by the excitatory neurotransmitter glutamate of a protein called the NMDA receptor. Blocking these receptors with drugs prevents memory retention for many types of declarative memory.
Our everyday declarative memories are often encoded in a process called long-term potentiation (LTP), which involves NMDA receptors and is accompanied by an expansion of the synapse, the region of a neuron involved in its communication with others. As the synapse grows, so does the ‘strength’ of its connection with neighbours. The biochemistry of this process has been clarified in the past several years. It involves stimulation of the formation of filaments within the neuron made from the protein actin – the basic scaffolding of the cell, which determines its size and shape. But during a short period before the change is consolidated, that process can be undone by biochemical agents that block the newly formed filaments.
Once encoded, long-term memory for both simple and complex learning is actively maintained by switching on genes that produce proteins. It now appears that this can involve a self-perpetuating chemical reaction of a prion, a protein molecule that can switch between two different conformations. This switching process was first discovered for its role in neurodegenerative disease, but prion mechanisms have now been found to have normal, beneficial functions too. The prion protein is switched from a soluble to an insoluble, aggregated state that can then perpetuate itself autocatalytically, and which ‘marks’ a particular synapse to retain a memory.
There are still big gaps in the story of how memory works, many of which await filling with the chemical details. How, for example, is memory recalled once it has been stored? “This is a deep problem whose analysis is just beginning”, says neuroscientist and Nobel laureate Eric Kandel of Columbia University. It may involve the neurotransmitters dopamine and acetylcholine. And what happens at the molecular level when things go wrong, for example in Alzheimer’s-related memory loss and other cognitive disorders that affect memory? Addressing and perhaps even reversing such problems will require a deeper understanding of the many biochemical processes in memory storage, including a better understanding of the chemistry of prions – which in turn seems to point us increasingly towards a more fundamental grasp of protein structure and how it is shaped by evolution.
Getting to grips with the chemistry of memory offers the enticing, and controversial, prospect of pharmacological enhancement. Some memory-boosting substances are already known: neuropeptides, sex steroids and chemicals that act on receptors for nicotine, glutamate, serotonin and other neurotransmitters and their mimics have all been shown to enhance memory. In fact, according to neurobiologist Gary Lynch at the University of California at Irvine, the complex sequence of steps leading to long-term learning and memory means that there are a large number of potential targets for such ‘memory drugs’. However, there’s so far little evidence that known memory boosters improve cognitive processing more generally – that’s to say, it’s not clear that they actually make you smarter. Moreover, just about all studies so far have been on rodents and monkeys, not humans.
Yet it seems entirely possible that effective memory enhancers will be found. Naturally, such possibilities raise a host of ethical and social questions. One might argue that using such drugs is not so different from taking vitamins to improve health, or sleeping pills to get a much-needed good rest, and that it can’t be a bad thing to allow people to become brighter. But can it be right for cognitive enhancement to be available only for those who can afford it? In manipulating the brain’s chemistry, are we modifying the self? As our knowledge and capabilities advance, such ethical questions will become unavoidable.
7. Understanding the chemical basis of epigenetics.
Cells, like humans, become less versatile and more narrowly focused as they age. Pluripotent stem cells present in the early embryo can develop into any tissue type; but as the embryo grows, cells ‘differentiate’, acquiring specific roles (such as blood, muscle or nerve cells) that remain fixed in their progeny. One of the revolutionary discoveries in research on cloning and stem cells, however, is that this process isn’t irreversible. Cells don’t lose genes as they differentiate, retaining only those they need. Rather, the genes are switched off but remain latent – and can be reactivated. The recent discovery that a cocktail of just four proteins is sufficient to cause mature differentiated cells to revert to stem-cell-like status, becoming induced pluripotent stem (iPS) cells, might not only transform regenerative medicine but also alter our view of how the human body grows from a fertilized egg.
Like all of biology, this issue has chemistry at its core. It’s slowly becoming clear that the versatility of stem cells, and its gradual loss during differentiation, results from the chemical changes taking place in the chromosomes. Whereas the old view of biology made it a question of which genes you have, it is now clear that an equally important issue is which genes you use. The formation of the human body is a matter of chemically modifying the stem cells’ initial complement of genes to turn them on and off.
What is particularly exciting and challenging for chemists is that this process seems to involve chemical events happening at size scales greater than those of atoms and molecules: at the so-called mesoscale, involving the interaction and organization of large molecular groups and assemblies. Chromatin, the mixture of DNA and proteins that makes up chromosomes, has a hierarchical structure. The double helix is wound around cylindrical particles made from proteins called histones, and this ‘string of beads’ is then bundled up into higher-order structures that are poorly understood. Yet it seems that cells exert great control over this packing – how and where a gene is packed into chromatin may determine whether it is ‘active’ or not. Cells have specialized enzymes for reshaping chromatin structure, and these have a central role in cell maturation and differentiation. Chromatin in embryonic stem cells seems to have a much looser, open structure: as some genes fall inactive, the chromatin becomes increasingly lumpy and organized. “The chromatin seems to fix and maintain or stabilize the cells’ state”, says pathologist Bradley Bernstein of the Massachusetts General Hospital in Boston.
What’s more, this process is accompanied by chemical modification of both DNA and histones. Small-molecule tags become attached to them, acting as labels that modify or silence the activity of genes. The question of to what extent mature cells can be returned to pluripotency – whether iPS cells are as good as true stem cells, which is a vital issue for their use in regenerative medicine – seems to hinge largely on how far this so-called epigenetic marking can be reset. If iPS cells remember their heritage (as it seems they partly do), their versatility and value could be compromised. On the other hand, some histone marks seem actually to preserve the pluripotent state.
It is now clear that there is another entire chemical language of genetics – or rather, of epigenetics – beyond the genetic code of the primary DNA sequence, in which some of the cell’s key instructions are written. “The concept that the genome and epigenome form an integrated system is crucial”, says geneticist Bryan Turner of the University of Birmingham in the UK.
The chemistry of chromatin and particularly of histone modifications may be central to how the influence of our genes gets modified by environmental factors. “It provides a platform through which environmental components such as toxins and foodstuffs can influence gene expression”, says Turner. “We are now beginning to understand how environmental factors influence gene function and how they contribute to human disease. Whether or not a genetic predisposition to disease manifests itself will often depend on environmental factors operating through these epigenetic pathways. Switching a gene on or off at the wrong time or in the wrong tissue can have effects on cell function that are just as devastating as a genetic mutation, so it’s hardly surprising that epigenetic processes are increasingly implicated in human diseases, including cancer.”
8. Finding new ways to make complex molecules.
The core business of chemistry is a practical, creative one: making molecules. But the reasons for doing that have changed. Once, the purpose of constructing a large natural molecule such as vitamin B12 by painstaking atom-by-atom assembly was to check the molecular structure: if what you build, knowing where each atom is going, is the same as what nature makes, it presumably has the same structure. But we’re now good enough at deducing structures from methods such as X-ray crystallography – often for molecules that it would be immensely hard to make anyway – that this justification is hard to sustain.
Maybe it’s worth making a molecule because it is useful – as a drug, say. That’s true, but the more complicated the molecule, the less useful its synthesis from scratch (‘total synthesis’) tends to be, because of the cost and the small yield of the product after dozens of individual steps. Better, often, to extract the molecule from natural sources, or to use living organisms to make it or part of it, for example by equipping bacteria or yeast with the necessary enzymes.
And total synthesis is typically slow – even if rarely as slow as the 11-year project to make vitamin B12 that began in 1961. Yet new molecules and drugs are often needed very fast – for example, new antibiotics to outstrip the rise of resistant microorganisms.
As a result, total synthesis is “a lot harder to justify than it once was”, according to industrial chemist Derek Lowe. It’s a great training ground for chemists, but are there now more practical ways to make molecules? One big hope was combinatorial chemistry, in which new and potentially useful molecules were made by a random assembly of building blocks followed by screening to identify those that do a job well. Once hailed as the future of medicinal chemistry, ‘combi-chem’ fell from favour as it failed to generate anything useful.
But after the initial disappointments, combi-chem may enjoy a brighter second phase. It seems likely to work only if you can make a wide enough range of molecules and find good ways of picking out the minuscule amounts of successful ones. Biotechnology might help here – for example, each molecule could be linked to a DNA-based ‘barcode’ that both identifies it and aids its extraction. Or cell-based methods might coax combinatorial schemes towards products with particular functions using guided (‘directed’) evolution in the test tube.
There are other new approaches to bond-making too, which draw on nature’s mastery of uniting fragments in highly selective yet mild ways. Proteins, for example, have a precise sequence of amino acids determined by the base sequence of the messenger RNA molecule on which they are assembled in the ribosome. Using this model, future chemists might program molecular fragments to assemble autonomously in highly selective ways, rather than relying on the standard approach of total synthesis that involves many independent steps, including cumbersome methods for protecting the growing molecule from undesirable side reactions. For example, David Liu at Harvard University and his coworkers have devised a molecule-making strategy inspired by nature’s use of nucleic-acid templates to specify the order in which units are linked together. They tagged small molecules with short DNA strands that ‘programme’ them for linkage on a DNA template. And they have created a ‘DNA walker’ which can step along a template strand sequentially attaching small molecules dangling from the strand to produce a macromolecular chain – a process highly analogous to protein synthesis on the ribosome, essentially free from undesirable side reactions. This could be a handy way to tailor new drugs. “Many molecular life scientists believe that macromolecules will play an increasingly central, if not dominant, role in the future of therapeutics”, says Liu.
9. Integrating chemistry: creating a chemical information technology.
Increasingly, chemists don’t simply want to make molecules but also to communicate with them: to make chemistry an information technology that will interface with anything from living cells to conventional computers and fibre-optic telecommunications. In part, this is an old idea: biosensors in which chemical reactions are used to report on concentrations of glucose in the blood date back to the 1960s, although only recently has their use for monitoring diabetes been cheap, portable and widespread. Chemical sensing has countless applications – to detect contaminants in food and water at very low concentrations, say, or to monitor pollutants and trace gases in the atmosphere.
But it is in biomedicine that chemical sensors have the most dramatic potential. Some of the products of cancer genes circulate in the bloodstream long before the condition becomes apparent to regular clinical tests – if they could be detected early, prognoses would be vastly improved. Rapid genomic profiling would enable drug regimes to be tailored to individual patients, reducing risks of side-effects and allowing some medicines to be used that today are hampered by their dangers to a genetic minority. Some chemists foresee continuous, unobtrusive monitoring of all manner of biochemical markers of health and disease, perhaps in a way that is coupled remotely to alarm systems in doctors’ surgeries or to automated systems for delivering remedial drug treatments. All of this depends on developing chemical methods for sensing and signaling with high selectivity and often at very low concentrations. “Advances are needed in improving the sensitivity of such systems so that biological intermediates can be detected at much lower levels”, says chemist Allen Bard of the University of Texas at Austin. “This raises a lot of challenges. But such analyses could help in the early detection of disease.”
Integrated chemical information systems might go much further still. Prototype ‘DNA computers’ have been developed in which strands of bespoke DNA in the blood can detect, diagnose and respond to disease-related changes in gene activity. Clever chemistry can also couple biological processes to electronic circuitry, for example so that nerve cells can ‘speak’ to computers. Information processing and logic operations can be conducted between individual molecules. The photosynthetic molecular apparatus of some organisms even seems able to manipulate energy using the quantum rules that physicists are hoping to exploit in super-powerful quantum computers. It is conceivable that mixtures of molecules might act as super-fast quantum computers to simulate the quantum behavior of other molecules, in ways that are too computationally intensive on current machines. According to chemistry Nobel laureate Jean-Marie Lehn of the University of Strasbourg, this move of chemistry towards what he calls a science of informed (and informative) matter “will profoundly influence our perception of chemistry, how we think about it, how we perform it.”
10. Exploring the limits of applicability of the periodic table, and new forms of matter that lie outside it.
The periodic tables that adorn the walls of classrooms are now having to be constantly revised, because the number of elements keeps growing. Using particle accelerators to crash atomic nuclei together, scientists can create new ‘superheavy’ elements, with more protons and neutrons than the 92 or so elements found in nature. These engorged nuclei are not very stable – they decay radioactively, often within a tiny fraction of a second. But while they exist, the new ‘synthetic’ elements such as seaborgium (element 106) and hassium (108) are like any other insofar as they have well defined chemical properties. In dazzling experiments, the properties of both of these synthetic elements have been investigated from just a handful of the elusive atoms in the instant before they fall apart.
Such studies probe not just the physical but the conceptual limits of the periodic table: do these superheavy elements continue to display the trends and regularities in chemical behavior that make the table periodic in the first place? Some do, and some don’t. In particular, such massive nuclei hold on to the atoms’ innermost electrons so tightly that they move at close to the speed of light. Then the effects of special relativity increase their mass and play havoc with the quantum energy states on which their chemistry – and thus the table’s periodicity – depends.
Because nuclei are thought to be stabilized by particular ‘magic numbers’ of protons and neutrons, some researchers hope to find an ‘island of stability’, a little beyond the current capabilities of element synthesis, in which these superheavies live for longer. But is there any fundamental limit to their size? A simple calculation suggests that relativity prohibits electrons from being bound to nuclei of more than 137 protons. But more sophisticated calculations defy that limit. “The periodic system will not end at 137; in fact it will never end”, insists nuclear physicist Walter Greiner of the Johann Wolfgang Goethe University in Frankfurt, Germany. The experimental test of that claim remains a long way off.
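The simple calculation is easy to sketch. In a Bohr-like picture, the innermost electron of an atom with Z protons moves at a speed of about Zαc, where α ≈ 1/137 is the fine-structure constant. For seaborgium,

\[ \frac{v}{c} \;\approx\; \frac{Z}{137} \;=\; \frac{106}{137} \;\approx\; 0.77, \qquad \gamma \;=\; \frac{1}{\sqrt{1 - v^2/c^2}} \;\approx\; 1.6, \]

so the electron behaves as if it were roughly 60 per cent heavier, contracting the inner orbitals and disturbing the familiar periodic trends. Push Z to 137 and this naive estimate puts the electron’s speed at that of light, which is where the supposed limit comes from; Greiner’s point is that more careful treatments – taking into account, for instance, that the nucleus is not a point charge – do not stop there.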
Besides extending the periodic table, chemists are stepping outside it. Conventional wisdom has it that the table enumerates all the ingredients that chemists have at their disposal. But that’s not quite true. For one thing, it has been found that small clusters of atoms can act collectively like single ‘giant’ atoms of other elements. A so-called ‘superatom’ of aluminum containing precisely 13 atoms will behave like a giant iodine atom, while an Al14 cluster behaves like an alkaline earth metal. “We can take one element and have it mimic several different elements in the Periodic Table”, says Shiv Khanna of Virginia Commonwealth University in Richmond, Virginia. It’s not yet clear how far this superatom concept can be pushed, but according to one of its main advocates, A. Welford Castleman of Pennsylvania State University, it potentially makes the periodic table three-dimensional, each element being capable of mimicking several others in suitably sized clusters. There’s no fundamental reason why such superatoms have to contain just one element either, nor why the ‘elements’ they mimic need be analogues of others in the table.
Furthermore, physicists have made synthetic atoms that are not built, like traditional ones, from nuclei of protons (and perhaps neutrons) surrounded by electrons. The electron’s heavier cousin, the muon, can take the electron’s place to give muonic hydrogen, an exotic, more compact cousin of ordinary hydrogen. The anti-electron, or positron, can act as the positive ‘nucleus’ of positronium, a super-light analogue of hydrogen. And in muonium, a slightly heftier version of such light hydrogen, a positively charged muon stands in for the central proton. These synthetic atoms have been used to test aspects of the quantum theory of chemical reactions. And by comparing the spectrum of muonic hydrogen with that of ordinary hydrogen, researchers have recently been able to obtain a new, more precise value for the size of the proton.
MIT chemist Daniel Nocera and his coworkers have recently announced an ‘artificial leaf’: a device the size of a credit card in which silicon solar cells and a photocatalyst of metals such as nickel and cobalt split water into hydrogen and oxygen which can then be used to drive fuel cells. Nocera estimates that a gallon of water would provide enough fuel to power a home in developing countries for a day. “Our goal is to make each home its own power station”, he says. His start-up company Sun Catalytix aims to take the technology to a commercial level.
But “water oxidation is not a solved problem, even at a fundamental level”, according to Gust. “Cobalt catalysts such as the one that Nocera uses, and newly-discovered catalysts based on other common metals are promising”, he says, but there is still no potentially inexpensive, ideal catalyst. “We don’t know how the natural photosynthetic catalyst, which is based on four manganese atoms and a calcium atom, works”, Gust adds.
Carbon-based fuels are easier than hydrogen to transport, store and integrate with current technologies. Photosynthesis makes carbon-based fuels (sugars, ATP) using sunlight. Gust and his colleagues have been working on making molecular assemblies for artificial photosynthesis that more closely mimic their biological inspiration. “We know how to make artificial antenna systems and photosynthetic reaction centers that work in the lab, but questions about stability remain, as they are usually based at least in part on organic molecules.” He admits that “we are not very close to a technologically useful catalyst for converting carbon dioxide to a useful liquid fuel.” On the other hand, he says, “the recent increase in funding worldwide for solar fuels has meant that many more researchers have gotten into the game.” If this funding can be preserved, he anticipates “really significant advances.” Let’s hope so, since as Gust says, “we desperately need a fuel or energy source that is abundant, inexpensive, environmentally benign, and readily available.”
5. Devising catalysts for making biofuels.
The demand for biofuels – fuels made by conversion of organic matter, primarily plants – isn’t driven just by concern for the environment. While it’s true that a biofuel economy is notionally sustainable – carbon emissions from burning the fuels are balanced by the carbon dioxide taken up to grow the fuel crops – the truth is that it’s increasingly hard to find any good alternatives. Organic liquids (oil and petroleum) remain the main energy source globally, and are forecast to remain so at least until mid-century. But several estimates say that, at current production rates, we have only about 50 years’ worth of oil reserves left. What’s more, most of these reserves are in politically unstable parts of the world. And prices, already soaring, are expected to keep climbing – the days of cheap oil are over.
There’s nothing new about biofuels: time was when there was only wood to burn in winter, or peat or dried animal dung. But that’s a very inefficient way to use the energy bound up in carbon-based molecules. Today’s biofuels are mostly ethanol made from fermenting corn, sugar-cane or switchgrass, or biodiesel, an ester made from the lipids in rapeseed or soybean oils. The case for biofuels seems easy to make – as well as being potentially greener and offering energy security, they can come from crops grown on land unsuitable for food agriculture, and can boost rural economies.
But the initial optimism about biofuels cooled quickly. For one thing, they threaten to displace food crops, particularly in developing countries where selling biofuels abroad can be more lucrative than feeding people at home. And the numbers are daunting: meeting current oil demand will mean requisitioning huge areas of arable land. But these figures depend crucially on how efficiently the carbon is used. Some parts of plants, particularly the resinous lignin, can’t easily be turned into biofuel, especially by biological fermentation. Finding new chemical catalysts to assist this process looks essential if biofuels are to fly.
One of the challenges of breaking down lignin – cracking open ‘aromatic C-O bonds’: benzene rings bridged by an oxygen – was recently met by John Hartwig and Alexey Sergeev of the University of Illinois, who found a nickel-based catalyst that will do the trick. Hartwig points out that, if biomass is to supply non-fossil-fuel chemical feedstocks as well as fuels, it will need to offer aromatic compounds – of which lignin is the only major potential source.
It’s a small part of a huge list of challenges: “There are issues at every level”, says Hartwig. Some of these are political – a carbon tax, for example, could decide the economic viability of biofuels. But many are chemical. The changes in infrastructure and engineering needed for an entirely new liquid fuel (more or less pure alcohol) are so vast that it seems likely that biofuels will need to be compatible with existing technology – in other words, to be hydrocarbons. That means converting the oxidized compounds in plant matter to reduced ones. Not only does this require catalysts, but it also demands a source of hydrogen – either from fossil fuels or, ideally but dauntingly, from the splitting of water.
And fuels will need to be liquid for easy transportation along pipelines. But biomass is primarily solid. Liquefaction would need to happen on site, where the plants are harvested. And one of the difficulties for catalytic conversion is the extreme impurity of the reagent – classical chemical synthesis does not tend to allow for reagents such as ‘wood’. “There’s no consensus on how all this will be done in the end”, says Hartwig. But an awful lot of any solution lies with the chemistry, especially with finding the right catalysts. “Almost every industrial reaction on a large scale has a catalyst associated”, Hartwig points out.
6. Understanding the chemical basis of thought and memory.
The brain is a chemical computer. Interactions between the neurons that form its circuitry are mediated by molecules: neurotransmitters that pass across the synaptic spaces where one neural cell wires up to another. This chemistry of the mind is perhaps at its most impressive in the operation of memory, in which abstract principles and concepts – a telephone number, say – are imprinted in states of the neural network by sustained chemical signals. How does chemistry create a memory that is at the same time both persistent and dynamic: susceptible to recall, revision and forgetting?
We now know that a cascade of biochemical processes, leading to a change in production of neurotransmitter molecules at the synapse, triggers ‘learning’ for habitual reflexes. But even this ‘simple’ aspect of learning has short- and long-term stages. Meanwhile, more complex so-called ‘declarative’ memory (of people, places and so on) has a different mechanism and location in the brain, involving the activation by the excitatory neurotransmitter glutamate of a protein called the NMDA receptor. Blocking these receptors with drugs prevents memory retention for many types of declarative memory.
Our everyday declarative memories are often encoded in a process called long-term potentiation (LTP), which involves NMDA receptors and is accompanied by an expansion of the synapse, the region of a neuron involved in its communication with others. As the synapse grows, so does the ‘strength’ of its connection with neighbours. The biochemistry of this process has been clarified in the past several years. It involves stimulation of the formation of filaments within the neuron made from the protein actin – the basic scaffolding of the cell, which determines its size and shape. But the change can be undone, during a short window before it is consolidated, by biochemical agents that block the newly formed filaments.
Once encoded, long-term memory for both simple and complex learning is actively maintained by switching on genes that produce proteins. It now appears that this can involve a self-perpetuating chemical reaction of a prion, a protein molecule that can switch between two different conformations. This switching process was first discovered for its role in neurodegenerative disease, but prion mechanisms have now been found to have normal, beneficial functions too. The prion protein is switched from a soluble to an insoluble, aggregated state that can then perpetuate itself autocatalytically, and which ‘marks’ a particular synapse to retain a memory.
There are still big gaps in the story of how memory works, many of which await filling with the chemical details. How, for example, is memory recalled once it has been stored? “This is a deep problem whose analysis is just beginning”, says neuroscientist and Nobel laureate Eric Kandel of Columbia University. It may involve the neurotransmitters dopamine and acetylcholine. And what happens at the molecular level when things go wrong, for example in Alzheimer’s-related memory loss and other cognitive disorders that affect memory? Addressing and perhaps even reversing such problems will require a deeper understanding of the many biochemical processes in memory storage, including a better understanding of the chemistry of prions – which in turn seems to point us increasingly towards a more fundamental grasp of protein structure and how it is shaped by evolution.
Getting to grips with the chemistry of memory offers the enticing, and controversial, prospect of pharmacological enhancement. Some memory-boosting substances are already known: neuropeptides, sex steroids and chemicals that act on receptors for nicotine, glutamate, serotonin and other neurotransmitters and their mimics have all been shown to enhance memory. In fact, according to neurobiologist Gary Lynch at the University of California at Irvine, the complex sequence of steps leading to long-term learning and memory means that there are a large number of potential targets for such ‘memory drugs’. However, there’s so far little evidence that known memory boosters improve cognitive processing more generally – that’s to say, it’s not clear that they actually make you smarter. Moreover, just about all studies so far have been on rodents and monkeys, not humans.
Yet it seems entirely possible that effective memory enhancers will be found. Naturally, such possibilities raise a host of ethical and social questions. One might argue that using such drugs is not so different from taking vitamins to improve health, or sleeping pills to get a much-needed good rest, and that it can’t be a bad thing to allow people to become brighter. But can it be right for cognitive enhancement to be available only for those who can afford it? In manipulating the brain’s chemistry, are we modifying the self? As our knowledge and capabilities advance, such ethical questions will become unavoidable.
7. Understanding the chemical basis of epigenetics.
Cells, like humans, become less versatile and more narrowly focused as they age. Pluripotent stem cells present in the early embryo can develop into any tissue type; but as the embryo grows, cells ‘differentiate’, acquiring specific roles (such as blood, muscle or nerve cells) that remain fixed in their progeny. One of the revolutionary discoveries in research on cloning and stem cells, however, is that this process isn’t irreversible. Cells don’t lose genes as they differentiate, retaining only those they need. Rather, the genes are switched off but remain latent – and can be reactivated. The recent discovery that a cocktail of just four proteins is sufficient to cause mature differentiated cells to revert to stem-cell-like status, becoming induced pluripotent stem (iPS) cells, might not only transform regenerative medicine but also alter our view of how the human body grows from a fertilized egg.
Like all of biology, this issue has chemistry at its core. It’s slowly becoming clear that the versatility of stem cells, and its gradual loss during differentiation, results from the chemical changes taking place in the chromosomes. Whereas the old idea of biology makes it a question of which genes you have, it is now clear that an equally important issue is which genes you use. The formation of the human body is a matter of chemically modifying the stem cells’ initial complement of genes to turn them on and off.
What is particularly exciting and challenging for chemists is that this process seems to involve chemical events happening at size scales greater than those of atoms and molecules: at the so-called mesoscale, involving the interaction and organization of large molecular groups and assemblies. Chromatin, the mixture of DNA and proteins that makes up chromosomes, has a hierarchical structure. The double helix is wound around cylindrical particles made from proteins called histones, and this ‘string of beads’ is then bundled up into higher-order structures that are poorly understood. Yet it seems that cells exert great control over this packing – how and where a gene is packed into chromatin may determine whether it is ‘active’ or not. Cells have specialized enzymes for reshaping chromatin structure, and these have a central role in cell maturation and differentiation. Chromatin in embryonic stem cells seems to have a much looser, open structure: as some genes fall inactive, the chromatin becomes increasingly lumpy and organized. “The chromatin seems to fix and maintain or stabilize the cells’ state”, says pathologist Bradley Bernstein of the Massachusetts General Hospital in Boston.
What’s more, this process is accompanied by chemical modification of both DNA and histones. Small-molecule tags become attached to them, acting as labels that modify or silence the activity of genes. The question of to what extent mature cells can be returned to pluripotency – whether iPS cells are as good as true stem cells, which is a vital issue for their use in regenerative medicine – seems to hinge largely on how far this so-called epigenetic marking can be reset. If iPS cells remember their heritage (as it seems they partly do), their versatility and value could be compromised. On the other hand, some histone marks seem actually to preserve the pluripotent state.
It is now clear that there is another entire chemical language of genetics – or rather, of epigenetics – beyond the genetic code of the primary DNA sequence, in which some of the cell’s key instructions are written. “The concept that the genome and epigenome form an integrated system is crucial”, says geneticist Bryan Turner of the University of Birmingham in the UK.
The chemistry of chromatin and particularly of histone modifications may be central to how the influence of our genes gets modified by environmental factors. “It provides a platform through which environmental components such as toxins and foodstuffs can influence gene expression”, says Turner. “We are now beginning to understand how environmental factors influence gene function and how they contribute to human disease. Whether or not a genetic predisposition to disease manifests itself will often depend on environmental factors operating through these epigenetic pathways. Switching a gene on or off at the wrong time or in the wrong tissue can have effects on cell function that are just as devastating as a genetic mutation, so it’s hardly surprising that epigenetic processes are increasingly implicated in human diseases, including cancer.”
8. Finding new ways to make complex molecules.
The core business of chemistry is a practical, creative one: making molecules. But the reasons for doing that have changed. Once, the purpose of constructing a large natural molecule such as vitamin B12 by painstaking atom-by-atom assembly was to check the molecular structure: if what you build, knowing where each atom is going, is the same as what nature makes, it presumably has the same structure. But we’re now good enough at deducing structures from methods such as X-ray crystallography – often for molecules that it would be immensely hard to make anyway – that this justification is hard to sustain.
Maybe it’s worth making a molecule because it is useful – as a drug, say. That’s true, but the more complicated the molecule, the less useful its synthesis from scratch (‘total synthesis’) tends to be, because of the cost and the small yield of the product after dozens of individual steps. Better, often, to extract the molecule from natural sources, or to use living organisms to make it or part of it, for example by equipping bacteria or yeast with the necessary enzymes.
And total synthesis is typically slow – even if rarely as slow as the 11-year project to make vitamin B12 that began in 1961. Yet new molecules and drugs are often needed very fast – for example, new antibiotics to outstrip the rise of resistant microorganisms.
As a result, total synthesis is “a lot harder to justify than it once was”, according to industrial chemist Derek Lowe. It’s a great training ground for chemists, but are there now more practical ways to make molecules? One big hope was combinatorial chemistry, in which new and potentially useful molecules were made by a random assembly of building blocks followed by screening to identify those that do a job well. Once hailed as the future of medicinal chemistry, ‘combi-chem’ fell from favour as it failed to generate anything useful.
But after the initial disappointments, combi-chem may enjoy a brighter second phase. It seems likely to work only if you can make a wide enough range of molecules and find good ways of picking out the minuscule amounts of successful ones. Biotechnology might help here – for example, each molecule could be linked to a DNA-based ‘barcode’ that both identifies it and aids its extraction. Or cell-based methods might coax combinatorial schemes towards products with particular functions using guided (‘directed’) evolution in the test tube.
There are other new approaches to bond-making too, which draw on nature’s mastery of uniting fragments in highly selective yet mild ways. Proteins, for example, have a precise sequence of amino acids determined by the base sequence of the messenger RNA molecule on which they are assembled in the ribosome. Using this model, future chemists might program molecular fragments to assemble autonomously in highly selective ways, rather than relying on the standard approach of total synthesis that involves many independent steps, including cumbersome methods for protecting the growing molecule from undesirable side reactions. For example, David Liu at Harvard University and his coworkers have devised a molecule-making strategy inspired by nature’s use of nucleic-acid templates to specify the order in which units are linked together. They tagged small molecules with short DNA strands that ‘programme’ them for linkage on a DNA template. And they have created a ‘DNA walker’ which can step along a template strand sequentially attaching small molecules dangling from the strand to produce a macromolecular chain – a process highly analogous to protein synthesis on the ribosome, essentially free from undesirable side reactions. This could be a handy way to tailor new drugs. “Many molecular life scientists believe that macromolecules will play an increasingly central, if not dominant, role in the future of therapeutics”, says Liu.
9. Integrating chemistry: creating a chemical information technology.
Increasingly, chemists don’t simply want to make molecules but also to communicate with them: to make chemistry an information technology that will interface with anything from living cells to conventional computers and fibre-optic telecommunications. In part, this is an old idea: biosensors in which chemical reactions are used to report on concentrations of glucose in the blood date back to the 1960s, although only recently has their use for monitoring diabetes been cheap, portable and widespread. Chemical sensing has countless applications – to detect contaminants in food and water at very low concentrations, say, or to monitor pollutants and trace gases in the atmosphere.
But it is in biomedicine that chemical sensors have the most dramatic potential. Some of the products of cancer genes circulate in the bloodstream long before the condition becomes apparent to regular clinical tests – if they could be detected early, prognoses would be vastly improved. Rapid genomic profiling would enable drug regimes to be tailored to individual patients, reducing risks of side-effects and allowing some medicines to be used that today are hampered by their dangers to a genetic minority. Some chemists foresee continuous, unobtrusive monitoring of all manner of biochemical markers of health and disease, perhaps in a way that is coupled remotely to alarm systems in doctors’ surgeries or to automated systems for delivering remedial drug treatments. All of this depends on developing chemical methods for sensing and signaling with high selectivity and often at very low concentrations. “Advances are needed in improving the sensitivity of such systems so that biological intermediates can be detected at much lower levels”, says chemist Allen Bard of the University of Texas at Austin. “This raises a lot of challenges. But such analyses could help in the early detection of disease.”
Integrated chemical information systems might go much further still. Prototype ‘DNA computers’ have been developed in which strands of bespoke DNA in the blood can detect, diagnose and respond to disease-related changes in gene activity. Clever chemistry can also couple biological processes to electronic circuitry, for example so that nerve cells can ‘speak’ to computers. Information processing and logic operations can be conducted between individual molecules. The photosynthetic molecular apparatus of some organisms even seems able to manipulate energy using the quantum rules that physicists are hoping to exploit in super-powerful quantum computers. It is conceivable that mixtures of molecules might act as super-fast quantum computers to simulate the quantum behavior of other molecules, in ways that are too computationally intensive on current machines. According to chemistry Nobel laureate Jean-Marie Lehn of the University of Strasbourg, this move of chemistry towards what he calls a science of informed (and informative) matter “will profoundly influence our perception of chemistry, how we think about it, how we perform it.”
10. Exploring the limits of applicability of the periodic table, and new forms of matter that lie outside it.
The periodic tables that adorn the walls of classrooms are now having to be constantly revised, because the number of elements keeps growing. Using particle accelerators to crash atomic nuclei together, scientists can create new ‘superheavy’ elements, with more protons and neutrons than the 92 or so elements found in nature. These engorged nuclei are not very stable – they decay radioactively, often within a tiny fraction of a second. But while they exist, the new ‘synthetic’ elements such as seaborgium (element 106) and hassium (108) are like any other insofar as they have well defined chemical properties. In dazzling experiments, the properties of both of these synthetic elements have been investigated from just a handful of the elusive atoms in the instant before they fall apart.
Such studies probe not just the physical but the conceptual limits of the periodic table: do these superheavy elements continue to display the trends and regularities in chemical behavior that make the table periodic in the first place? Some do, and some don’t. In particular, such massive nuclei hold on to the atoms’ innermost electrons so tightly that they move at close to the speed of light. Then the effects of special relativity increase their mass and play havoc with the quantum energy states on which their chemistry – and thus the table’s periodicity – depends.
Because nuclei are thought to be stabilized by particular ‘magic numbers’ of protons and neutrons, some researchers hope to find an ‘island of stability’, a little beyond the current capabilities of element synthesis, in which these superheavies live for longer. But is there any fundamental limit to their size? A simple calculation suggests that relativity prohibits electrons from being bound to nuclei of more than 137 protons. But more sophisticated calculations defy that limit. “The periodic system will not end at 137; in fact it will never end”, insists nuclear physicist Walter Greiner of the Johann Wolfgang Goethe University in Frankfurt, Germany. The experimental test of that claim remains a long way off.
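The ‘simple calculation’ is worth spelling out: in the elementary Bohr picture the innermost (1s) electron moves at roughly Z times the fine-structure constant – about 1/137 – times the speed of light, so at Z = 137 it would formally reach light speed. A couple of lines of Python (my illustration, not Greiner’s more sophisticated calculation) make the point:

```python
# The back-of-envelope version of the '137 limit': in the simplest (Bohr)
# picture the innermost electron moves at about Z * alpha times the speed of
# light, where alpha is the fine-structure constant, roughly 1/137.
alpha = 1 / 137.036
for Z in (1, 26, 79, 118, 137):
    print(f"Z = {Z:3d}: 1s electron speed roughly {Z * alpha:.3f} c")
```

The more refined treatments that Greiner alludes to, which account for the finite size of the nucleus, are what lift this apparent ceiling.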
Besides extending the periodic table, chemists are stepping outside it. Conventional wisdom has it that the table enumerates all the ingredients that chemists have at their disposal. But that’s not quite true. For one thing, it has been found that small clusters of atoms can act collectively like single ‘giant’ atoms of other elements. A so-called ‘superatom’ of aluminum containing precisely 13 atoms will behave like a giant iodine atom, while an Al14 cluster behaves like an alkaline earth metal. “We can take one element and have it mimic several different elements in the Periodic Table”, says Shiv Khanna of Virginia Commonwealth University in Richmond, Virginia. It’s not yet clear how far this superatom concept can be pushed, but according to one of its main advocates, A. Welford Castleman of Pennsylvania State University, it potentially makes the periodic table three-dimensional, each element being capable of mimicking several others in suitably sized clusters. There’s no fundamental reason why such superatoms have to contain just one element either, nor why the ‘elements’ they mimic need be analogues of others in the table.
Furthermore, physicists have made synthetic atoms that are not at all like the traditional ones, in which a nucleus of protons (and perhaps neutrons) is surrounded by electrons. In ‘muonic hydrogen’, the electron’s heavier cousin the muon takes the electron’s place in orbit around a proton, circling it roughly 200 times more tightly. The anti-electron, or positron, can act as the positive nucleus of ‘positronium’, a super-light analogue of hydrogen. And in ‘muonium’, a slightly heftier version of this light hydrogen, a positively charged muon stands in for the central proton. These synthetic atoms have been used to test aspects of the quantum theory of chemical reactions. And by comparing the spectrum of muonic hydrogen with that of ordinary hydrogen, researchers have been able to obtain a new, more precise – and puzzlingly different – value for the size of the proton.
Monday, September 19, 2011
Talking to Yo-Yo Ma
I recently interviewed Yo-Yo Ma for the Financial Times. The article is now published, but here is the original version. It goes without saying that this was an honour to do, but it turned out also to be a huge pleasure, as Yo-Yo is so engaging, unaffected and thoughtful – it’s easy to see why the UN selected him as a Peace Ambassador. From what I’ve heard so far, his new CD is pretty fabulous too. Forgive me if I’m sounding too much the fanboy here – he’s just a very nice bloke.
____________________________________________________________________
When Yo-Yo Ma was asked to identify a private passion for this article, Sony sent back the message ‘Yo-Yo Ma is interested in everything.’ I’d have happily discussed Everything with Ma, and initially he seems determined to make that happen. His first question to me (were we doing this thing the right way round?) is about the latest technology for splitting water to make hydrogen as a fuel, a trick borrowed from photosynthesis in plants. This turns out to be an offshoot of his interest in water and rivers, a topic that could evidently have engaged us throughout the short time I was allotted in Ma’s frantic schedule during his visit to London for a performance at the Proms.
In view of all this, it comes as no surprise to discover that Ma’s fascination with neuroscience – this is what I’m allegedly there to discuss – is not a hobby like jam-making or long-distance running, but is merely one of the many facets of what begins to emerge as his grand vision: to foster a creative society. One might even be forgiven for suspecting that the music-making for which Ma enjoys world renown happens almost by chance to be the avenue through which he pursues this goal. It could equally, perhaps, have been anthropology, which Ma studied at university.
As Ma began playing the cello at age 4, however, it seems unlikely that his musical career left much to chance. A child prodigy, he performed before Presidents Eisenhower and Kennedy and was conducted by Leonard Bernstein. He then studied at the renowned Juilliard School in New York City before completing a liberal arts degree at Harvard. What followed is the kind of glittering career that all too readily becomes a numbing litany of awards and accolades that have left Ma described as ‘one of the most recognizable classical musicians on the planet’. He was the natural choice to take Pablo Casals’ part when the concert for Kennedy’s inauguration, at which Casals performed, was restaged for its 50th anniversary last January.
So far, so conventionally awe-inspiring. But the stereotype of the stratospheric virtuoso doesn’t last a moment once Ma appears, fresh from premiering Graham Fitkin’s intense Cello Concerto at the Royal Albert Hall – written for Ma – the night before. Isn’t he too young, for starters? (56 in October, since you ask.) And instead of gravitas or world-weariness, he has a boyish enthusiasm for, well, everything.
But I shouldn’t be surprised that Ma is no remote creature of the highbrow concert circuit. He has appeared on Sesame Street and (in cartoon form) on The Simpsons, he can be heard on the soundtrack to Crouching Tiger, Hidden Dragon, and he is a UN Peace Ambassador. He has performed with Sting and Bobby McFerrin, and his latest CD is a bluegrass collaboration, The Goat Rodeo Sessions.
I’m not meant to be talking about any of that, though – the topic on the table is neuroscience. We’ll get there, but there’s a broader agenda: to unite the notorious Two Cultures of C. P. Snow. Ma has been reading Richard Holmes’ The Age of Wonder, which describes how Keats, Coleridge and Shelley shared with Humphry Davy and William Herschel a passion for the marvels and mysteries of the natural world. “This is what happened in the 1800s”, Ma says. “Maybe we’re in another point in time where we actually need both specialists and generalists. The word amateur used to be a positive term. Nowadays if you’re an amateur, you’re a dilettante, you’re not serious.”
I profess my own exhilaration at Holmes’ demand that we should be impatient with “the old, rigid debates and boundaries” – that we need “a wider, more generous, more imaginative” way of writing about science that can locate it within the rest of culture. “That’s exactly what I’d hope for,” Ma agrees. “I love quoting [Nobel laureate physicist Richard] Feynman, who said that nature has a much greater imagination than humans, but she guards her secrets jealously. So his job as a scientist is to unlock some of those secrets, and interpret them for you. That’s what music tries to do. If I’m trying to describe something that someone else wrote, I have to get into that world and then I have to find a way to ensure that what I think is there lives in you also.”
Perhaps neuroscience can create bridges because the brain is the crucible within which art, science and all of culture are forged, presumably with the same tools. This is the seat of the creativity that we channel into discovery and expression: looking out and looking in. For Ma, the work of neuroscientist Antonio Damasio on homeostasis expresses something of where these creative impulses come from. Homeostasis is the tendency of all living things to maintain the internal conditions necessary for their continuation, and Damasio considers all non-conscious aspects of this self-preservation to be forms of emotion, whether they are basic reflexes, immune responses or ‘emotions-proper’ such as joy. “Life forms are always looking for homeostasis, equilibrium”, says Ma. So behaviours that promote it are responding to a need. “That made a lot of sense to me.”
His experiences among the Kalahari bushmen of southern Africa, whom he visited for a documentary 15 years after he had studied them in his anthropology courses, convinced him that music can perform that function in many ways. “They do these trance dances that are for spiritual and religious purposes, it’s for medicine, it’s their art form, it’s everything. That matches all I’ve learnt about what music should be or could do.” It’s there because it fulfils fundamental needs. “Sound is one of our basic senses, so everyone uses sound to its maximum advantage: to promote things that lead to homeostasis.”
But how does that magic work? I suggest that music exploits our instinct to make sense of our environment – to look for patterns, to frame hypotheses about what we perceive. It’s setting us puzzles. Ma is fascinated by how the brain’s plasticity ensures we have the capacity to solve them, to convert sensory data into a viable model of the world. “A newborn sees everything essentially upside down. But its brain is constantly interpreting what is being received, and at some stage it will just decide to turn all the information around.”
I mention Damasio’s insistence, in Descartes’ Error (1994), on the somatic component of the brain – that we are not Descartes’ disembodied mental homunculus directing a physical body, but that instead the self cannot be meaningfully imagined without being embedded in a body. This must be resonant for a musician? He concurs and suggests that the role of tactility in our mental well-being is under-appreciated. “That’s our largest organ.”
Ma sees this separation of intellect and mechanism, of the self and the body, as pernicious. “We’ve based so much of our educational system on it. At the music conservatory there’s a focus on the plumbing, not psychology. It’s about the engineering of sound, how to play accurately. But then going to university, the music professor would say ‘you can play very well, but why do you want to do it?’ Music is powered by ideas. If you don’t have clarity of ideas, you’re just communicating sheer sound.”
And this is about much more than intellectual transmission. It has to be packaged with emotion. “Passion is one great force that unleashes creativity, because if you’re passionate about something, then you’re more willing to take risks.” According to Damasio, there’s a deeper function of passion too. He challenged decades if not centuries of preconception about rationality by showing that emotion plays a vital part in it. Far from being a distraction, emotion is often the lubricant of good decision-making: when it is lacking, as in some people with mental impairments or deficits, the ability to make sound choices – or any choices at all – can evaporate.
He doesn’t want to stop. With his manager giving a gentle yet determined signal that our time is up, he exhorts me to ask one more question. So – how can music be made central to education, rather than an option at the periphery? His response makes the big vision a little more concrete: it is about finding ways to communicate ideas in a manner that yields the greatest harvest of creativity. “There is nothing more important today than to find a way to be knowledge-based creative societies. My job as a performer is to make sure that whatever happens in a performance lives in somebody else, that it’s memorable. It’s great if a person buys the CD or a ticket to the concert, but it’s only when the ideas are passed on that your job is done. If you forget tomorrow what you heard yesterday, there’s really not much point in you having been there – or me, for that matter. Now, isn’t that the purpose of education too? That’s when I realised that education and culture are the same. Once something is memorable, it’s living and you’re using it. That to me is the foundation of a creative society.”
____________________________________________________________________
Friday, September 16, 2011
Why chemistry is good for you
This is really just for the record (mine): I have a book review in Chemistry World here. It’s a challenge to get to the nub of a big, multi-author volume in 300 words or so…
Tuesday, September 13, 2011
Here is the political weather forecast
Here’s the pre-edited version of my latest story for Nature’s online news, with added bonus boxes. There was far too much interesting stuff in this paper to cram into 700 words or so. And more on the way from others working in this field: watch this space.
_____________________________________________________________________
Signs of impending social and political change may lie hidden in a sea of data.
You could have foreseen the Arab spring if only you’d been paying enough attention to the news. That’s the claim of a new study which shows how ‘data mining’ of news reportage can reveal the possibility of future crises well before they happen.
Computer scientist Kalev Leetaru at the University of Illinois in Champaign has trawled through a vast collection of open-access news reporting and examined the ‘tone’ of the news about Tunisia, Egypt and Libya, where long-established dictatorial political leaders have been deposed by public uprisings in the so-called Arab spring. In all cases, he says, there was a clear, steady trend towards a negative tone for about a decade before the revolts [1].
While this doesn’t predict either the course or the timing of the events during last spring and summer, Leetaru argues that it provided a clear indicator of an impending crisis. “I strongly doubt we'll ever get to the point where we can say ‘at 5:05PM next July 2nd there will be a riot of 20 people at such and such street corner’”, he says. “Rather, the value of this class of work lies in warning of changing moods and environments, and increased vulnerability to a sudden shock”.
Erez Lieberman Aiden of Harvard University, who has explored the mining of digitized literary texts for linguistic and historical trends, agrees. “Leetaru’s work is interesting not so much because it makes predictions, but because it points to the power and the opportunity latent in new ways of analyzing large-scale news databases”, he says.
Political scientist Thomas Chadefaux of the Swiss Federal Institute of Technology (ETH) in Zurich, Switzerland, calls the paper “a welcome addition to a field – political science – that has cared very little about finding early warning signals for war, or making predictions at all.”
Long-term trends can be subtle and hard to spot by subjective and partial monitoring of the news. But they might presage crises more reliably than does a focus on the short term. For example, while there was talk during the spring of the possibility of similar public uprisings in Saudi Arabia, reflected in a rather negative tone in the news there during March 2011, the long-term data showed that spell to be no worse than other fluctuations in recent years – there was no worsening trend. On this basis, one would have predicted the failure of the Arab spring to unseat the Saudi rulers.
“If we think of the vast array of digital information around us today as an ocean of information, up to this point we've largely been studying the surface”, says Leetaru. “The idea behind this work is to poke our heads beneath the water for a moment to show that there's a vast world down there that we've been missing”. He thinks that automated news analysis that looks for information about mood, tone or spatial references could supply something like a political weather forecast, “offering updated assessments every few minutes for the entire planet and pointing out emerging patterns that might warrant further investigation.”
Leetaru has used the immense collection of news reports in the Summary of World Broadcasts (SWB), a monitoring service set up by the British intelligence service just before World War II to assess world opinion. The SWB now includes newspaper articles, television and radio broadcasts, periodicals and a variety of other online resources from over 130 countries.
Previous efforts to extract ‘buried’ information from vast literary resources – an approach dubbed ‘culturomics’ – have tended to focus on quantifying the occurrence of certain key words [2]. In contrast, Leetaru conducted ‘sentiment mining’ of the sources by assessing their positive or negative tone, looking for evaluative words such as ‘terrible’, ‘awful’ or ‘good’. He used computer algorithms to convert these data trawls into a single parameter that quantifies the tone of the news, normalized so that the long-term average value is zero.
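To make that concrete, here is a bare-bones sketch of lexicon-based tone scoring – my own illustration with made-up word lists and articles, far cruder than Leetaru’s actual pipeline:

```python
# A bare-bones sketch of lexicon-based 'tone' scoring. The word lists and
# articles are invented for illustration; Leetaru's pipeline uses far larger
# dictionaries and millions of reports.
import re

POSITIVE = {"good", "progress", "peaceful", "growth", "stable"}
NEGATIVE = {"terrible", "awful", "riot", "crisis", "corruption"}

def tone(text):
    words = re.findall(r"[a-z']+", text.lower())
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / max(len(words), 1)       # net tone per word

articles = [
    "Steady growth and a stable government were reported this week.",
    "A corruption scandal sparked a riot as the crisis deepened.",
    "Peaceful talks made good progress, officials said.",
]

scores = [tone(a) for a in articles]
mean = sum(scores) / len(scores)
normalized = [round(s - mean, 3) for s in scores]  # long-term average set to zero
print(normalized)
```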
For Egypt, the tone in early 2011 fell to a negative value seen only once before in the past three decades. What’s more, at that same time the tone of the coverage specifically mentioning the (now deposed) president Hosni Mubarak reached its lowest ever level for his almost 30-year rule. Similar falls to highly unusual low points were found for Tunisia and Libya.
This didn’t in itself predict when those crises would happen – it seems likely, for example, that rocketing food prices helped to trigger the Arab spring revolts [3]. But it might reveal when a region or state is ripe for unrest. Dirk Helbing, a specialist in modeling of social systems at ETH, compares it to the case of traffic flow: computer models can help to spot when traffic is in a potentially unstable state, but the actual triggers for jams may be random and unpredictable.
By the same token, it remains to be seen whether this approach can spot signs of trouble in advance, rather than retrospectively finding them foreshadowed in the media. “It is obviously much easier to find precursory signs when you know where to look than to do it blindly”, says Chadefaux.
But if news mining does turn out to offer a crystal ball, “the question is what kinds of use we’ll make of this information”, says Helbing. “Will governments act in a responsive way to avoid crises, say by improving people’s living conditions, or will they use it to police dissatisfied people in a preventative way?”
References
1. Leetaru, K. First Monday 16(9) (online only), 5 September 2011. Available here.
2. Michel, J. B. et al., Science 331, 176-182 (2010).
3. Lagi, M., Bertrand, K. Z. & Bar-Yam, Y. http://arxiv.org/abs/1108.2455 (2011).
Read all about it
Where is Osama bin Laden?
Leetaru also looked at whether the sources of news reports might provide information about the spatial location of events. He analysed all media references to Osama bin Laden since 1979 to look for co-occurrences of geographical places. Between bin Laden’s rise to media prominence in the 1990s and his discovery and killing in 2011, the most common associations were with northern Pakistan, within a 200-km radius of the cities of Islamabad and Peshawar – the region in which he was finally found.
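The underlying technique is simple co-occurrence counting, something like this toy sketch (with invented articles and a five-city gazetteer, rather than Leetaru’s millions of reports and a full global gazetteer):

```python
# A toy version of the geographic co-occurrence idea: for every article that
# mentions a target name, count which place names appear alongside it.
from collections import Counter

PLACES = {"islamabad", "peshawar", "kabul", "london", "washington"}
TARGET = "bin laden"

articles = [
    "Reports place bin Laden near Peshawar, officials in Islamabad say.",
    "A bin Laden tape was aired; analysts in Washington responded.",
    "Fighting continued near Kabul; the fugitive was not mentioned.",
]

cooccurrence = Counter()
for text in articles:
    lower = text.lower()
    if TARGET in lower:
        cooccurrence.update(place for place in PLACES if place in lower)

print(cooccurrence.most_common())
```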
How the world looks from here
News sources are often criticized for being too parochial. That turns out to be a valid complaint, at least for US news: Leetaru found that even the New York Times, a relatively ‘internationalist’ newspaper, constantly refers reports about other countries back to the US. “Nearly every foreign location it covers is mentioned alongside a US city, usually Washington DC”, he says.
By looking for such co-references to specific cities or other geographical landmarks throughout the world, Leetaru extracted a map of how the global news links nations into ‘world civilizations’. For SWB these correspond largely to the recognized geographical affiliations: Australasia, the Middle East (including much of northeast Africa), the Americas and so forth. But there are anomalies: Spain is linked to South America, and France and Portugal to southern Africa, showing that the imprint of imperial history is still felt in the world. Strikingly, however, the ‘map’ derived from the New York Times alone is rather different: on this measure, the US has its own distinctive view of the world. That matters, says Leetaru. “Understanding how a given country groups the rest of the world gives you critical information on how to approach that country in terms of shaping policy”, he says.
Here’s some more bad news
If you’ve been feeling that the news is always bad these days, you’ve got a point. It has been getting steadily worse for the past 30 years, according to the trend in the tone of the entire SWB data set since 1979.
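A drift like that is what a simple least-squares trend line would pick out of the monthly tone series; a minimal sketch, with hypothetical numbers, is below (a negative slope means a steadily worsening tone):

```python
# Hypothetical monthly average tone values (a real series would span 1979 onwards).
monthly_tone = [0.06, 0.05, 0.03, 0.02, 0.00, -0.01, -0.03, -0.04]

def trend_slope(series):
    """Ordinary least-squares slope of a series against its index (per month)."""
    n = len(series)
    mean_x = (n - 1) / 2.0
    mean_y = sum(series) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(series))
    var = sum((x - mean_x) ** 2 for x in range(n))
    return cov / var

print(f"slope per month: {trend_slope(monthly_tone):+.4f}")  # negative = worsening tone
```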
Wednesday, September 07, 2011
Welcome to the Futur(ICT)
I am at a meeting in Italy that is thrashing out the proposal for the FuturICT project, a leading contender for the EU’s Flagship Initiatives scheme, which seeks to provide huge funding over ten years for ‘transformative’ initiatives in information and communications technologies. FuturICT is to my mind the most potentially transformative of all the shortlisted candidates, but we’ll see what happens. In the meantime, it is very exciting to see what is being planned. It is in the light of this initiative, and after discussion with its leader Dirk Helbing, that I put down the thoughts below a week or two ago. It seems that events like this one now add almost daily to the arguments for why we need something like FuturICT. But Lord knows if we can wait ten years for it.
_____________________________________________________________________
This must be said first: no one really understands what is going on. It’s generally acknowledged that Twitter didn’t cause the Arab Spring – but what did? Labour has been right to avoid pinning the riots on the government cuts – but then, what do we pin them on? Every economist has an explanation for the financial crisis, each differing from the others to a greater or lesser degree. But it happened somehow.
Can you imagine these things happening two decades ago? The riots in Croydon, Beckenham and Bromley were not like those in Toxteth and Brixton in the 1980s, not least precisely because of their location, but also because there was no forewarning: the police were justified in saying that they’d had no precedent to prepare them. For all that it looks superficially like the collapse of the Soviet Union, the Arab Spring too was something new. And if the financial crisis were like the Great Depression, we’d know what to do. It was partly about risk hidden so deeply as to cause paralytic fear; it was also about instruments too complicated for users to understand, and about legal and financial systems labyrinthine enough to permit deception, stupidity and knavishness to thrive.
What is qualitatively new about these events is the crucial role of interdependence and interaction and the almost instantaneous transmission of information through social, economic and political networks. That novelty does not by itself explain why they happened, much less help us to identify solutions or ameliorate the unwelcome consequences. But it points to something perhaps even more important: the world has changed. And it is not going to change back. The poverty of the political response to the riots is understandable, because, although they do not like to admit it, politicians are faced with uncharted territory and they do not know how to navigate it. This is a dangerous situation, because it means that the pressure to be seen to be responding may force political leaders to improvise solutions that fail entirely to acknowledge the nature of the problem and therefore stand a good chance of making things worse. Harsh sentencing and housing evictions might conceivably reassure the public that there are strong hands at the helm, but there is no credible, objective evidence that they will prevent recurrences in the future. That we can one moment celebrate the power of social-network technologies to instil change and mobilize crowd movements, and the next demand that these technologies be shut down in times of civil unrest shows that we have no idea how to manage these things, or even what to think about them except that somehow they matter.
In retrospect, the significance of the terrorist attacks almost exactly ten years ago now looks to be that they marked the advent of this new world order – one of decentralization, of fears and dangers so diffuse and distributed as to be impossible to vanquish and perhaps even to define. And what was the response on that occasion? Old-fashioned declarations of war between nations, which are now revealed to be not just ineffective but disastrous. The assassination of Hitler would probably have halted a war; the assassination of Osama bin Laden had no war to stop.
This is why politicians and decision makers need to learn a new language, or they will simply lose the capacity to govern, to manage economies, to create stable societies, to keep the world worth living in. Here are some of the words they must come to terms with: complexity, network theory, phase transitions, critical points, emergence, agent-based modelling, social ecology. And they will need to learn the key lesson of the management of complex, interacting systems: solutions cannot be imposed, but must be coaxed out of the dynamic system itself. Earthquakes may never be exactly predictable, but it is possible that they can be managed by mapping out in great detail the accumulating strains that give rise to them, and applying local nudges and shocks to relieve the stresses and minimize the danger and costs of crises. There is no political discourse yet that permits analogous answers, not least because they require investment in such things as unglamorous data-gathering techniques and long-term research that carries no guarantee of quick fixes.
Aspirations towards a science of society date back to the Enlightenment. But not only have they never been fulfilled, they now need to recognize that they must describe a different society from the one in which Adam Smith or even John Maynard Keynes lived. There is some good news in all this: we now have the conceptual and computational tools to create a science that can model the state we’re in – not just politically and socially but environmentally, for no answer to the global crises of environment and ecosystems will work if it is not embedded in a credible socioeconomic context. We cannot, in all honesty, yet know how much any of this will help. Perhaps some ills of the world will always elude rational prediction or solution. But if we don’t even try, it is hard to avoid concluding that we’ll deserve all we get.
Thursday, September 01, 2011
In search of a third culture
Here is my latest Crucible column for Chemistry World. I’ve also written a Chem World blog post about the ASCI exhibition, which shows some of the images.
__________________________________________________________________________
Sciart – the clumsy label commonly attached to collaborations between scientists and artists – means many things to many people. Some, like the physicist Arthur I. Miller, who has written about the conceptual connections between relativity and cubism, see it as a way of bridging the Two Cultures divide that might ultimately produce a ‘third culture’ in which art and science are not separate endeavours. Others, such as the biologist Lewis Wolpert, dismiss it as little more than a fad that allows artists to misappropriate scientific ideas, and argue that science stands to gain nothing from it.
Recently the French physicist Jean-Marc Levy-Leblond, who has a deep appreciation of contemporary arts, launched a stinging attack on the whole genre in a book pointedly titled La science (n’)e(s)t (pas) l’art (Editions Hermann, Paris, 2010), in which he criticizes the naivety of most sciart discourse and argues that the most artists and scientists can realistically hope for is platonic ‘brief encounters’. Although not intended as a riposte, a forthcoming book called Survival of the Beautiful (Bloomsbury, 2011) by the musician and animal-song specialist David Rothenberg certainly offers one. Rothenberg argues that we should take seriously the possibility that there is an aesthetic sense at play in nature – for example in the way peahens and female bower birds react to the elaborate displays of males – and that this can speak to our own artistic sensibilities. He asserts that, contrary to Wolpert’s claim, it is possible to find cases of science having benefitted from art. And he devotes considerable space to Roald Hoffmann’s discussions of chemists’ visual language, instincts and aesthetics, themes Hoffmann developed in his book The Same and Not the Same (Columbia University Press, 1995).
The arguments will doubtless continue. Levy-Leblond is right to ridicule some claims of finding ‘art in science’ – he calls fractal imagery ‘techno-kitsch’, and is critical of scientists’ attachment to an old-fashioned notion of beauty, which for chemists seems archaically tied up with Platonic ideas about symmetry. And it’s true that some of the most successful interactions of art and science, such as Michael Frayn’s play Copenhagen, did not arise from any self-conscious process of enticing artists and scientists into the same room. But if we let a thousand flowers bloom, some are likely to smell good.
That’s evident from a new exhibition of digital art organized by the New York-based Art & Science Collaborations, Inc. (ASCI), a veteran of the sciart (or, as they prefer, art-science) field, formed by the artist Cynthia Pannucci in 1988 to ‘raise public awareness about artists and scientists using science and technology to explore new forms of creative expression’. This is ASCI’s thirteenth annual digital-art competition, and this year it celebrates the International Year of Chemistry. ‘Digital2011: The Alchemy of Change’ called for submissions from artists and scientists to ‘show us their vision of this deeply fundamental, magical enabler of life called chemistry’. A selection of the entries will be displayed at the New York Hall of Science from September to next February.
The results are nothing if not eclectic. All of the images have been created by digital manipulation – sometimes of photographic images, sometimes purely computer-generated. Some have a colourful, ‘decorative’ quality that Levy-Leblond would doubtless dismiss as more ‘digital kitsch’. Others place gleaming ball-and-stick models of molecules against images of supernovae and other cosmic phenomena in a way that puts me in mind of the graphical abstracts of JACS and Angewandte Chemie – not by any means unpleasant, but hardly inspiring art. Still others explore the artificially enhanced textures and colours of crystals, flows, precipitates, decay – images that have intrigued many artists in the past, and which raise again Rothenberg’s question of whether nature ‘is more beautiful than it needs to be’.
I enjoyed most of all the images that seem to push up against the limits of what is knowable, expressible and visualizable in chemistry. The alchemists felt those limits keenly and resorted to allegory and metaphor, as Andrew Krasnow does with his bizarre ‘bartender’ mixing up the coloured oxidation states of vanadium. Robbin Juris uses cellular automata to conjure up collages of ‘i(c)onic bonds’ that look simultaneously like pages from a quantum-theory textbook and cubist abstractions. David Hylton’s pearlescent forms put me in mind of the surrealist Roberto Matta, who was himself interested in quantum physics. And Julie Newdoll’s schematic ‘molecules’, developed in association with biochemist Robert Stroud, are like strange symbolic machines whose workings remain obscure.
It’s a shame to have to single out just these few. The exhibition should offer a thought-provoking view of how chemistry looks from outside, and why it is still a rich stimulus to the imagination.