I have an article in the latest (October) issue of Scientific American that looks at ten big challenges for chemistry in the coming decades. It’s presented by the Sci Am editors as “big mysteries”, though I’m not too sure quite how well that fits: these are not issues about which we’re totally in the dark, but rather, ones that seem to present either challenges to our fundamental understanding or our technological capability. The topics were decided in collaboration with the editors – I’m happy that all justify inclusion, though left to my own devices I’d probably have a slightly different list. The article grew to huge proportions in preparation, before being trimmed severely. So here is the full original text – or rather, an unholy hybrid of that and some of the changes made during the editing process. It's a big post for a blog, but hopefully of some value. And it includes an intro which was snipped out in toto.
There aren’t many novels with chemistry in them, but one of the most famous has a Professor Waldman of the University of Ingolstadt say this: “Chemistry is that branch of natural philosophy in which the greatest improvements have been and may be made.” Waldman is the tutor of Victor Frankenstein in Mary Shelley’s classic from 1818, and he inspires his student to make the discovery that triggers the book’s dark tale.
This association imputes a Faustian aspect to chemistry. But that, like Waldman’s optimism, was transferred in the twentieth century first to physics and then to biology. Chemistry seemed to be left behind as a ‘finished’ science, now just a matter of engineering and devoid of the grand questions that Shelley – a devotee of Humphry Davy – seemed to glimpse in chemistry two hundred years ago. What happened?
Perhaps the answer is that chemistry became too versatile for its own good. It inveigled its way into so many areas of study and production, from semiconductor manufacturing to biomedicine, that we lost sight of it. The core of chemistry remains in making molecules and materials, but these are so diverse – drugs, paints, plastics, microscopic machines – that it is hard to see them as parts of a united discipline.
In this Year of Chemistry, it’s good to take stock – not just to remind ourselves why chemistry is central to our lives, but to consider where it is headed. Here are ten of the key challenges that chemistry faces today. Needless to say, there is no definitive list of this sort, and while all of these ten directions are important, their main value here is perhaps to illustrate that Waldman’s words remain true. Several of these challenges are concerned with practical applications, as befits chemistry’s role as the most applied and arguably the most useful of the central sciences. But there are also questions about foundations, for the popular idea that chemistry is now conceptually understood, and that all we have to do is use it, is false. It has been only in the past several decades, for example, that the centrality of the non-covalent bond in the chemistry of life has been appreciated, and this sort of ‘temporary stickiness’ of molecules has been recognized as a key aspect of many technological applications, from molecular machines and nanotechnology to the development of surface coatings. Chemistry retains deep intellectual as well as practical challenges.
The last word should also go to Shelley’s Professor Waldman, who tells Victor Frankenstein that “a man would make but a very sorry chemist if he attended to that department of human knowledge alone”. You could perhaps say the same for any branch of science, but it is particularly true for chemistry, which depends not just on understanding the world but on finding creative expressions of that knowledge. The creative opportunities for chemists lie everywhere: in making vehicles cleaner, producing artificial leaves, inventing new colours for artists, altering the fate of cells and comprehending the fate of stars. Chemistry is as limitless as art, because it is one.
1. The origins of life, and how life could be different on other planets.
The chemical origin of life used to be a rather parochial topic. That’s not to diminish the profundity, or the difficulty, of the question of how life began on Earth. But now that we have a better view of some of the strange and potentially fertile environments in our solar system – the occasional flows of water on Mars, the petrochemical seas of Saturn’s moon Titan and the cold, salty oceans that seem to lurk under the ice of Jupiter’s moons Europa and Ganymede – the origin of terrestrial life seems only a part of a grander question: under what circumstances can life arise, and how widely can its chemical basis vary? That issue is made even richer by the discovery over the past 16 years of more than 500 extrasolar planets orbiting other stars – worlds of bewildering variety, forcing us to broaden our imagination about the possible chemistries of life. For instance, while NASA has long pursued the view that liquid water is a prerequisite, now we’re not so sure. How about liquid ammonia, or formamide (CHONH2), or an oily solvent like liquid methane, or supercritical hydrogen on Jupiter? And why should life restrict itself to DNA and proteins – after all, several artificial chemical systems have now been made that exhibit a kind of replication from the component parts without relying on nucleic acids. All you need, it seems, is a molecular system that can serve as a template for making a copy, and then detach itself.
Fixating on terrestrial life is a hang-up, but if we don’t, it’s hard to know where to begin. Looking at life on Earth, says chemist Steven Benner of the University of Florida, “we have no way to decide whether the similarities [such as the use of DNA and proteins] reflect common ancestry or the needs of life universally.” But if we retreat into saying that we’ve got to stick with what we know, he says, “we have no fun.”
All the same, Earth is the only locus of life that we know of, and so it makes sense to start here in trying to understand how matter can come alive and, eventually, know itself. This process seems to have begun extremely quickly in geological terms: there are fossil signs of early life dating back almost to the time that the oceans first formed. On that basis, it looks easy – some suspect, even inevitable. The challenge is no longer to come up with vaguely plausible scenarios, for there are plenty – polymerization catalysed by minerals, chemical complexity fuelled by hydrothermal vents, the RNA world. No, the game is to figure out how to make these more than just suggestive reactions coddled in the test tube. Researchers have made conspicuous progress in recent years, showing for example that certain relatively simple chemicals can spontaneously react to form the more complex building blocks of living systems, such as amino acids and the nucleotides that make up DNA and RNA. In 2009, a team led by John Sutherland, now at the MRC Laboratory of Molecular Biology in Cambridge, England, was able to demonstrate the formation of nucleotides from molecules likely to have existed in the primordial broth. Other researchers have focused on the ability of some RNA strands to act as enzymes, providing evidence in support of the RNA world hypothesis. Through such steps, scientists may progressively bridge the gap from inanimate matter to self-replicating, self-sustaining systems.
Perhaps the dawn of synthetic biology, which includes the construction of primitive lifelike entities from scratch, will help to bridge the gap between the geological formation of simple organic ingredients, as demonstrated by Harold Urey and Stanley Miller in their famous ‘spark’ experiments more than 50 years ago, and the earliest cells.
2. Understanding the nature of the chemical bond and modeling chemistry on the computer.
“The chemistry of the future”, wrote the zoologist D’Arcy Wentworth Thompson in 1917, “must deal with molecular mechanics by the methods and in the strict language of mathematics”. Just 10 years later that seemed possible: the physicists Walter Heitler and Fritz London showed how to describe a chemical bond using the equations of the then-nascent quantum theory, and the great American chemist Linus Pauling proposed that bonds form when the electron orbitals of different atoms overlap in space. A competing theory by Robert Mulliken and Friedrich Hund suggested that bonds are the result of atomic orbitals merging into “molecular orbitals” that extend over more than one atom. Theoretical chemistry seemed about to become a branch of physics.
Nearly 100 years later the molecular-orbital picture has become the most common one, but there is still no consensus among chemists that it is always the best way to look at molecules. The reason is that this model of the molecule, like all others, is based on simplifying assumptions and is thus an approximate, partial description. In reality, a molecule is a bunch of atomic nuclei in a cloud of electrons, with opposing electrostatic forces fighting a constant tug-of-war with one another, and all components constantly moving and reshuffling. Existing models of the molecule usually try to crystallize such a dynamic entity into a static one, and may capture some of its salient properties but neglect others.
Quantum theory is unable to supply a unique definition of chemical bonds that accords with the intuition of chemists whose daily business it is to make and break them. There are now many ways of assigning bonds to the quantum description of molecules as electrons and nuclei. According to quantum chemist Dominik Marx of the University of Bochum in Germany, “some are useful in some cases but fail in others and vice versa”. As a result, he says, “there will always be a search, and thus controversy, for ‘the best method’”.
This is no obstacle to calculating the structures and properties of molecules from quantum first principles – something that can be done to great accuracy if the number of electrons is relatively small. “Computational chemistry can be pushed to the level of utmost realism and complexity”, says Marx. As a result, computer calculations can increasingly be regarded as a kind of virtual experiment that predicts the outcome of a reaction.
But the challenge is to extend these approaches to increasingly complex cases. On the one hand, that may mean simply modelling more molecules. Can a computer model capture the complicated environment inside cells, for example, where many molecules large and small interact, aggregate and react within the responsive, protean medium of salty water? At the moment, most simulations of such processes use highly simplified models of bonding in which atoms are little more than balls on springs. Can computational chemistry help us understand, say, the detailed workings of a vast biomolecular machine like the ribosome?
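The ‘balls on springs’ picture can be made concrete with a toy calculation. Here is a minimal sketch in Python of a harmonic bond-stretch term, the simplest ingredient of such simplified molecular-mechanics models; the force constant and equilibrium length are illustrative round numbers, not values taken from any real force field.

```python
import math

def bond_energy(r, r0=0.96, k=500.0):
    """Harmonic 'ball-and-spring' bond-stretch energy.

    r  : current bond length (angstroms)
    r0 : equilibrium bond length (angstroms) -- illustrative value
    k  : force constant (kcal/mol per A^2)   -- illustrative value
    The energy is zero at r0 and grows quadratically as the bond
    is stretched or compressed, just like a mechanical spring.
    """
    return 0.5 * k * (r - r0) ** 2

def distance(a, b):
    """Euclidean distance between two 3-D atomic positions."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

# Two 'atoms' joined by one spring: the total energy of a toy diatomic
o_atom = (0.0, 0.0, 0.0)
h_atom = (1.0, 0.0, 0.0)
energy = bond_energy(distance(o_atom, h_atom))
```

Real molecular-mechanics packages add many further terms – angle bending, torsions, van der Waals and electrostatic interactions – but each is built from the same kind of simple classical expression, which is what makes simulations of millions of atoms tractable.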
On the other hand, can computational methods capture complex chemical processes and behavior, such as catalysis? Attempts to do so tend at the moment to rely on ways of bridging the calculations to intuitive expectations. One promising approach, being developed by Jörg Behler at Bochum, uses neural networks to deduce the energy surfaces on which these reactions happen. It also remains hard to predict subtle behaviour such as superconductivity. But already new materials have been discovered by computation – perhaps in times to come that will become the norm.
3. Graphene and carbon nanotechnology: sculpting with carbon.
The discovery of fullerenes – hollow, cagelike molecules made entirely of carbon – in 1985 was literally the start of something much bigger. The polyhedral shells of these molecules showed how the flat sheets of carbon atoms that make up graphite – where they are joined into hexagonal rings tiled side by side, like chicken wire – can be curved by including some pentagonal rings. With precisely 12 pentagons, the structure curls up into a closed shell. Six years later, the discovery of carbon nanotubes – tubes of graphite-like carbon just a few nanometers in diameter – fostered the idea that this sort of carbon can be moulded into all manner of curved nanoscale structures. Being hollow, extremely strong and stiff, and electrically conducting, carbon nanotubes promised applications ranging from high-strength carbon composites to tiny wires and electronic devices, miniature molecular capsules and water-filtration membranes.
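That ‘precisely 12 pentagons’ rule is pure geometry, and the short derivation is worth spelling out: it follows from Euler’s polyhedron formula applied to any closed cage in which each carbon atom bonds to three neighbours and every face is a pentagon or hexagon (with counts p and h).

```latex
V - E + F = 2 \quad\text{(Euler's formula)}\\
3V = 2E \quad\text{(each atom forms three bonds; each bond joins two atoms)}\\
2E = 5p + 6h \quad\text{(each edge borders two faces)}, \qquad F = p + h\\
\Rightarrow\; \tfrac{2E}{3} - E + p + h = 2
\;\Rightarrow\; 6p + 6h - (5p + 6h) = 12
\;\Rightarrow\; p = 12 \quad\text{for any number of hexagons } h.
```

Every closed fullerene cage, whatever its size, therefore contains exactly a dozen pentagons.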
Now graphite itself has moved centre stage, thanks to the discovery that it can be separated into individual sheets, called graphene, that could supply the fabric for ultra-miniaturized, cheap and robust electronic circuitry. Graphene garnered the 2010 Nobel prize in physics, but the success of this and other forms of carbon nanotechnology might ultimately depend on chemistry. For one thing, ‘wet’ chemical methods may prove the cheapest and simplest for separating graphite into its component sheets. “Graphene can be patterned so that the interconnect and placement problems of carbon nanotubes are overcome”, says carbon specialist Walt de Heer of the Georgia Institute of Technology.
Some feel, however, that graphene has so far been over-hyped in a way that plays down the hurdles to making it a viable technology. “The hype is extreme”, says de Heer. “Many of the newly claimed superlative graphene properties are really graphite properties ‘under new management’ and were known and used for a very long time.” He believes graphitic electronics has not yet been shown to be viable. “The best that has been done to date is to show that ultrathin graphite (including graphene) can be gated [switched electronically, as in transistors]. But the gating is quite poor, since you cannot turn it completely off. Most people would not consider this to be even a starting point for electronics.” And he says that existing methods of graphene patterning are so crude that the edges undo any advantage that graphene nanoribbons have to offer. However, narrow ribbons and networks can be made to measure with atomic precision by using the techniques of organic chemistry to build them up from ‘polyaromatic’ molecules, in which several hexagonal carbon rings are linked together like little fragments of a graphene sheet. It seems quite possible that graphene technology will depend on clever chemistry.
[Watch this space: I’ve just written a piece on graphene for BBC’s pop-sci magazine Focus, which explores all these things in greater depth.]
4. Artificial photosynthesis.
Of all the sources of ‘clean energy’ available to us, sunlight seems the most tantalizing. With every sunrise comes a reminder of the vast resource of which we currently tap only a pitiful fraction. The main problem is cost: the expense of conventional photovoltaic panels made of silicon still restricts their use. But life on Earth, almost all of which is ultimately solar-powered by photosynthesis, shows that solar cells don’t have to be terribly efficient if, like leaves, they can be made abundantly and cheaply enough.
Yet ‘artificial photosynthesis’ and the ‘artificial leaf’ are slippery concepts. Do they entail converting solar to chemical energy, just as the leaf uses absorbed sunlight to make the biological ‘energy molecule’ ATP? Or must the ‘artificial leaf’ mimic photosynthesis by splitting water to make hydrogen – a fuel – and oxygen?
“Artificial photosynthesis means different things to different people”, says photochemist Devens Gust of Arizona State University. “Some people call virtually any sort of solar energy conversion that involves electricity or fuels artificial photosynthesis.” Gust himself reserves the term for photochemical systems that make fuels using sunlight: “I like to define it as the use of the fundamental scientific principles underlying natural photosynthesis for the design of technological solar-energy conversion systems.”
“One of the holy grails of solar energy research is using sunlight to produce fuels”, Gust explains. “In order to make a fuel, we need not only energy from sunlight, but a source of electrons, and some material to reduce to a fuel with those electrons. The source of electrons has to be water, if the process is to be carried out on a scale anything like that of human energy usage. The easiest way to make a fuel from this is to use the electrons to reduce the protons to hydrogen gas.” Nathan S. Lewis and his collaborators at Caltech are developing an artificial leaf that would do just that using silicon nanowires.
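The scheme Gust describes corresponds to the two textbook half-reactions of water splitting, shown here in generic form (not tied to any particular catalyst or device): water is oxidized to supply the electrons and protons, which are then reduced to hydrogen, the fuel.

```latex
2\,\mathrm{H_2O} \;\rightarrow\; \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- \quad\text{(oxidation: water as the electron source)}\\
4\,\mathrm{H^+} + 4\,e^- \;\rightarrow\; 2\,\mathrm{H_2} \quad\text{(reduction: hydrogen as the fuel)}
```

It is the first, four-electron oxidation step that is hard to drive at useful rates, which is why so much of the effort goes into water-oxidation catalysts.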
MIT chemist Daniel Nocera and his coworkers have recently announced an ‘artificial leaf’: a device the size of a credit card in which silicon solar cells and a photocatalyst of metals such as nickel and cobalt split water into hydrogen and oxygen which can then be used to drive fuel cells. Nocera estimates that a gallon of water would provide enough fuel to power a home in developing countries for a day. “Our goal is to make each home its own power station”, he says. His start-up company Sun Catalytix aims to take the technology to a commercial level.
But “water oxidation is not a solved problem, even at a fundamental level”, according to Gust. “Cobalt catalysts such as the one that Nocera uses, and newly-discovered catalysts based on other common metals are promising”, he says, but there is still no potentially inexpensive, ideal catalyst. “We don’t know how the natural photosynthetic catalyst, which is based on four manganese atoms and a calcium atom, works”, Gust adds.
Carbon-based fuels are easier than hydrogen to transport, store and integrate with current technologies. Photosynthesis makes carbon-based fuels (sugars, ATP) using sunlight. Gust and his colleagues have been working on making molecular assemblies for artificial photosynthesis that more closely mimic their biological inspiration. “We know how to make artificial antenna systems and photosynthetic reaction centers that work in the lab, but questions about stability remain, as they are usually based at least in part on organic molecules.” He admits that “we are not very close to a technologically useful catalyst for converting carbon dioxide to a useful liquid fuel.” On the other hand, he says, “the recent increase in funding worldwide for solar fuels has meant that many more researchers have gotten into the game.” If this funding can be preserved, he anticipates “really significant advances.” Let’s hope so, since as Gust says, “we desperately need a fuel or energy source that is abundant, inexpensive, environmentally benign, and readily available.”
5. Devising catalysts for making biofuels.
The demand for biofuels – fuels made by conversion of organic matter, primarily plants – isn’t driven just by concern for the environment. While it’s true that a biofuel economy is notionally sustainable – carbon emissions from burning the fuels are balanced by the carbon dioxide taken up to grow the fuel crops – the truth is that it’s increasingly hard to find any good alternatives. Organic liquids (oil and petroleum) remain the main energy source globally, and are forecast to remain so at least until mid-century. But several estimates say that, at current production rates, we have only about 50 years’ worth of oil reserves left. What’s more, most of these are in politically unstable parts of the world. And currently soaring prices are expected to continue – the days of cheap oil are over.
There’s nothing new about biofuels: time was when there was only wood to burn in winter, or peat or dried animal dung. But that’s a very inefficient way to use the energy bound up in carbon-based molecules. Today’s biofuels are mostly ethanol made from fermenting corn, sugar-cane or switchgrass, or biodiesel, an ester made from the lipids in rapeseed or soybean oils. The case for biofuels seems easy to make – as well as being potentially greener and offering energy security, they can come from crops grown on land unsuitable for food agriculture, and can boost rural economies.
But the initial optimism about biofuels cooled quickly. For one thing, they threaten to displace food crops, particularly in developing countries where selling biofuels abroad can be more lucrative than feeding people at home. And the numbers are daunting: meeting current oil demand with biofuels would mean requisitioning huge areas of arable land. But these figures depend crucially on how efficiently the carbon is used. Some parts of plants, particularly the resinous lignin, can’t easily be turned into biofuel, especially by biological fermentation. Finding new chemical catalysts to assist this process looks essential if biofuels are to fly.
One of the challenges of breaking down lignin – cracking open ‘aromatic C-O bonds’: benzene rings bridged by an oxygen – was recently met by John Hartwig and Alexey Sergeev of the University of Illinois, who found a nickel-based catalyst that will do the trick. Hartwig points out that, if biomass is to supply non-fossil-fuel chemical feedstocks as well as fuels, it will need to offer aromatic compounds – of which lignin is the only major potential source.
It’s a small part of a huge list of challenges: “There are issues at every level”, says Hartwig. Some of these are political – a carbon tax, for example, could decide the economic viability of biofuels. But many are chemical. The changes in infrastructure and engineering needed for an entirely new liquid fuel (more or less pure alcohol) are so vast that it seems likely that biofuels will need to be compatible with existing technology – in other words, to be hydrocarbons. That means converting the oxidized compounds in plant matter to reduced ones. Not only does this require catalysts, but it also demands a source of hydrogen – either from fossil fuels or, ideally but dauntingly, from the splitting of water.
And fuels will need to be liquid for easy transportation along pipelines. But biomass is primarily solid. Liquefaction would need to happen on site where the plant is harvested. And one of the difficulties for catalytic conversion is the extreme impurity of the reagent – classical chemical synthesis does not tend to allow for reagents such as ‘wood’. “There’s no consensus on how all this will be done in the end”, says Hartwig. But an awful lot of any solution lies with the chemistry, especially with finding the right catalysts. “Almost every industrial reaction on a large scale has a catalyst associated”, Hartwig points out.
6. Understanding the chemical basis of thought and memory.
The brain is a chemical computer. Interactions between the neurons that form its circuitry are mediated by molecules: neurotransmitters that pass across the synaptic spaces where one neural cell wires up to another. This chemistry of the mind is perhaps at its most impressive in the operation of memory, in which abstract principles and concepts – a telephone number, say – are imprinted in states of the neural network by sustained chemical signals. How does chemistry create a memory that is at the same time both persistent and dynamic: susceptible to recall, revision and forgetting?
We now know that a cascade of biochemical processes, leading to a change in production of neurotransmitter molecules at the synapse, triggers ‘learning’ for habitual reflexes. But even this ‘simple’ aspect of learning has short- and long-term stages. Meanwhile, more complex so-called ‘declarative’ memory (of people, places and so on) has a different mechanism and location in the brain, involving the activation by the excitatory neurotransmitter glutamate of a protein called the NMDA receptor. Blocking these receptors with drugs prevents memory retention for many types of declarative memory.
Our everyday declarative memories are often encoded in a process called long-term potentiation (LTP), which involves NMDA receptors and is accompanied by an expansion of the synapse, the region of a neuron involved in its communication with others. As the synapse grows, so does the ‘strength’ of its connection with neighbours. The biochemistry of this process has been clarified in the past several years. It involves stimulation of the formation of filaments within the neuron made from the protein actin – the basic scaffolding of the cell, which determines its size and shape. But that process can be undone during a short period before the change is consolidated by biochemical agents that block the newly formed filaments.
Once encoded, long-term memory for both simple and complex learning is actively maintained by switching on genes that produce proteins. It now appears that this can involve a self-perpetuating chemical reaction of a prion, a protein molecule that can switch between two different conformations. This switching process was first discovered for its role in neurodegenerative disease, but prion mechanisms have now been found to have normal, beneficial functions too. The prion protein is switched from a soluble to an insoluble, aggregated state that can then perpetuate itself autocatalytically, and which ‘marks’ a particular synapse to retain a memory.
There are still big gaps in the story of how memory works, many of which await filling with the chemical details. How, for example, is memory recalled once it has been stored? “This is a deep problem whose analysis is just beginning”, says neuroscientist and Nobel laureate Eric Kandel of Columbia University. It may involve the neurotransmitters dopamine and acetylcholine. And what happens at the molecular level when things go wrong, for example in Alzheimer’s-related memory loss and other cognitive disorders that affect memory? Addressing and perhaps even reversing such problems will require a deeper understanding of the many biochemical processes in memory storage, including a better understanding of the chemistry of prions – which in turn seems to point us increasingly towards a more fundamental grasp of protein structure and how it is shaped by evolution.
Getting to grips with the chemistry of memory offers the enticing, and controversial, prospect of pharmacological enhancement. Some memory-boosting substances are already known: neuropeptides, sex steroids and chemicals that act on receptors for nicotine, glutamate, serotonin and other neurotransmitters and their mimics have all been shown to enhance memory. In fact, according to neurobiologist Gary Lynch at the University of California at Irvine, the complex sequence of steps leading to long-term learning and memory means that there are a large number of potential targets for such ‘memory drugs’. However, there’s so far little evidence that known memory boosters improve cognitive processing more generally – that’s to say, it’s not clear that they actually make you smarter. Moreover, just about all studies so far have been on rodents and monkeys, not humans.
Yet it seems entirely possible that effective memory enhancers will be found. Naturally, such possibilities raise a host of ethical and social questions. One might argue that using such drugs is not so different from taking vitamins to improve health, or sleeping pills to get a much-needed good rest, and that it can’t be a bad thing to allow people to become brighter. But can it be right for cognitive enhancement to be available only for those who can afford it? In manipulating the brain’s chemistry, are we modifying the self? As our knowledge and capabilities advance, such ethical questions will become unavoidable.
7. Understanding the chemical basis of epigenetics.
Cells, like humans, become less versatile and more narrowly focused as they age. Pluripotent stem cells present in the early embryo can develop into any tissue type; but as the embryo grows, cells ‘differentiate’, acquiring specific roles (such as blood, muscle or nerve cells) that remain fixed in their progeny. One of the revolutionary discoveries in research on cloning and stem cells, however, is that this process isn’t irreversible. Cells don’t lose genes as they differentiate, retaining only those they need. Rather, the genes are switched off but remain latent – and can be reactivated. The recent discovery that a cocktail of just four proteins is sufficient to cause mature differentiated cells to revert to stem-cell-like status, becoming induced pluripotent stem (iPS) cells, might not only transform regenerative medicine but also alter our view of how the human body grows from a fertilized egg.
Like all of biology, this issue has chemistry at its core. It’s slowly becoming clear that the versatility of stem cells, and its gradual loss during differentiation, results from the chemical changes taking place in the chromosomes. Whereas the old idea of biology makes it a question of which genes you have, it is now clear that an equally important issue is which genes you use. The formation of the human body is a matter of chemically modifying the stem cells’ initial complement of genes to turn them on and off.
What is particularly exciting and challenging for chemists is that this process seems to involve chemical events happening at size scales greater than those of atoms and molecules: at the so-called mesoscale, involving the interaction and organization of large molecular groups and assemblies. Chromatin, the mixture of DNA and proteins that makes up chromosomes, has a hierarchical structure. The double helix is wound around cylindrical particles made from proteins called histones, and this ‘string of beads’ is then bundled up into higher-order structures that are poorly understood. Yet it seems that cells exert great control over this packing – how and where a gene is packed into chromatin may determine whether it is ‘active’ or not. Cells have specialized enzymes for reshaping chromatin structure, and these have a central role in cell maturation and differentiation. Chromatin in embryonic stem cells seems to have a much looser, open structure: as some genes fall inactive, the chromatin becomes increasingly lumpy and organized. “The chromatin seems to fix and maintain or stabilize the cells’ state”, says pathologist Bradley Bernstein of the Massachusetts General Hospital in Boston.
What’s more, this process is accompanied by chemical modification of both DNA and histones. Small-molecule tags become attached to them, acting as labels that modify or silence the activity of genes. The question of to what extent mature cells can be returned to pluripotency – whether iPS cells are as good as true stem cells, which is a vital issue for their use in regenerative medicine – seems to hinge largely on how far this so-called epigenetic marking can be reset. If iPS cells remember their heritage (as it seems they partly do), their versatility and value could be compromised. On the other hand, some histone marks seem actually to preserve the pluripotent state.
It is now clear that there is another entire chemical language of genetics – or rather, of epigenetics – beyond the genetic code of the primary DNA sequence, in which some of the cell’s key instructions are written. “The concept that the genome and epigenome form an integrated system is crucial”, says geneticist Bryan Turner of the University of Birmingham in the UK.
The chemistry of chromatin and particularly of histone modifications may be central to how the influence of our genes gets modified by environmental factors. “It provides a platform through which environmental components such as toxins and foodstuffs can influence gene expression”, says Turner. “We are now beginning to understand how environmental factors influence gene function and how they contribute to human disease. Whether or not a genetic predisposition to disease manifests itself will often depend on environmental factors operating through these epigenetic pathways. Switching a gene on or off at the wrong time or in the wrong tissue can have effects on cell function that are just as devastating as a genetic mutation, so it’s hardly surprising that epigenetic processes are increasingly implicated in human diseases, including cancer.”
8. Finding new ways to make complex molecules.
The core business of chemistry is a practical, creative one: making molecules. But the reasons for doing that have changed. Once, the purpose of constructing a large natural molecule such as vitamin B12 by painstaking atom-by-atom assembly was to check the molecular structure: if what you build, knowing where each atom goes, is the same as what nature makes, it presumably has the same structure. But we’re now good enough at deducing structures from methods such as X-ray crystallography – often for molecules that it would be immensely hard to make anyway – that this justification is hard to sustain.
Maybe it’s worth making a molecule because it is useful – as a drug, say. That’s true, but the more complicated the molecule, the less useful its synthesis from scratch (‘total synthesis’) tends to be, because of the cost and the small yield of the product after dozens of individual steps. Better, often, to extract the molecule from natural sources, or to use living organisms to make it or part of it, for example by equipping bacteria or yeast with the necessary enzymes.
And total synthesis is typically slow – even if rarely as slow as the 11-year project to make vitamin B12 that began in 1961. Yet new molecules and drugs are often needed very fast – for example, new antibiotics to outstrip the rise of resistant microorganisms.
As a result, total synthesis is “a lot harder to justify than it once was”, according to industrial chemist Derek Lowe. It’s a great training ground for chemists, but are there now more practical ways to make molecules? One big hope was combinatorial chemistry, in which new and potentially useful molecules were made by a random assembly of building blocks followed by screening to identify those that do a job well. Once hailed as the future of medicinal chemistry, ‘combi-chem’ fell from favour as it failed to generate anything useful.
But after the initial disappointments, combi-chem may enjoy a brighter second phase. It seems likely to work only if you can make a wide enough range of molecules and find good ways of picking out the minuscule amounts of successful ones. Biotechnology might help here – for example, each molecule could be linked to a DNA-based ‘barcode’ that both identifies it and aids its extraction. Or cell-based methods might coax combinatorial schemes towards products with particular functions using guided (‘directed’) evolution in the test tube.
There are other new approaches to bond-making too, which draw on nature’s mastery of uniting fragments in highly selective yet mild ways. Proteins, for example, have a precise sequence of amino acids determined by the base sequence of the messenger RNA molecule on which they are assembled in the ribosome. Using this model, future chemists might program molecular fragments to assemble autonomously in highly selective ways, rather than relying on the standard approach of total synthesis that involves many independent steps, including cumbersome methods for protecting the growing molecule from undesirable side reactions. For example, David Liu at Harvard University and his coworkers have devised a molecule-making strategy inspired by nature’s use of nucleic-acid templates to specify the order in which units are linked together. They tagged small molecules with short DNA strands that ‘programme’ them for linkage on a DNA template. And they have created a ‘DNA walker’ which can step along a template strand sequentially attaching small molecules dangling from the strand to produce a macromolecular chain – a process highly analogous to protein synthesis on the ribosome, essentially free from undesirable side reactions. This could be a handy way to tailor new drugs. “Many molecular life scientists believe that macromolecules will play an increasingly central, if not dominant, role in the future of therapeutics”, says Liu.
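The templating idea can be caricatured in a few lines of code. This is purely an illustrative sketch of the logic, not David Liu’s actual chemistry: the tag sequences and building-block names below are invented, and the point is only that a template strand, read codon by codon, dictates the order of assembly much as mRNA dictates a protein’s sequence.

```python
# Toy sketch of template-directed assembly (illustrative only, not the
# real DNA-templated chemistry): each building block carries a short
# DNA 'codon' tag, and a template strand dictates the order in which
# blocks are joined, much as mRNA dictates a protein's sequence.

# Hypothetical tag -> building-block library
LIBRARY = {
    "ATG": "block-A",
    "CCA": "block-B",
    "GTT": "block-C",
}

def complement(seq):
    """Watson-Crick complement of a DNA sequence, read in reverse."""
    pairs = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(pairs[b] for b in reversed(seq))

def assemble(template, codon_length=3):
    """Walk along the template, pair each codon with the block whose
    tag is complementary, and link the blocks in template order."""
    product = []
    for i in range(0, len(template), codon_length):
        codon = template[i:i + codon_length]
        tag = complement(codon)
        product.append(LIBRARY[tag])
    return "-".join(product)

# The template 'programs' the product sequence:
print(assemble("CATTGGAAC"))  # -> block-A-block-B-block-C
```

Changing the template string changes the product, which is the essence of programmable, sequence-specified synthesis.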
9. Integrating chemistry: creating a chemical information technology.
Increasingly, chemists don’t simply want to make molecules but also to communicate with them: to make chemistry an information technology that will interface with anything from living cells to conventional computers and fibre-optic telecommunications. In part, this is an old idea: biosensors in which chemical reactions are used to report on concentrations of glucose in the blood date back to the 1960s, although only recently has their use for monitoring diabetes been cheap, portable and widespread. Chemical sensing has countless applications – to detect contaminants in food and water at very low concentrations, say, or to monitor pollutants and trace gases in the atmosphere.
But it is in biomedicine that chemical sensors have the most dramatic potential. Some of the products of cancer genes circulate in the bloodstream long before the condition becomes apparent to regular clinical tests – if they could be detected early, prognoses would be vastly improved. Rapid genomic profiling would enable drug regimes to be tailored to individual patients, reducing risks of side-effects and allowing some medicines to be used that today are hampered by their dangers to a genetic minority. Some chemists foresee continuous, unobtrusive monitoring of all manner of biochemical markers of health and disease, perhaps in a way that is coupled remotely to alarm systems in doctors’ surgeries or to automated systems for delivering remedial drug treatments. All of this depends on developing chemical methods for sensing and signaling with high selectivity and often at very low concentrations. “Advances are needed in improving the sensitivity of such systems so that biological intermediates can be detected at much lower levels”, says chemist Allen Bard of the University of Texas at Austin. “This raises a lot of challenges. But such analyses could help in the early detection of disease.”
Integrated chemical information systems might go much further still. Prototype ‘DNA computers’ have been developed in which strands of bespoke DNA in the blood can detect, diagnose and respond to disease-related changes in gene activity. Clever chemistry can also couple biological processes to electronic circuitry, for example so that nerve cells can ‘speak’ to computers. Information processing and logic operations can be conducted between individual molecules. The photosynthetic molecular apparatus of some organisms even seems able to manipulate energy using the quantum rules that physicists are hoping to exploit in super-powerful quantum computers. It is conceivable that mixtures of molecules might act as super-fast quantum computers to simulate the quantum behavior of other molecules, in ways that are too computationally intensive on current machines. According to chemistry Nobel laureate Jean-Marie Lehn of the University of Strasbourg, this move of chemistry towards what he calls a science of informed (and informative) matter “will profoundly influence our perception of chemistry, how we think about it, how we perform it.”
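The decision-making such a diagnostic DNA computer performs is, at heart, simple logic. The sketch below shows only that logic in software form – the real devices implement it with DNA strand interactions, not code, and the marker names and threshold used here are hypothetical.

```python
# Illustrative sketch of the *logic* a diagnostic DNA computer performs
# (real devices implement this with DNA strand interactions, not
# software). Marker names and the threshold are hypothetical.

def diagnose(levels, high_markers, low_markers, threshold=1.0):
    """AND-gate diagnosis: signal 'act' only if every disease-linked
    marker is high AND every healthy-state marker is low."""
    highs_ok = all(levels.get(m, 0.0) > threshold for m in high_markers)
    lows_ok = all(levels.get(m, 0.0) < threshold for m in low_markers)
    return highs_ok and lows_ok

# A hypothetical cell state, as relative mRNA levels:
cell = {"geneX_mRNA": 2.4, "geneY_mRNA": 1.8, "geneZ_mRNA": 0.2}
print(diagnose(cell, high_markers=["geneX_mRNA", "geneY_mRNA"],
               low_markers=["geneZ_mRNA"]))  # -> True
```

In the molecular versions, the output of the gate is not a printed Boolean but, for example, the release of a drug-like molecule.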
10. Exploring the limits of applicability of the periodic table, and new forms of matter that lie outside it.
The periodic tables that adorn the walls of classrooms are now having to be constantly revised, because the number of elements keeps growing. Using particle accelerators to crash atomic nuclei together, scientists can create new ‘superheavy’ elements, whose nuclei contain more protons and neutrons than those of the 92 or so elements found in nature. These engorged nuclei are not very stable – they decay radioactively, often within a tiny fraction of a second. But while they exist, the new ‘synthetic’ elements such as seaborgium (element 106) and hassium (108) are like any other insofar as they have well defined chemical properties. In dazzling experiments, the properties of both of these synthetic elements have been investigated from just a handful of the elusive atoms in the instant before they fall apart.
Such studies probe not just the physical but the conceptual limits of the periodic table: do these superheavy elements continue to display the trends and regularities in chemical behavior that make the table periodic in the first place? Some do, and some don’t. In particular, such massive nuclei hold on to the atoms’ innermost electrons so tightly that they move at close to the speed of light. Then the effects of special relativity increase their mass and play havoc with the quantum energy states on which their chemistry – and thus the table’s periodicity – depends.
Because nuclei are thought to be stabilized by particular ‘magic numbers’ of protons and neutrons, some researchers hope to find an ‘island of stability’, a little beyond the current capabilities of element synthesis, in which these superheavies live for longer. But is there any fundamental limit to their size? A simple calculation suggests that relativity prohibits electrons from being bound to nuclei of more than 137 protons. But more sophisticated calculations defy that limit. “The periodic system will not end at 137; in fact it will never end”, insists nuclear physicist Walter Greiner of the Johann Wolfgang Goethe University in Frankfurt, Germany. The experimental test of that claim remains a long way off.
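The ‘simple calculation’ is, roughly, the Dirac-equation energy of the innermost electron bound to a point nucleus, E = mc²·√(1 − (Zα)²), where α ≈ 1/137 is the fine-structure constant: beyond Z ≈ 137 the square root turns imaginary and no real bound state exists. A minimal sketch of that arithmetic:

```python
import math

ALPHA = 1 / 137.035999  # fine-structure constant

def dirac_1s_energy_factor(z):
    """E(1s) divided by the electron rest energy, from the Dirac
    equation for a point nucleus of charge Z. Returns None when the
    expression under the square root goes negative, i.e. when the
    naive point-nucleus model admits no real bound state."""
    x = 1.0 - (z * ALPHA) ** 2
    return math.sqrt(x) if x >= 0 else None

for z in (1, 92, 137, 138):
    print(z, dirac_1s_energy_factor(z))
```

The more sophisticated calculations Greiner alludes to escape this catastrophe essentially by treating the nucleus as an extended object rather than a point charge, which softens the singularity and removes the hard cutoff at 137.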
Besides extending the periodic table, chemists are stepping outside it. Conventional wisdom has it that the table enumerates all the ingredients that chemists have at their disposal. But that’s not quite true. For one thing, it has been found that small clusters of atoms can act collectively like single ‘giant’ atoms of other elements. A so-called ‘superatom’ of aluminum containing precisely 13 atoms will behave like a giant iodine atom, while an Al14 cluster behaves like an alkaline earth metal. “We can take one element and have it mimic several different elements in the Periodic Table”, says Shiv Khanna of Virginia Commonwealth University in Richmond, Virginia. It’s not yet clear how far this superatom concept can be pushed, but according to one of its main advocates, A. Welford Castleman of Pennsylvania State University, it potentially makes the periodic table three-dimensional, each element being capable of mimicking several others in suitably sized clusters. There’s no fundamental reason why such superatoms have to contain just one element either, nor why the ‘elements’ they mimic need be analogues of others in the table.
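The electron-counting logic behind superatoms can be sketched crudely. In the spherical ‘jellium’ shell model, clusters whose total valence-electron count hits a shell-closing ‘magic number’ (2, 8, 18, 20, 34, 40, …) are especially stable; Al13, with 13 × 3 = 39 valence electrons, sits one electron short of 40 and so, like a halogen, hungrily grabs one more, while Al14’s 42 electrons leave two spare, like an alkaline earth metal. Real cluster chemistry is subtler than this counting exercise, but the sketch captures the idea:

```python
# Crude sketch of superatom electron counting in the spherical jellium
# shell model. Real cluster chemistry is more subtle than this.

JELLIUM_MAGIC = [2, 8, 18, 20, 34, 40, 58, 68, 70, 92]

def superatom_character(n_atoms, valence_per_atom):
    """Compare a cluster's total valence-electron count with the
    nearest jellium shell closing, as a rough guide to its behaviour:
    a negative offset suggests halogen-like electron hunger, a small
    positive offset suggests metal-like electron donation."""
    total = n_atoms * valence_per_atom
    nearest = min(JELLIUM_MAGIC, key=lambda m: abs(m - total))
    return total, total - nearest

# Aluminium contributes 3 valence electrons per atom:
print(superatom_character(13, 3))  # (39, -1): one short, iodine-like
print(superatom_character(14, 3))  # (42, 2): two spare, like Mg or Ca
```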
Furthermore, physicists have made synthetic atoms quite unlike traditional ones, which have nuclei of protons (and usually neutrons) surrounded by electrons. The electron’s heavier cousin, the muon, can replace the electron to make ‘muonic hydrogen’, a shrunken, exotic analogue of the ordinary atom. The anti-electron, or positron, can act as the positive nucleus of ‘positronium’, a super-light analogue of hydrogen. And in ‘muonium’, a slightly heftier kind of light hydrogen, the central proton is replaced with a positively charged muon. These synthetic atoms have been used to test aspects of the quantum theory of chemical reactions. And by comparing the spectrum of muonium with that of ordinary hydrogen, researchers have been able to obtain a new, more accurate value for the mass of the proton.