Saturday, August 26, 2006

Tyred out

Here’s my Materials Witness column for the September issue of Nature Materials. It springs from a recent broadcast in which I participated on BBC Radio 4’s Material World – I was there to talk about synthetic biology, but the item before me was concerned with the unexpectedly fascinating, and important, topic of tyre disposal. It seemed to me that the issue highlighted the all too common craziness of our manufacturing systems, in which potentially valuable materials are treated as ‘waste’ simply because we have not worked out the infrastructure sensibly. We can’t afford this profligacy, especially with oil-based products. I know that incineration has a bad press, and I can believe that this is sometimes deserved; but surely it is better to recover some of this embodied energy than simply to dump the material in the nearest ditch?

*****

In July it became illegal to dump almost any kind of vehicle tyres in landfill sites in Europe. Dumping of whole tyres has been banned since 2003; the new directive forbids such disposal of shredded tyres too. That is going to leave European states with an awful lot of used tyres to dispose of in other ways. What can be done with them?

This is a difficult question for the motor industry, but also raises a broader issue about the life cycle of industrial materials. The strange thing about tyres is that there are many ways in which they could be a valuable resource, and yet somehow they end up being regarded as toxic waste. Reduced to crumbs, tyre rubber can be incorporated into soft surfacing for sports grounds and playgrounds. Added to asphalt for road surfaces, it makes the roads harder-wearing.

And rubber is of course an energy carrier: a potential fuel. Pyrolysis of tyres generates gas and oil, recovering some of the carbon that went into their making. This process can be made relatively clean – certainly more so than combustion of coal in power stations.

Alternatively, tyres can simply be burnt to create heat: they have 10% more calorific content than coal. At present, the main use of old tyres is as fuel for cement kilns. But the image of burning tyres is deeply unappealing, and there is opposition to this practice from environmental groups, who dispute the claim that it is cleaner than coal. Such concerns make it hard to secure approval for either cement-kiln firing or pyrolysis. And the emissions regulations are strict – rightly so, but this reduces the economic viability. As a result, these uses tend to be capacity-limited.

Tyre retreads have a bad image too – they are seen as second-rate, whereas the truth is that they can perform very well and the environmental benefits of reuse are considerable. Such recycling is also undermined by cheap imports – why buy a second-hand tyre when a new one costs the same?

Unfortunately, other environmental concerns are going to make the problem of tyre disposal even worse. Another European ruling prohibits the use of polycyclic aromatic hydrocarbon oil components in tyre rubber because of their carcinogenicity. It’s a reasonable enough precaution, given that a Swedish study in 2002 found that tyre wear on roads was responsible for a significant amount of the polycyclic aromatics detected in aquatic organisms around Stockholm. But without these ingredients, a tyre’s lifetime is likely to be cut to perhaps just a quarter of its present value. That means more worn-out tyres: the 42 million tyres currently discarded in the UK alone could rise to around 100 million as a consequence.

Whether Europe will avoid a used-tyre mountain remains to be seen. But the prospect of an evidently useful, energy-rich material being massively under-exploited seems to say something salutary about the notion that market economics can guarantee efficient materials use. Perhaps it’s time for some incentives?

Sunday, August 06, 2006

Star treatment

Am I indulging in cheap ‘kiss & tell’ by musing on news@nature about my meeting with Madonna? Too late now for that kind of soul-searching, but in any case I figured that (1) this is now ancient history; (2) she’s talked about her interest in ‘neutralizing nuclear waste’ to Rolling Stone; and (3) I’ve no interest in trying to make a famous person sound silly. As far as I’m concerned, it’s great that some people with lots of money will look into ways of investing it philanthropically. But I did feel some obligation to suggest to her that this scheme did not seem like a particularly good investment. After all, part of the reason she asked me over was for me to proffer advice (at least, I hope so – I’d no intention of acting simply as a PR officer).

The point I really wanted to make in this article, however, is how perpetually alluring these cultural myths of science are. Once you start to dig into the idea that radioactivity can be ‘neutralized’, it’s astonishing what is out there. My favourite is Brown’s gas, the modern equivalent of a perpetual-motion machine (actually a form of electrolysed water, though heaven forbid that we should suggest it is hydrogen + oxygen). None of this, however, is to deny that radioactive half-lives really can be altered by human means – but by such tiny amounts that there doesn’t seem much future, right now, in that dream of eliminating nuclear waste. So as I say in the article, it seems that for now we will have to learn to live with the stuff. Keith Richards does – apparently he drinks it. In his case it's just a nickname for his favourite cocktail of vodka and orange soda. But as everyone knows, Keith can survive anything.

Wednesday, August 02, 2006



Numerology

… is alive and well, and living somewhere between Chartres cathedral and Wall Street. I am one of the few remaining humans not to have read The Da Vinci Code, but it seems clear from what I have been told that it has given our collective system another potent dose of the Fibonacci virus. This is something that, as the author of a forthcoming book about Chartres, I knew I’d have to grapple with at some point. That point came last week, in the course of a fascinating day in York exploring aspects of Gothic in a summer school for architects. I’m too polite to mention names, but a talk in the evening on ‘sacred geometry’ was, for this naïve physicist, an eye-opener about the pull that numerology continues to exercise. There is plenty of room for healthy arguments about the degree to which the Gothic cathedrals were or weren’t designed according to Platonic geometry, and there will surely be no end to the time-honoured practice of drawing more or less fanciful geometric schemes on the ground plan of Chartres using thick pencil to reveal the builders’ ‘hidden code’. John James is a past master of this art, while Nigel Hiscock is one of the few to make a restrained and well-argued case for it.

But the speaker last week was determined to go well beyond Chartres, by revealing divine geometry in a teleological universe. Most strikingly, he suggested that many of the planetary orbits, when suitably ‘corrected’ to get rid of the inconvenient eccentricity, become circles that can be inscribed or circumscribed on a variety of geometric figures with uncanny accuracy. One such construction, relating the orbits of the Earth and Venus, is shown in Figure a. The claim was that these fits are extraordinarily precise: in this case, it was asserted, the orbits match the construction to better than 99% precision.

This seemed to me like a wonderful exercise to set A-level students: how might one assess such claims? I decided to do that for myself. The Earth/Venus case is a relatively easy one to test: the ratio of the two ‘circular’ orbits should be equal to the square root of 2, which is approximately 1.414. Now, there is a question of exactly what the right way is to ‘circularize’ an elliptical orbit, but it seems to me that the most reasonable way is to use the average distances of the two planets from the sun – the mean of the major and minor axes of the ellipses. This apparently gives 149,476,000 km for Earth (to 6 s.f.) and 108,209,000 km for Venus. That gives us a ratio of 1.381. Not within 99% of root 2, then – but not bad, only out by about 2.4%. (I’m sure ‘sacred geometers’ will insist there is a ‘better’ way to circularize the orbits, but I think it would be hard to find one that is as neutral as this.)
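
For anyone who wants to check it, here is that arithmetic as a few lines of Python – my own quick sketch, using nothing beyond the figures quoted above and the fact that a square’s circumscribed and inscribed circles have radii in the ratio of the square root of 2:

```python
# A quick check of the arithmetic above, using only the mean distances quoted
# in the text (any other 'circularization' would give slightly different numbers).
import math

earth = 149_476_000   # km, mean Earth-Sun distance as quoted above
venus = 108_209_000   # km, mean Venus-Sun distance as quoted above

ratio = earth / venus
target = math.sqrt(2)   # circumscribed/inscribed circle ratio for a square

print(f"orbit ratio = {ratio:.3f}")                            # ~1.381
print(f"sqrt(2)     = {target:.3f}")                           # ~1.414
print(f"discrepancy = {100 * (target - ratio) / ratio:.1f}%")  # ~2.4%
```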

How do we know if this near-coincidence is mere chance or not? Well, a relatively simple test is to consider all the geometric figures of the type shown by the speaker, and see how much of numerical space is spanned by the corresponding ratios – given a leeway of, say, 3% either way, to allow for the fact that (as was explained to me) the real world lacks pure Platonic perfection. So I did this, considering just the inscribed and circumscribed circles for the perfect polygons up to the hexagon, along with a couple of others in which these polygons are adorned with equilateral triangles on each side (see Figure b). (I know the latter look a little contrived, but one of them was used in this context in the talk.) I’m sure one can come up with several other ‘geometric’ figures of this kind, but this seemed like a reasonable minimal set. The ratios concerned then cover the space between 1 and 2. With the exception of Mars/Jupiter, all of the planetary orbits produce a ratio within this range when we consider each planet in turn and the next one beyond it.

Now, at the low end of the range (close to 1), one can get more or less any number by using a sufficiently many-sided polygon. For hexagons, the two circles produce a ratio range of 1.12 to 1.19, allowing for 3% variation each way. And in any event, while I don’t know the exact number, it seems highly likely from the dynamics of solar-system formation that one can’t get two orbits too close together – I suspect a lower limit on the radius ratio of something like 1.1.

OK, so adding up all the ranges covered by these figures leaves us with just 32% or so of the range between 1 and 2 not included. In other words, draw two circles at random with a radius ratio of between 1 and 2, and there is a two in three chance that you can fit this ratio to one of these geometric figures with 3% precision. With seven pairs to choose from in the solar system, we’d expect roughly 4-5 of them to ‘fit’.
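
For anyone who wants to repeat the exercise, here is a minimal sketch of the coverage estimate in Python. It includes only the plain regular polygons (triangle to hexagon), not the triangle-adorned figures from the talk, so the fraction it reports falls short of the roughly two-thirds arrived at above – but it does reproduce the 1.12–1.19 hexagon interval mentioned earlier.

```python
# A sketch of the 'coverage' estimate described above, restricted to the plain
# regular polygons. For a regular n-gon the circumscribed and inscribed circles
# have radii in the ratio 1/cos(pi/n); each ratio is given the 3% leeway.
import math

def circle_ratio(n):
    """Circumscribed-to-inscribed circle radius ratio for a regular n-gon."""
    return 1.0 / math.cos(math.pi / n)

tolerance = 0.03          # the 3% leeway allowed either way
lo, hi = 1.0, 2.0         # the range of orbit ratios considered

intervals = sorted(
    (max(lo, r * (1 - tolerance)), min(hi, r * (1 + tolerance)))
    for r in (circle_ratio(n) for n in range(3, 7))   # triangle ... hexagon
)

# Merge any overlapping intervals and add up the total length they cover.
covered = 0.0
current_lo, current_hi = intervals[0]
for a, b in intervals[1:]:
    if a > current_hi:
        covered += current_hi - current_lo
        current_lo, current_hi = a, b
    else:
        current_hi = max(current_hi, b)
covered += current_hi - current_lo

for (a, b), n in zip(intervals, range(6, 2, -1)):   # sorted smallest ratio first
    print(f"{n}-gon: ratio interval {a:.2f} - {b:.2f}")
print(f"fraction of the range 1-2 covered: {covered / (hi - lo):.0%}")
```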

It took me less than an hour to figure this out using school maths. The speaker last week was hardly lacking in arithmetical skills, but it seems not to have occurred to him to test his ‘coincidences’ in this way. I can only understand that in one way: these numerological arguments are ones he desperately wants to believe, to a degree that blinds him to reason.

That was borne out by other statements about the ‘foolishness’ of science that would have been disproved with the most minimal of checking (such as that scientists discovered in 1987 that crystals with five-fold symmetry are possible, while Islamic artists had known that for centuries). The only explanation I can see is that of judgement being clouded by an anti-science agenda: we rarely question ‘facts’ that fit our preconceptions. I have to confess that I do find it troubling that educated people develop such an antipathy to science, and such a desperation to believe in some cosmic plan that generally turns out to be remarkably banal (whereby God fills nature with cheap number tricks), that they abandon all their critical faculties to embrace it – and to serve it up in a seemingly authoritative manner to people who don’t necessarily have the resources to assess it. I’d like to recommend to them the words of Adelard of Bath, who suggests that the intellectuals of the twelfth century had a rather more astute grip on these matters than we sometimes do today: “I do not detract from God. Everything that is, is from him, and because of him. But [nature] is not confused and without system, and so far as human knowledge has progressed it should be given a hearing. Only when it fails utterly should there be recourse to God.”

Wednesday, July 26, 2006

Genetic revisionism

Is it time for a thorough re-evaluation of how the genome is structured and how it operates? Recent work seems to be hinting that that could be so, as I’ve argued in an article in the August issue of Prospect, reproduced below (with small changes due to editing corrections that somehow got omitted). The idea is further supported by very recent work in Nature from Eran Segal at the Weizmann Institute and Jonathan Widom of Northwestern, which claims to have uncovered a kind of genomic code for DNA packing in nucleosomes. I suspect there is much more to come – this latest work already raises lots of interesting questions. Is it time to start probing the informational basis of the mesoscale structure of chromatin, which is clearly of great importance in transcription and regulation? I hope so. Watch this space.

[From Prospect, August 2006, p.14, with minor amendments]

The latest findings in genetics and molecular biology are revealing the human genome, the so-called “book of life”, to be messy to the point of incomprehensibility. Each copy is filled with annotations that change the meaning, there are some instructions that the book omits, the words overlap, and we don’t have a clue about the grammar.

Or maybe we simply have the wrong metaphor. The genome is no book, and the longer we talk about it in that way, the harder it will be to avoid misconceptions about how genes work.

But it’s not just the notion of the genome as a “list of parts” that now appears under threat. The entire central dogma of genetics—that a gene is a self-contained stretch of a DNA molecule that encodes instructions for making an enzyme, and that genetic inheritance works via DNA—is now under revision. It is not that this picture is wrong, but it is certainly incomplete in ways that are challenging the textbook image of how genes work.

Take, for example, the discovery in May by a team of French researchers that mice can show the physiological expression (the phenotype) of a genetic mutation that produces a spotty tail even if they don’t carry the mutant gene. Minoo Rassoulzadegan’s group in Nice found that the spotty phenotype can be induced by molecules of RNA that are passed from sperm to egg and then affect (in ways that are not understood) the operation of genes in the developing organism.

RNA is not normally regarded as an agent of inheritance. It is the ephemeral intermediate in the translation of genetic information on DNA to protein enzymes. According to the standard model of genetics, a gene on DNA is “transcribed” into an RNA molecule, which is then “translated” into a protein. In the book metaphor, the RNA is like a word that is copied from the pages of the genome and sent to a translator to be converted into the “protein” language.

The RNA transcripts are thrown away once they have been translated—they aren’t supposed to ferry genetic information between generations. Yet it seems that sometimes they can. That is profoundly challenging to current ideas about the role of genes in inheritance. It is not exactly a Lamarckian process (in which characteristics acquired by an organism through its interaction with the environment may be inherited)—but it is not consistent with the usual neo-Darwinian picture.

“Inheritance” via RNA is an example of so-called epigenetic variation: a change in an organism’s phenotype that is not induced by a change in its genome. Rassoulzadegan thinks that this offers nature a way of “trying out” mutations in the Darwinian arena without committing to the irreversible step of altering the genome itself. “It may be a way to induce variations quickly without taking the risk of changing primary DNA sequences,” he says. “Epigenetic variations in the phenotype are reversible events, and probably more flexible.” He also suspects they could be common.

There is even evidence that genetic mutations can be “corrected” in subsequent generations, implying that back-up copies of the original genetic information are kept and passed on between generations. This all paints a more complex picture than the standard neo-Darwinian story of random mutation pruned by natural selection.

Epigenetic influences on the development of organisms have been known for a long time. For example, identical twins with the same genomes don’t necessarily have the same physical characteristics or disposition to genetic disease. It’s clear that the influence of genes may be altered by their environment: a study in 2005 showed that differences in the activity of genes in identical twins become more pronounced with age, as the messages in the genome get progressively more modified. These “books” are constantly being revised and edited.

One way in which this happens is by chemical modification of DNA, such as the attachment of tags that “silence” the genes. These modifications can be strongly influenced by environmental factors such as diet. In effect, such epigenetic alterations constitute a second kind of code that overwrites the primary genetic instructions. An individual’s genetic character is therefore defined not just by his or her genome, but by the way it is epigenetically annotated. So rapid genetic screening will provide only part of the picture for the kind of personalised medicine that has been promised in the wake of the genome project: merely possessing a gene doesn’t mean that it is “used.”

Even the basic idea of a gene as a self-contained unit of biological information is now being contested. It has been known for 30 years that a single gene can encode several different proteins: the genetic information can be reshuffled after transcription into RNA. But the situation is far more complex than that. The molecular machinery that transcribes DNA doesn’t just start at the beginning of a gene and stop at the end: it seems regularly to overrun into another gene, or into regions that don’t seem to encode proteins at all. (A sobering 98-99 per cent of the human genome consists of such non-coding DNA, often described as “junk.”) Thus, many RNA molecules aren’t neat copies of genetic “words,” but contain fragments of others and appear to disregard the distinctions between them. If this looks like sloppy work, that may be because we simply don’t understand how the processes of transcription and translation really operate, and have developed an over-simplistic way of describing them.

This is backed up by observations that some RNA transcripts are composites of “words” from completely different parts of the genome, as though the copyist began writing down one word, then turned several pages and continued with another word entirely. If that’s so, one has to wonder whether the notions of copying, translation and books really have much value at all in describing the way genetics works. Worse, they could be misleading, persuading scientists that they understand more than they really do and tempting them towards incorrect interpretations. Already there are disagreements about precisely what a gene is.

In 2002 US scientists reported that the known protein-encoding regions of the genome account for perhaps less than a tenth of what gets transcribed into RNA. It is hard to imagine that cells would bother with the energetically costly process of making all that RNA unless they needed to. So apparently genetics isn’t all, or perhaps even primarily, about making proteins from genetic instructions. “The concept of a gene may not be as useful as it once was”, admits Thomas Gingeras of the biotech company Affymetrix. He suggests that a gene may be not a piece of DNA but a collective phenomenon involving a whole group of protein-coding and non-coding RNA transcripts. Perhaps these transcripts, not genes in the classical sense, are the fundamental functional units of the genome.

Scientists are constantly trying to express what they discover using concepts that are already familiar—not only when they communicate outside their field, but also when they talk among themselves. This is natural, and probably essential in order to gain a foothold on the slopes of new knowledge. But there’s no guarantee that those footholds are the ones that will lead us in the right direction, or anywhere at all. Relativity and quantum mechanics are still, after 100 years, deemed difficult and mysterious, not because they truly are but because we don’t really possess any good metaphors for them: the quantum world is not a game of billiards, the universe is not an expanding balloon and a light beam is not like a bullet train. Genetics, in contrast, looked accessible, because we thought we knew how to talk about information: that it is held as discrete, self-contained and stable packages of meaning that may be kept in data banks, copied, translated. Meaning is constructed by assembling those entities into linear strings organised by grammatical rules. The delight that accompanied the discovery of the structure of DNA half a century ago was that nature seemed to use this model too.

And indeed, the notion of information stored in the genome, passed on by copying, and read out as proteins, still seems basically sound. But it is looking increasingly doubtful that nature acts like a librarian or a computer programmer. It is quite possible that genetic information is parcelled and manipulated in ways that have no direct analogue in our own storage and retrieval systems. If we cleave to simplistic images of books and libraries, we may be missing the point.

Wednesday, July 12, 2006


Stormy Starry Night

Did Vincent van Gogh have a deep intuition for the forms of turbulence? That's what has been suggested by a recent mathematical analysis of the structure of his paintings. It seems that these display the statistical fingerprint of genuine turbulence – but only when the artist was feeling particularly turbulent himself. Well, maybe – I think that conclusion will have to await a more comprehensive analysis of the paintings. All the same, it is striking that other artists noted for their apparently turbulent canvases, such as Turner and Munch, don't seem to capture this same statistical signature in the correlations between patches of light and shade (I've received an analysis of Turner's stormy Mouth of the Seine, which confirms that this is so). Did van Gogh, then, achieve what Leonardo strove towards in his depictions of flowing water? I'll explore this further in my forthcoming new version of my 1999 book on pattern formation, The Self-Made Tapestry.
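
For the curious, the basic statistic behind this kind of analysis is easy to play with. Below is a minimal sketch (my own illustration, not the researchers’ code) of the sort of measurement involved: the statistics of luminance differences between pairs of pixels a distance R apart, which for genuine Kolmogorov turbulence would show a characteristic power-law scaling. The image filename is just a placeholder.

```python
# A minimal sketch of a luminance-increment analysis of a digitized painting.
# This illustrates the general idea, not the published method.
import numpy as np
from PIL import Image

def structure_function(img_path, separations):
    """Second-order structure function S2(R) = <(u(x+R) - u(x))^2> of image
    luminance, for a set of horizontal pixel separations R."""
    u = np.asarray(Image.open(img_path).convert("L"), dtype=float)
    return np.array([np.mean((u[:, r:] - u[:, :-r]) ** 2) for r in separations])

if __name__ == "__main__":
    Rs = np.array([1, 2, 4, 8, 16, 32, 64])
    s2 = structure_function("starry_night.jpg", Rs)   # placeholder filename
    # In Kolmogorov turbulence the velocity structure function scales as R^(2/3);
    # the question is whether the painting's luminance shows comparable scaling.
    exponent = np.polyfit(np.log(Rs), np.log(s2), 1)[0]
    print(f"apparent scaling exponent: {exponent:.2f}")
```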

Wednesday, June 21, 2006

Who’s afraid of nanoparticles?

Lots of people, it seems. They have the potential to become the new DDT or dioxins or hormone mimics, the invisible ingredients of our environment and our synthetic products that are suspected of wreaking biochemical havoc. I don’t say the notion is ridiculous, but we need to keep it in proportion. The Royal Society/RAE report on the hazards and ethics of nanotechnology did a good job of giving some perspective on the concerns: we shouldn’t take it for granted that nanoparticles are safe (even if their bulkier counterparts are), but neither are we totally ignorant about exposure to such particles, and it is unlikely that we can make many generalizations about their health risks. Certainly, we need legislation to stop these tiny grains from slipping through current health & safety regulations.

My recent article on the damaging effects that titania nanoparticles apparently have on mouse microglia, the defensive cells of the brain, will probably be welcomed as ammunition by those who want a moratorium on all nanoparticle research. It does not give grounds for that – this research is a long way from establishing actual neurotoxicity – but it does give pause for thought about nanoparticle sun creams. I rather suspect this stuff is not going to be a major hazard, but we can’t be sure of that, and I confess that I’d prefer to avoid them this summer.

For various mundane reasons, some comments by the EPA researchers involved in this work didn’t make it into the article. But they help to put the implications in perspective, and so I’m posting them here:
Responses from Dr. Bellina Veronesi
Question: Apart from sun creams, which consumer products that involve contact with the human body currently use titania nanoparticles?
Answer: Cosmetics, prosthetics (artificial joints, for example)
Question: You mention toothpaste and cosmetics - do you know of specific examples of these?
Answer: Most product labels would probably not note if a given chemical concentration was in the “nano” range. Often times, the titanium oxide is listed in the ingredients, but is not identified as "nano-." More specific information might be considered to be confidential business information (CBI) and not available.
Question: I'm finding it hard to see from the paper exactly how long the production of ROS tended to continue for after the microglia were exposed.
Answer: Over a 120 minute period, which was the extent of our measurements.
Question: It seems that the worry is not about the response per se, but that it is sustained.
Answer: The concern from the neurobiology/neurotoxicology point of view is that a cell type (the microglia), whose job it is to react to offending foreign stimuli in the brain by releasing free radicals (ROS), is doing just that in response to nanosize Titanium dioxide. If those free radicals are not neutralized by anti-oxidants present in the brain (Vitamin C, Vitamin E, superoxide dismutase), they can damage neurons.
But remember, these measurements were made in isolated microglia, so we can't yet say if it is neurotoxic. Rather, the next step would be to examine the consequences of ROS release in a more complex culture system consisting of mixtures of brain cells, including microglia and neurons. Based on those findings, we would then test in animals.
Question: What do we know about how such nanoparticles might get transported around the body? Can you say anything about the chances of them reaching the brain?
Answer: Experts such as Dr. Wolfgang Kreyling (GSF Institute for Inhalation Biology (Munich)) have shown that nanosize particles, such as TiO2, can leave the lungs of exposed animals and distribute to other organs. However, it is still undetermined whether TiO2 can cross the blood brain barrier and enter the brain.
Question: Can you say anything about whether the concentrations you studied might be realistic in terms of exposure levels?
Answer: It is not “good science” to extrapolate in vitro data to whole animal/human response. There are many obligatory steps/test models that must be tested first. Similarly, our study was not designed to assess whether the test concentrations used in the cell culture studies have relevance to those found in consumer products.
Question: How worrying are the results at this point, given that they are not in vivo studies?
Answer: This was a carefully designed study that followed a format prescribed in the nanoparticle scientific literature (Nel et al., Science 2006). That format entails moving from cell culture to animal testing in a tiered fashion. We are examining further the possibility that TiO2 may be neurotoxic in culture. If these results prove positive, we will adhere to the format and next test in more complex culture models that use neurons or dissociated whole brain. Results of these studies will determine if animal studies should be pursued.
Question: What are the major uncertainties about how the findings might translate to humans, and what are the next steps?
Answer: This study exposed TiO2 to isolated brain cells taken from a mouse. Within the confines of this model, it would be speculative to say what effects would occur in human cells, let alone a human being. Such a prediction requires an extremely lengthy course of testing, involving successively more complicated experimental models. As I noted in my previous answer, we will follow a format that allows for such sequential research.
Question: How do you feel about the fact that titania nanoparticles are currently in use in consumer products? Would you want to use such products yourself?
Answer: Nano-size TiO2 has been in commercial use/multiple routes of human exposure for several years, providing great benefits without incident. Numerous already published studies give TiO2 (nanosize, larger size) a clean bill of health.
The uniqueness of this study is that we are looking at the response of cells with very high resolution, state-of-the-art measurements. Again, this is the initial stage of a very lengthy experimental process the findings of which will provide better insight and guidance related to the use of such products.

Wednesday, June 14, 2006

To boldly go…?

My Nature article on NASA’s manned spaceflight program has drawn some flak, as I suspected it might. That’s good – it is only by compelling people to voice their arguments for such a goal, rather than taking its validity as a self-evident truth, that we can assess them. And how woeful they can seem. Certainly, I find it depressing when people suggest that the only way the human race can survive is to get off the planet as soon as possible. I’m struck also by the difference in attitude between the US and the rest of the world – it really does seem as though the national narrative of frontiers and pioneers in the States shapes so much of public thinking in a way that just doesn’t resonate elsewhere. I don’t want to be too critical of that – it’s surely driven some of the great American achievements of modern times. But I do wonder whether it has led to distortions of history all the way from Columbus’s voyage to the Apollo missions – for example, a failure to appreciate how much commerce and military dominance have played a part in such events. That's certainly brought out in the gobsmacking article by Michael Lembeck mentioned by one of the correspondents - read it and weep. I've responded to that on the Nature site.

Brian Enke makes a new point, however: that manned spaceflight is needed to keep the public on board. I took this up with Brian, and we've had what I think is a productive exchange (certainly more so than some of the stuff posted on the Nature weblog). Here it is:

Dear Brian,
Your comment on the Nature weblog raises an interesting point that I'd not heard put before: that a manned space program is necessary to keep the public on board in funding space science. As a pragmatic expedient, I could accept that. If the only way public funding of space science can be sustained is to provide the public with the justification that ultimately the aim is to get us ‘out there’, then so be it. But that would be a sad state of affairs, and one that I'd find somewhat intellectually dishonest. It may be that some space researchers do indeed feel this is the appropriate end goal, but I suspect that just as many would argue that it is right to fund intellectual inquiry into the universe and our place in it just as it is right to fund any other kind of basic 'big' science, or indeed to fund research on history or linguistics or to give money to the arts. High-energy physics does not need to appeal to any great mission beyond that of finding out about the world. Space science poses equally grand questions, and these ought to be justification enough. Certainly, I find it highly disingenuous to argue (as some have done, though you have not) that manned spaceflight is warranted for economic reasons such as space mining and tourism. I take the point that, in the current climate, science has to be able to 'sell itself', but I'd prefer that we can be as honest as possible about why it is being done.

I'm not against manned spaceflight per se. Indeed, I think we need it for good space science: the Hubble repair showed just why it is valuable to be able to put people into low-Earth orbit. And if we could put a person on Mars tomorrow, I'd be hugely excited. I simply don't see the latter as such a priority that it needs so much of NASA's budget channelled in that direction. This is really the point: the science is suffering, and for the sake of a politically motivated program. I agree that it would be wonderful for NASA to be given enough money to do all things well (I can't say I'd consider the money well spent that would go on a manned mission to the moon or Mars, but I'd rather that than have it spent on defence). But this isn't going to happen, it seems. With limited resources, it seems a tragedy to have to devote so much of them to what is basically a PR exercise.

The international scientific community owes NASA a huge debt of gratitude for the fantastic things it has done. But it does seem to me that Mike Griffin is now having to practice some realpolitik that goes against his inclinations, and that seems a shame.
Best wishes,
Phil


Hi Phil -
Thanks for the comments - it's always exciting to have a good dialogue on these issues. Most of the time, people end up stomping around, posturing, blind to any alternative viewpoints, and ultimately frustrated for that very reason.

I was simply pointing out how things are, motivationally, rather than how they "should be" (if anyone could ever possibly agree on that). I hear your words, respect your opinion, and agree with most of it... but I personally don't have a problem with science being MORE than a mere intellectual pursuit. If science can lead to something tangible, so much the better. That's how you tie into real funding... make a business case for the science. Otherwise, we're left over with the dregs - mere stipends that keep a program alive but lacking inspiration and excitement. Off my philosophy box... ;)

Here's a major point for you to consider quite carefully. You refer to "the NASA budget" below, as is common... but what exactly IS "the NASA budget". You can take the stipend view - our government doles out X amount of dollars to create Y number of high-tech jobs, and what comes out of it really doesn't matter. In that sad state of affairs, one science job equals one engineering job, more or less... and one science dollar equals one human exploration dollar. Again, what is accomplished doesn't matter - it's all a line-item in the federal budget - a happy-pill to placate the masses. OR you can take (IMHO) a far more progressive view - the NASA budget is a combination of individual programs, each of which has meaning and stands strongly on its own merits. Sounds much more honest, right? But watch out - here's the trap - in that progressive view, taking a dollar away from human spaceflight doesn't necessarily add a dollar to space science research. They are individual programs - and the budgets must (eventually) be justified individually. Taking a dollar from human spaceflight eliminates a dollar from the federal deficit. That's it, period.

As we all know, in reality, NASA is a combination of the two approaches... and it IS likely that taking a dollar from human spaceflight will (in the short-term) add a dollar, or 50 cents, or a dime, or whatever to space science. This effect is little more than a local optimization within the government bureaucracy. Long term, it's obvious that everyone at NASA loses, as I pointed out in my initial comments.

There's one other trap in the above paradigm.... or rather a juicy angle for us to exploit. If each program truly stands on its own merits, then a dollar of space science funding equals a dollar of hurricane Katrina relief or Iraq whatever-they're-doing-over-there. Better yet, it's a dollar of Medicare fraud. In this budget-view, there is ZERO reason to take money away from human spaceflight. Bob Park is notorious for preaching a flawed, divisive premise - he wants to fund space science at the explicit expense of the human spaceflight community. Winners vs Losers. I'm going to eat your cake because I like cake, and your piece looks really yummy. Why did he decide to pick on human spaceflight? Did a manned rocket land on his cat or something? Why isn't he on a rampage against Medicare - which wastes nearly twice the entire NASA budget every year funding fraudulent claims?

A robust space industry is either a national priority or it isn't. If space is important, we'll have plenty of funds for space science AND human exploration. If not, we won't. So... I'd rather dedicate my personal efforts toward increasing public awareness of the importance of space science AND exploration. That way, we all win.

Cheers,
- Brian

Dear Brian,
You almost persuade me. That's to say, if NASA budgeting really isn't a simplistic zero-sum game, then I can see the case for why I shouldn't be too concerned if the US administration wants to spend billions on a program that doesn't strike me as having much intrinsic value at this point in time. But it does seem clear that increases in spending on manned spaceflight have taken money from science projects, even if not in a directly zero-sum manner. I do understand and accept your point, however, that providing a vision that captivates the public might in the long term mean that the science gains too.

I suppose I also can't suppress some concern that that vision can lead to snowballing rhetoric that some people take seriously, so that we end up with chaps like the one on the Nature weblog who genuinely believe that humanity is doomed if we don't get on and colonize the moon or Mars quickly. That seems a bit unhealthy.

In any event, thanks very much for your considered response - it has given me something to think about.
With best wishes,
Phil


Hi Phil -
I've truly enjoyed our exchange, and I admire your open-mindedness. Perhaps you even want to throw in a mention of my recent science fiction novel, Shadows of Medusa? It turned out to be a fun read, I'm happy to say, and it touches on the theme we've been discussing - that science and human exploration can co-exist (though I take some fictional liberties and digressions, heh heh). If anyone out there wants to order the book, they should go through the website, though... because the full publisher price on Amazon is a total rip-off.

And I agree - there are some reasons for exploring and settling space thrown about occasionally that I don't personally agree with either. Sometimes you have to step back and say, "hmmm... no thanks." But on the other hand, most of the time the people with the opinions strongly believe them, and one has to be extremely cautious and respectful when stepping on people's beliefs. I suppose that's MY belief. :)

In fact, tying that position into the whole government budget and zero-sum game matter... I don't personally believe the government should be spending MY money (there's a charged term you hear all the time) on certain things that I don't see much intrinsic value in. And I reserve the right to complain about it occasionally. But I do have to recognize and accept that for every single government dollar spent, someone out there somewhere feels very strongly that it's being spent well.

So in the end, perhaps "almost" being persuaded by someone is a good thing, if it leads to a healthy dose of tolerance.

Oh, one more parting thought, spurred by what you noted below: "it does seem clear that increases in spending on manned spaceflight have taken money from science projects". Agreed again - it shouldn't be that way in a perfect world, but that's just the way it is. On the other hand, one can find examples of spending in human spaceflight leading directly to increases in science spending. Our LAMP instrument (in my day job, I work at the Southwest Research Institute) is a great example - it will fly on the Lunar Reconnaissance Orbiter, an entire juicy science mission funded/motivated with human exploration dollars. This is beautiful synergy - great science coupled directly with human exploration. Personally, I'd like to see more of that.

Cheers,
- Brian

Wednesday, May 31, 2006

Get lucky

William Perkin did 150 years ago, when he discovered the first aniline dye. (Luck had little to do, however, with the commercial success that he had from it.) In an article in Chemistry World I explore this and other serendipitous discoveries in chemistry. Perkin’s wonderful dye is shown in all its glory on the magazine’s cover, and here is the article and the leader that I wrote for Nature to celebrate the anniversary:

*************
Perkin, the mauve maker

150 years ago this week, a teenager experimenting in his makeshift home laboratory made a discovery that can be said without exaggeration to have launched the modern chemicals industry. William Perkin was an 18-year-old student of August Wilhelm Hofmann at the Royal College of Chemistry in London, where he worked on the chemical synthesis of natural products. In one of the classic cases of serendipity for which chemistry is renowned, the young Perkin chanced upon his famous ‘aniline mauve’ dye while attempting to synthesize something else entirely: quinine, the only known cure for malaria.

As a student of Justus von Liebig, Hofmann made a name for himself by showing that the basic compound called aniline that could be obtained from coal tar was the same as that which could be distilled from raw indigo. Coal tar was the residue of gas production, and the interest in finding uses for this substance led to the discovery of many other aromatic compounds. At his parents’ home in Shadwell, east London, Perkin tried to make quinine from an aniline derivative by oxidation, based only on the similarity of their chemical formulae (the molecular structures are quite different). The reaction produced only a reddish sludge; but when the inquisitive Perkin tried it with aniline instead, he got a black precipitate which dissolved in methylated spirits to give a purple solution. Textiles and dyeing being big business at that time, Perkin was astute enough to test the coloured compound on silk, which it dyed richly.

Boldly, Perkin resigned from the college that autumn and persuaded his father and brother to set up a small factory with him in Harrow to manufacture the dye, called mauve after the French for ‘mallow’. The Perkins and others (including Hofmann) soon discovered a whole rainbow of aniline dyes, and by the mid-1860s aniline dye companies already included the nascent giants of today’s chemicals industry.

(From Nature 440, p.429; 23 March 2006)

***************
A colourful past

The 150th anniversary of William Perkin’s synthesis of aniline mauve dye (see page 429) is more than just an excuse to retell a favourite story from chemistry’s history. It’s true enough that the story still offers plenty to delight in – Perkin’s extraordinary youth and good fortune, the audacity of his gamble in setting up business to mass-produce the dye, and the chromatic riches that so quickly flowed from the unpromising black residue of coal gas production. As a study in entrepreneurship it could hardly be bettered, for all that Perkin himself was a rather shy and retiring man.

But perhaps the most telling aspect of the story is the relationship that it engendered between pure and applied science. The demand for new, brighter and more colourfast synthetic dyes, along with means of mordanting them to fabrics, stimulated manufacturing companies to set up their own research divisions, and cemented the growing interactions between industry and academia.

Traditionally, dye-making was a practical craft, a combination of trial-and-error experimentation and the rote repetition of time-honoured recipes. This is not to say that the more ‘scholarly’ sciences failed sometimes to benefit from such empiricism – an interest in colour production led Robert Boyle to propose colour-change acidity indicators, for instance. But the idea that chemicals production required real chemical expertise did not surface until the eighteenth century, when the complexities of mordanting and multi-colour fabric printing began to seem beyond the ken of recipe-followers.

That was when the Scottish chemist William Cullen announced that if the mason wants cement, the dyer a dye and the bleacher a bleach, “it is the chemical philosopher who must supply these.” Making inorganic pigments preoccupied some of the greatest chemists of the early nineteenth century, among them Nicolas-Louis Vauquelin, Louis-Jacques Thénard and Humphry Davy. Perkin’s mauve was, however, an organic compound, and thus, in the mid-nineteenth century, rather more mysterious than metal salts. While the drive to understand the molecular structure of carbon compounds during this time is typically presented now as a challenge for pure chemistry, it owed as much to the profits that might ensue if the molecular secrets of organic colour were unlocked.

August Hofmann, Perkin’s one-time mentor, articulated the ambition in 1863: “Chemistry may ultimately teach us systematically to build up colouring molecules, the particular tint of which we may predict with the same certainty with which we at present anticipate the boiling point.” Both the need to understand molecular structure and the demand for synthetic methods were sharpened by chemists’ attempts to synthesize alizarin (the natural colourant of madder) and indigo. When Carl Graebe and Carl Liebermann found a route to the former in 1868, they quickly sold the rights to the Badische dye company, soon to become BASF. One of those who found a better route in 1869 was Ferdinand Riese, who was already working for Hoechst. (Another was Perkin.) These and other dye companies, including Bayer, Ciba and Geigy, had already seen the value of having highly skilled chemists on their payroll – something that was even more evident when they began to branch into pharmaceuticals in the early twentieth century. Then, at least, there was no doubt that good business needs good scientists.

(From Nature 440, p.384; 23 March 2006)

Platinum sales

We all know that platinum is a precious metal, but paying close to $3 million for a few grams of it seems excessive. Yet that is what a private art collector has just done. The inflated value of the metal in this case stems from how it is arranged: as tiny black particles scattered over a sheet of gummed paper so as to portray an image of the moon rising over a pond on Long Island in 1904. This is, in other words, a photograph, defined in platinum rather than silver. It was taken by the American photographer Edward Steichen, and in February it sold at Sotheby’s of New York for $2,928,000 – a record-breaking figure for a photo.

At the same sale, a photo of Georgia O’Keeffe’s hands recorded in a palladium print by Alfred Stieglitz in 1918 went for nearly $1.5 million (the story is told by Mike Ware here). Evidently these platinum-group images have become collector’s items.

The platinotype process was developed (excuse the pun) in the nineteenth century to address some of the shortcomings of silver prints. In particular, while silver salts have the high photosensitivity needed to record an image ‘instantly’, the metal fades to brown over time because of its conversion to sulphide by reaction with atmospheric sulphur gases. That frustrated John Herschel, one of the early pioneers of photography, who confessed in 1839 that ‘I was on the point of abandoning the use of silver in the enquiry altogether and having recourse to Gold or Platina’.

Herschel did go on to create a kind of gold photography, called chrysotype. But it wasn’t until the 1870s that a reliable method for making platinum prints was devised. The technique was created by the Englishman William Willis, and it became the preferred method for high-quality photography for the next 30 years. A solution of iron oxalate and chloroplatinate was spread onto a sheet coated with gum arabic or starch and exposed to light through the photographic negative. As the platinum was photochemically reduced to the finely divided metal, the image appeared in a velvety black that, because of platinum’s inertness, did not fade or discolour. ‘The tones of the pictures thus produced are most excellent, and the latter possess a charm and brilliancy we have never seen in a silver print’, said the British Journal of Photography approvingly.

To enrich his moonlit scene, Steichen added a chromium salt to the medium, which is trapped in the gum as a greenish pigment, giving a tinted, painterly image reminiscent of the night scenes painted by James McNeill Whistler. Steichen and Stieglitz helped to secure the recognition of photography as a serious art form in the USA.

Stieglitz’s use of palladium rather than platinum in 1918 reflects the demise of the platinotype. The metal was used to catalyse the manufacture of high explosives in World War I, and so it could not be spared for so frivolous a purpose: making shells was more important than making art.

(This article will appear in the July issue of Nature Materials.)

Friday, May 26, 2006

Science, voodoo… or just ideology?

The past few weeks have been a time of turmoil for economic markets. They have been lurching and plunging all over the place, prompting a rash of ‘explanations’ from market analysts. ‘The noise in the markets is the sound of everyone and his dog coming up with post-facto justifications for the apparently random movements in assets from gold to equities, copper and the dollar’, says Tom Stevenson in his 23 May Investment Column in the Daily Telegraph. ‘And while the latest rationalisation sounds plausible enough when you’ve just heard it, so too does the next believable but contradictory explanation.’

Full marks to Stevenson for his honesty. But then he goes on to suggest that ‘what drives financial markets is not the ebb and flow of investment ratios and economic statistics but the fickle and often lemming-like workings of investors’ minds… In other words, asset prices are falling simply because they’ve been rising sharply and investors have become more nervous.’ Now, I am not a highly paid investments analyst, but I can’t help feeling that Stevenson is hardly dropping a bombshell here. Weather forecasters would be unlikely to gain many plaudits by telling us that ‘tomorrow it will rain because some water vapour has evaporated and then condensed up in the sky.’

I suppose we should be grateful that Stevenson is at least not recycling one of the many voodoo tales to which he alludes. But his comments are a symptom of the extraordinary state of economic punditry today – which in itself is a reflection of the bizarre state of economic theory itself. When Thomas Carlyle called it the ‘dismal science’, he was (contrary to popular belief) making a judgement not about its quality but about its seemingly gloomy message. But ‘dismal’ hardly does the situation justice today – there is no other ‘science’, hard or soft, that has got itself into a comparably strange and parlous state.

Everyone knows that market statistics, such as commodity values, fluctuate wildly over a wide range of timescales (while, in the long term, showing generally steady growth). There is nothing particularly remarkable or surprising about that: clearly, the economy is a complex system (one of the most complex we know of, in fact), and such systems, whether they be earthquakes or landslides or biological populations or electronic circuits, show pronounced and seemingly random noise. What is unusual about economic noise, however, is that an awful lot of money rides on it.

That is why, rather than regard it indeed as noise, economists and market analysts are desperate to ‘explain’ it. Imagine a physicist looking through a magnifying glass at the wiggles in her data, and deciding to find a causal explanation for each individual spike. But that is precisely the game in market analysis.

The standard approach to this aspect of economic theory is as revealing as it is disturbing. Economic noise is a ‘bad thing’, because it seems to undermine the notion that economists understand the economy. And so it is banished. Noise, they say, has nothing to do with the operation of the market. In the ‘neoclassical’ theory that dominates all of academic economics today, markets are instantaneously in equilibrium, so that they display optimal efficiency and all goods find their way effectively to those who want them. So the marketplace would run as smoothly as the Japanese rail network – if only it did not keep getting disrupted by external ‘shocks’.

These shocks come from factors such as technological change – an idea that goes back to Marx – which force the market constantly to readjust itself. The very language of this process, in which economists talk of ‘corrections’ to the market, betrays their insistence that none of this is the fault of the market itself, which is simply doing its best to accommodate the nasty outside world. “Nothing more useless than listening to a newscaster tell us how the market just made a little ‘correction’”, says Joe McCauley of the University of Houston, who believes that ideas from physics can help explain what is really going on in economics. (I have a forthcoming article in Nature on this topic.)

“The economists incorrectly try to imagine that the system is in equilibrium, and then gets a shock into a new equilibrium state”, says McCauley. “But real economic systems are never in equilibrium. There is, to date, no empirical evidence whatsoever for either statistical or dynamic equilibrium in any real market. In their way of thinking, they have to treat one single point in a time series as ‘equilibrium’, and that is total nonsense. It’s completely unscientific.”

Unscientific perhaps – but politically useful. While the idea of perfect market efficiency rules, it is easy to argue that any tampering with market mechanisms – any regulation of free markets – is harmful to the economy and therefore pernicious. And that is the suggestion that has dominated the climate of US economic policy since the Reagan administration, as Paul Krugman points out in his excellent book Peddling Prosperity (W. W. Norton, 1994). Of course, the existence of an unregulated economy suits big business just fine, even if there is no objective reason at all to believe that it is the optimal solution to anything.

A little history makes it clear how economics got into this state. Adam Smith’s idea of an ‘invisible hand’ that matches supply to demand was a truly fundamental insight, elucidating how a market can be self-regulating. It chimed with the Newtonian tenor of Smith’s times, and led to the notion (which Smith himself never expressed as such) of ‘market forces’. Then at the end of the nineteenth century, the architects of what became the standard neoclassical economic theory of today – men like Francis Edgeworth and Alfred Marshall – were highly influenced by the ideas on thermodynamic equilibrium developed by scientists such as James Clerk Maxwell and Ludwig Boltzmann. Concepts of equilibration and balancing of forces, imported from physics, are to blame for the misleading notions at the heart of modern economics.

Ah, but there was still no escaping those fluctuations – certainly not in 1929, when they led to the biggest ever market crash and the Great Depression that followed. But at that time the scientific concept of noise, pioneered by Einstein and Marian Smoluchowski, was in its infancy. Much better understood was the concept of oscillations – of periodic movements. And so economists decided that, because sometimes prices rose and sometimes they fell, they must be ‘cyclic’, which is to say, periodic. This gave rise to the most extraordinary proliferation of theories about economic ‘cycles’ – there were Kitchin cycles, Juglar cycles, Kuznets and Kondratieff cycles… The Great Crash was then simply an unfortunate piling up of troughs in different cycles. Some were more sophisticated: in the 1930s US accountant Ralph Elliott proposed that markets wax and wane in waves based on the Fibonacci series, a piece of numerology that even today leaves analysts making forecasts based on the Golden Mean, as though they have been spending rather too long with The Da Vinci Code.

As a result, we are now seemingly stuck with the concept of the ‘business cycle’, a piece of lore so deeply embedded in economics that it is almost impossible to discuss market behaviour without it. So let’s get it straight: there is no business cycle in any meaningful sense – nothing cyclic about ‘bull’ and ‘bear’ markets. Sometimes traders decide to buy; sometimes they sell. Sometimes prices rise; sometimes they fall. Things go up and down. That is not a ‘cycle’. “There is no empirical evidence for cycles or periodicity”, says McCauley. “And there is also no evidence for periodic motion plus noise. There is only noise with [long-term] drift. So the better phrase would be 'business fluctuations'.” But ‘fluctuations’ sounds a little scary – as though there might be something unpredictable about it.
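
To see how easily ‘noise with drift’ can mimic bull and bear phases, here is a toy simulation – my own illustration, not anything from McCauley’s work, and all the parameter values are arbitrary:

```python
# A geometric random walk with a small upward drift: no periodic component is
# built in, yet the series shows long, irregular runs of rising and falling
# prices that could easily be read as 'bull' and 'bear' phases.
import numpy as np

rng = np.random.default_rng(0)
n_days = 2500                        # roughly ten years of trading days
drift, volatility = 0.0003, 0.01     # per-day log-return drift and noise size

log_returns = drift + volatility * rng.standard_normal(n_days)
prices = 100.0 * np.exp(np.cumsum(log_returns))   # price series starting at 100

# Measure the trend over a 50-day window and see how long each up or down
# stretch lasts. The run lengths are all over the place - nothing cyclic.
window = 50
trend = np.sign(prices[window:] - prices[:-window])
change_points = np.flatnonzero(np.diff(trend) != 0)
run_lengths = np.diff(change_points)

print("number of trend reversals:", len(change_points))
print("first few run lengths (days):", run_lengths[:10])
```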

And anyway, there is a neoclassical explanation for business cycles, so no need to worry. It was concocted by Finn Kydland and Edward Prescott, who won the 2004 Nobel prize for it. Surely that counts for something? Not in McCauley’s view: “The K&P model is totally misleading and, like many other examples in economics, should have been awarded a booby prize, certainly not a Nobel Prize.” This model gives the favoured explanation: those pesky fluctuations are ‘exogenous’, imposed from outside. The market is perfect; it’s just that the world gets in the way.

You’d think we must have a good understanding of business cycles, because some economic policies depend on it. One of the five criteria for the UK entering into the European monetary union – for adopting the euro – is that UK business cycles should converge with those of countries that now have the euro. This, I assumed, must be predicated on some model of what causes the business cycle – surely we wouldn’t make a criterion like this if we didn’t understand the phenomenon that underpinned it? So I wrote to the UK Treasury two years ago to ask what model they used to understand the business cycle. I’m still waiting for an answer.

Economics as a whole is not a worthless discipline – indeed, some of the recent Nobels (look at 1998, 2002 and 2005, for example) have been awarded for genuinely exciting work. But McCauley is not alone in thinking that its core is rotten. Steve Keen’s Debunking Economics (Pluto Press, 2001) demonstrates why, even if you accept the absurdly simplistic first principles of neoclassical microeconomic theory, the way the theory unfolds is internally inconsistent. Economists interested in using agent-based modelling to explore realistic agent behaviour and non-equilibrium markets have become so fed up with their exclusion from the mainstream that they are starting their own journal, the Journal of Economic Interaction and Coordination. Such models have made it very clear that Tom Stevenson’s hunch that market fluctuations come from herding behaviour – that the noise is intrinsic to the economy, not an external disturbance – is right on the mark. So much so, in fact, that it is odd and disheartening to see commentators still presenting this as though it were some kind of revelation.

Monday, May 22, 2006


Weird things in a bucket of water

That's all you need to punch a geometric hole in water. Take a look. When the bucket is rotated so fast that the depression in the central vortex reaches the bottom, it can develop a cross-section shaped like triangles, squares, pentagons and hexagons. My story about it is here.

Harry Swinney at Texas says that this isn't unexpected – a symmetry-breaking wavy instability is bound to set in eventually as the rotation speed rises. Harry has seen related things in rotating disk-shaped tanks (without a free water surface) created to model the flows on Jupiter (see Nature 331, 689; 1988).

The intriguing question is whether this has anything to do with the polygonal flows and vortices seen in planetary atmospheres – both in hurricanes on Earth and in the north polar circulation of Saturn. It's not clear that it does – Swinney points out that the Rossby number (the dimensionless number that dictates the behaviour in the planetary flows) is very different in the lab experiments. But he doesn't rule out the possibility that the phenomenon could happen in smaller-scale atmospheric features, such as tornadoes. Tomas Bohr tells me anecdotally that he's heard of similar polygonal structures having been produced in the 'toy tornadoes' made by Californian artist Ned Kahn – whose work, frankly, you should check out in any case.
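For anyone wondering what that comparison involves: the Rossby number is just Ro = U/(fL), where U is a typical flow speed, L a length scale and f = 2Ω sin(latitude) the Coriolis parameter set by the rotation rate Ω. Here is a rough sketch with my own illustrative numbers (not Swinney’s), which at least shows why the laboratory and planetary values come out differently.

import math

def rossby(U, L, omega, latitude_deg=90.0):
    """Rossby number Ro = U / (f L), with Coriolis parameter f = 2*omega*sin(latitude)."""
    f = 2.0 * omega * math.sin(math.radians(latitude_deg))
    return U / (f * L)

# Guessed numbers for a fast-spun bucket ~20 cm across, with flow speeds of the
# order of the rim speed...
print("lab bucket:     Ro ~ %.1f" % rossby(U=1.0, L=0.1, omega=10.0))

# ...versus Saturn's north-polar hexagon: winds ~100 m/s, scale ~10,000 km,
# rotation period ~10.7 hours, latitude ~78 degrees.
omega_saturn = 2 * math.pi / (10.7 * 3600)
print("Saturn hexagon: Ro ~ %.2f" % rossby(U=100.0, L=1.0e7, omega=omega_saturn, latitude_deg=78.0))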

Friday, May 19, 2006



The return of Dr Hooke

Yes, he seemed pleased to be back at the Royal Society after 303 years – though disconcerted at the absence of his portrait, especially while those of his enemies Oldenburg and Newton were prominently displayed. Hooke had returned to present his long-lost notes to the President, Sir Martin Rees. These, Hooke’s personal transcriptions of the minutes of the Royal Society between 1661 and 1682, were found in a cupboard in Hampshire and were due to be auctioned at Bonhams, until the Royal Society managed to raise the £1 million or so needed to strike a last-minute deal. The official hand-over happened on 17th May, and the chaps at the Royal Society decided it would be fun to have Hooke himself do the honours. But who should don the wig? Well, I hear that bloke Philip Ball has done a bit of acting…

I didn’t take much persuading, I have to admit. Partly this is because I can’t help feeling some affinity to Hooke – not because I am short with a hunched back, a “man of strange unsociable temper”, and constantly getting into priority disputes, but because I too was born on the Isle of Wight. (There is scandalously little recognition of that fact on the Island – I suspect that most residents do not realise that is why Hook Hill in Freshwater is so named.) But it was also because I could not pass up the opportunity to get my hands on his papers. I didn’t, however, reckon on getting half an hour alone with them. I had a good look through, but I’m afraid I’m not able to reveal any exclusive secrets about what they contain – not because I am sworn to silence, but because, what with Hooke’s incredibly tiny scrawl and my struggles to keep my stockings from falling down without garters, I didn’t get much chance to study them in detail. All the same, it was thrilling to see pages headed “21 November 1673: Chris. Wren took the chair.” And I enjoyed a comment that the Royal Society had received a letter from Antoni van Leeuwenhoek but deferred reading it until the next meeting because “it is in Low Dutch and very long.”

The anonymous benefactors truly deserve our gratitude for keeping these pages where they belong – at the Royal Society. There will be a video of the proceedings on the RS’s web pages soon.

Tuesday, May 16, 2006

Are chemists designers?

Not according to a provocative article by Martin Jansen and Christian Schön in Angewandte Chemie. They argue that 'design' in the strict sense doesn't come into the process of making molecules, because the freedom of chemists is so severely constrained by the laws of physics and chemistry. Whereas a true designer shapes and combines materials plastically to make forms and structures that would never have otherwise existed, chemists are simply exploring predefined minima in the energy landscape that determines the stable configurations of atoms. Admittedly, they say, this is a big space (the notion of 'chemical space' has recently become a hot topic in drug discovery) – but nonetheless all possible molecules are in principle predetermined, and their structures cannot be varied arbitrarily. This discreteness and topological fixity of chemical space means (they say) that "the possibility for 'design' is available only if the desired function can be realized by a structure with essentially macroscopic dimensions." You can design a teapot, but not a molecule.

Chemists won't like this, because they (rightly) pride themselves on their creativity and often liken their crafting of molecules to a kind of art form. Having written about molecular and materials design in two books and many articles, I might be expected to share that response. And in fact I do, though I think that Jansen and Schön's article is extremely stimulating and useful, and makes some very pertinent points. I suppose that the most immediate and, I think, telling objection to their thesis is that the permutations of chemical space are so vast that it really doesn't matter much that they are preordained and discrete. One estimate gives the number of small organic molecules alone as 10^60, which is more than we could hope to explore (at today's rate of discovery/synthesis) in a billion years.
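That 'billion years' remark is easy to check. A back-of-envelope sum, assuming (very generously, and purely hypothetically) that the world's chemists made a million genuinely new small molecules every year:

# One published estimate puts small organic molecules alone at ~1e60.
chemical_space = 1e60
rate_per_year = 1e6          # hypothetical global rate of new compounds per year
years_needed = chemical_space / rate_per_year
print("years to exhaust chemical space: %.0e" % years_needed)        # ~1e+54
print("that is %.0e times a billion years" % (years_needed / 1e9))   # ~1e+45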

Given this immense choice, chemists must necessarily use their knowledge, intuition and personal preferences to guide them towards molecules worth making – whether that is just for the fun of it or because the products will fulfil a specific function. Designers do the same – they generally look for function and try to achieve it with a certain degree of elegance. The art of making a functional molecule is generally not a matter of looking for a complete molecular structure that does the job; it usually employs a kind of modular reasoning, considering how each different part of the structure must be shaped to play its respective role. We need a binding group here, a spacer group there, a hydrophilic substituent for solubility, and so on. That seems a lot like design to me.

Moreover, while it's true that one can't in general alter the length or angle of a bond arbitrarily, one can certainly establish principles that enable a more or less systematic variation of such quantities. For example, Roald Hoffmann and his colleagues have recently considered how one might compress carbon-carbon bonds in cage structures, and have demonstrated (in theory) an ability to do this over a wide range of lengths (see the article here). The intellectual process here surely resembles that of 'design' rather than merely 'searching' for stable states.

Jansen and Schön imply that true design must include an aesthetic element. That is certainly a dimension open to chemists, who regularly make molecules simply because they consider them beautiful. Now, this is a slippery concept – Joachim Schummer has pointed out that chemists have an archaic notion of beauty, defined along Platonic lines and thus based on issues of symmetry and regularity. (In fact, Platonists did not regard symmetry as aesthetically beautiful – rather, they felt that order and symmetry defined what beauty meant.) I have sometimes been frustrated myself that chemists' view of what 'art' entails so often falls back on this equating of 'artistic' with 'beautiful' and 'symmetric', thus isolating themselves from any real engagement with contemporary ideas about art. Nonetheless, chemists clearly do possess a kind of aesthetic in making molecules – and they make real choices accordingly, which can hardly be stripped of any sense of design just because they are discrete.

Jansen and Schön suggest that it would be unwise to regard this as merely a semantic matter, allowing chemists their own definition of 'design' even if technically it is not the same as what designers do. I'd agree with that in principle – it does matter what words mean, and all too often scientists co-opt and then distort them for their own purposes (and are obviously not alone in that). But I don't see that the meaning of 'design' actually has such rigid boundaries that it will be deformed beyond recognition if we apply it to the business of making molecules. Keep designing, chemists.

Wednesday, May 10, 2006

The Big Bounce

The discovery in 1998 that the universe is not just expanding but accelerating was inconvenient because it meant that cosmologists could no longer ignore the question of the cosmological constant. The acceleration is said to be caused by ‘dark energy’ that makes empty space repulsive, and the most obvious candidate for that is the vacuum energy, due to the constant creation and annihilation of particles and their antiparticles. The problem is that quantum theory implies that this energy should be enormous – too great, in fact, to allow stars and galaxies to form at all. So long as the cosmological constant could be assumed to be zero, it was reasonable to imagine that this energy was somehow cancelled out perfectly by another aspect of physical law, even if we didn’t know what that was. But now it seems that such ‘cancellation’ is not perfect, but is absurdly fine-tuned to within a whisker of zero: to one part in 10^120, in fact. How do we explain that?
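Where does that absurd number come from? The standard back-of-envelope estimate (nothing specific to the proposal discussed below) is to cut quantum field theory off at the Planck scale, work out the vacuum energy density that implies, and compare it with the dark-energy density actually observed:

import math

hbar, c, G = 1.055e-34, 2.998e8, 6.674e-11       # SI values
l_planck = math.sqrt(hbar * G / c**3)            # ~1.6e-35 m
E_planck = math.sqrt(hbar * c**5 / G)            # ~2e9 J
rho_qft = E_planck / l_planck**3                 # 'predicted' vacuum energy density, J/m^3

H0 = 70e3 / 3.086e22                             # Hubble constant, s^-1
rho_crit = 3 * H0**2 / (8 * math.pi * G)         # critical density, kg/m^3
rho_lambda = 0.7 * rho_crit * c**2               # observed dark-energy density, J/m^3

print("predicted / observed ~ 1e%d" % round(math.log10(rho_qft / rho_lambda)))
# The naive Planck-scale cutoff gives a mismatch of ~1e122-1e123, conventionally
# quoted as 'about 120 orders of magnitude'.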

A new proposal invokes a cyclic universe. I asked one of its authors, Paul Steinhardt, about the idea, and he made some comments which didn’t find their way into my article but which I think are illuminating. So here they are. Thank you, Paul.


PB: How is a Big Crunch driven, in a universe that has been expanding for a trillion years or so with a positive cosmological constant, i.e. a virtually empty space? I gather this comes from the brane model, where the cyclicity is caused by an attractive potential between the branes that operates regardless of the matter density of the universe - is that right?

PS: Yes, you have it exactly right. The cycles are governed by the spring-like force between branes that causes them to crash into one another at regular intervals.

PB: What is your main objection to explaining the fine-tuning dilemma using the anthropic principle? One might wonder whether it is more extravagant to posit an infinite number of universes, with different fundamental constants, or a (quasi?)infinite series of oscillations of a single universe.

PS: I have many objections to the anthropic principle. Let me name just three:
a) It relies on strong, untestable* assumptions about what the universe is like beyond the horizon, where we are prevented by the laws of physics from performing any empirical tests.
b) In current versions, it relies on the idea that everything we see is a rare/unlikely/bizarre possibility. Most of the universe is completely different - it will never be habitable; it will never have physical properties similar to ours; and so on. So, instead of looking for a fundamental theory that predicts what we observe as being LIKELY, we are asked to accept a fundamental theory that predicts what we see is UNLIKELY. This is a rather significant deviation from the kind of scientific methodology that has been so successful for the last 300 years.

*I would like to emphasize that I said "untestable assumptions". Many proponents of the anthropic model like to argue that they make predictions and that those predictions can be tested. But, it is important to appreciate that this is not the standard that must be reached for proper science. You must be able to test the assumptions as well. For example, the Food and Drug Administration (thankfully) follow proper scientific practice in this sense. If I give you a pill and "predict" it will cure your cold; and then you take the pill and your cold is cured; the FDA is not about to give its imprimatur to your pill. You must show that your pill really has the active ingredient that CAUSED the cure. Here, that means proving that there is a multiverse, that the cosmological constant really does vary outside our horizon, that it follows the kind of probability distribution that is postulated, etc. – all things that cannot ever be proved because they entail phenomena that lie outside our allowed realm of observation.

PB: Could you explain how your model of cyclicity and decaying vacuum energy leads to an observable prediction concerning axions - and what this prediction is? (What are axions themselves, for example?)

PS: This may be too much for your article, but....
Axions are fields that many particle physicists believe are necessary to explain a well-known difficulty of the "standard model" of particles called "the strong CP problem." For cosmological purposes, these are examples of very light, very weakly interacting fields that very slowly relax to the small value required to solve the strong CP problem. In string theory, there are many analogous light fields; they control the size and shape of extra dimensions; they are also light and slowly relax.

A potential problem with inflation is that inflation excites all light fields. It excites the field responsible for inflation itself, which is what gives rise to the temperature variations seen in the cosmic microwave background and is responsible for galaxy formation. So this is good.

But what is bad, potentially, is that it also excites the axion and all the other light degrees of freedom. This acts like a new form of energy in the universe that can overtake the radiation and change the expansion history of the universe in a way that is cosmologically disastrous. So, you have to find some way to quell these fields before they do their damage. There is a vast literature on complex mechanisms for doing this. Even so, some have become so desperate as to turn to the anthropic principle once again (maybe we live in the lucky zone where these fields aren't excited).

In the cyclic models, these fields would only be excited when the cosmological constant was very large, which is a long, long, LONG time ago. There have been so many cycles (and these do not disturb the axions or the other fields) that there has been plenty of time for them to relax away to negligible values.

In other words, the same concept being used to solve the cosmological constant problem – namely, more time – is also automatically ensuring that axions and other light fields are not problematic.

Friday, May 05, 2006

Myths in the making

Or the unmaking, perhaps. It was such a lovely story: a mysterious but very real force of attraction between objects caused by the peculiar tendency of empty space to spawn short-lived quantum particles has a maritime analogue in which ships are attracted because of the suppression of long-wavelength waves between them. That’s what was claimed ten years ago, and it became such a popular component of physics popularization that, when he failed to mention it in his book ‘Zero’ (Viking, 2000; which explored this aspect of the quantum physics of emptiness), Charles Seife was taken to task. But it seems that no such analogy really exists – or at least, that there is no evidence for it. The myth is unpicked here.

This vacuum force is called the Casimir effect, and was identified by Dutch physicist Hendrik Casimir in 1948 – though, being so weak and operating at such short distances, it wasn’t until the late 1990s that it was measured directly. It provides fertile hunting ground for speculative and sometimes plain cranky ideas about propulsion systems or energy sources that tap into this energy of the vacuum. (And it certainly seems that there is a lot of energy there – or at least, there ought to be, but something seems to cancel it out almost perfectly, which is why our universe can exist in its present form at all. Here’s a new idea for where all this vacuum energy has gone.)
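For a sense of how feeble and short-ranged the attraction is, here is the textbook result for two ideal, perfectly conducting parallel plates (a simplified geometry; the late-1990s measurements actually used a sphere and a plate, with corrections for real metals):

import math

hbar, c = 1.055e-34, 2.998e8     # SI values

def casimir_pressure(d):
    """Casimir pressure (Pa) between ideal parallel plates a distance d (metres) apart:
    P = (pi**2 / 240) * hbar * c / d**4."""
    return math.pi**2 * hbar * c / (240 * d**4)

for d_nm in (1000, 100, 10):
    print("gap %4d nm -> pressure %.3g Pa" % (d_nm, casimir_pressure(d_nm * 1e-9)))
# Roughly a millipascal at a 1-micron gap, but around atmospheric pressure by
# 10 nm - which is why the force only matters at very short range.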

So how did the false story of a naval analogy start? It was suggested in a paper in the American Journal of Physics by Sipko Boersma. Just as two closely spaced plates suppress the quantum fluctuations of the vacuum at wavelengths longer than the spacing between them, so Boersma proposed that two ships side by side suppress sea waves in a heavy swell. By the same token, Boersma suggests that a ship next to a steep cliff or wall is repelled, because the reflection of the ocean waves at the wall (without a phase shift, as occurs for electromagnetic waves) creates a kind of ‘virtual image’ of the ship within the wall, rolling perfectly out of phase – which reverses the sign of the force.

It sounds persuasive. But there doesn’t seem to be any evidence for such a force between real ships. The only real evidence that Boersma offered in his paper came from a nineteenth-century French naval manual by P. C. Caussée, where indeed a ‘certain attractive force’ was said to exist between ships moored close together. But Fabrizio Pinto has unearthed the old book, and he finds that the force was in fact said to operate only in perfectly calm (‘plat calme’, or flat calm) seas, not in wavy conditions. The engraving that Boersma showed from this manual was for a different set of circumstances, in a heavy swell (where the recommendation was simply to keep the ships apart so that their rigging doesn’t become entangled as they roll).

Regarding this discrepancy, Boersma says the following: “Caussée is not very exact. His mariners told him about ‘une certaine force attractive’ in calm weather and he made out of it an attraction on a flat sea… The reference to ‘Flat Calm’ is clearly an editing error; Caussée’s Album is not a scientific document. He should have referred his attraction to the drawing 14 ‘Calm with heavy swell’, or better still to the drawing 15 ‘Flat Calm’ but then modified with a long swell running. Having read my 1996 paper, one sees immediately what Caussée should have written.”

I’m not sure I follow this: it seems to mean not that Caussée made an ‘editing error’ but that he simply didn’t understand what he had been told about the circumstances in which the force operates. That might be so, but it requires that we take a lot on trust, and rewrite Caussée’s manual to suit a different conclusion. If Caussée was mistaken about this, should we trust him at all? And there doesn’t seem to be any other strong, independent evidence of such a force between ships.

But perhaps getting to the root of the confusion isn’t the point. The moral, I guess, is that it’s never a good idea to take such stories on trust – always check the source. Fabrizio says that scientists rarely do this; on the contrary, they embrace such stories as a part of the lore of their subject, and then react indignantly when they are challenged. “Because of the lamentable utter lack of philosophical knowledge background that afflicts many graduating students especially in the United States, sometimes these behaviors are closer to the tantrums of children who have learned too early of possible disturbing truths about Santa Claus”, he says. Well, that’s possible. Certainly, we would do well to place less trust in histories of science written by scientists, some of whom do not seem terribly interested in history as such but are more concerned simply to show how people stopped believing silly things and started believing good things (i.e. what we believe today). This Whiggish approach to history was abandoned by historians over half a century ago – strange that it still persists unchallenged among scientists. The ‘Copernican revolution’ is a favourite of physicists (it’s commonly believed that Copernicus got rid of epicycles, for instance), and popular retellings of the Galileo story are absurdly simplistic. (And while we’re at it, can we put an end to the notion that Giordano Bruno was burned at the stake because he believed in the heliocentric model? Or would that damage scientists’ martyr complex?) It may not matter so much that a popular idea about the Casimir effect seems after all to be groundless; it might be more important that this episode serves as a wake-up call not to be complacent about history.

Tuesday, May 02, 2006




Swarms

That's the title of an exhibition at the Fosterart gallery in Shoreditch, London, running until 14 May. The work is by Farah Syed, and there are examples of it here. Farah tells me she is interested in complexity and self-organization: "sudden irregularities brought about by a minute and random event; a swarm reassembling itself after the disturbance in its path." Looks to me like an interesting addition to the works of art that have explored ideas and processes related to complexity, several of which were discussed in Martin Kemp's book Visualizations (Oxford University Press, 2000).

Thursday, April 27, 2006

It’s flat, it’s hot, and it’s very weird

Graphene, that is. I have been talking to some fellows about this new wonder-stuff, which wowed the crowds at the American Physical Society meeting in March. Mainly to Andre Geim at Manchester, who is one of those wry chaps you feel you can inherently trust not to load you down with hype. I’m working on a feature on this for New Scientist, which will delve into the decidedly wacky physics of these single-atom-thick sheets of pure carbon. It’s not your ordinary two-dimensional semimetal (yes I know, name me another), mainly because the electrons behave as though they are travelling at close to the speed of light. So here’s an everyday material in which one can investigate Dirac’s relativistic quantum mechanics, which normally applies only in the kind of astrophysical environment you wouldn’t want to end up in by mistake. Anyway, that’s to come. By way of an hors d’oeuvre, here’s a short piece on the materials aspects of graphene which will appear in the June issue of Nature Materials:


Carbon goes flat out

Graphene has revealed itself from a direction that, in retrospect, seems opposite to what one might have expected. First came the zero-dimensional form: C60 and the other fullerenes, nanoscopically finite in every direction. Then there was the carbon nanotube, whose one-dimensional, tubular form set everyone thinking in terms of fibres and wires. It was just two years ago that the two-dimensional form, graphene itself, appeared: flat sheets of carbon one atom thick (Novoselov et al., Science 306, 666; 2004), which, when stacked in the third dimension, return us to familiar, lustrous graphite.

Now it’s tempting to wonder if the earlier focus on reduced dimensionality and curvature may have been misplaced. C60 is a fascinating molecule, but useful materials tend to be extended in at least one dimension. Carbon nanotubes can be matted into ‘bucky paper’, but without exceptional strength. Long, thin single-molecule transistors are all very well, but today’s microelectronics is inherently two-dimensional. Graphene is the master substance of all these structures, and perhaps, so far as materials and electronics are concerned, sheets were what we needed all along.

You can cut up these sheets into device-styled patterns – but that’s best done with chemistry (etching with an oxygen plasma, say), since attempts to tear single-layer graphene with a diamond tip just blunt the tip. (As carbon nanotubes have shown, graphite’s reputation for weakness gives a false impression.) And graphene is a semimetal with a tunable charge-carrier density that makes it suitable for the conducting channel of transistors.

But its conductivity is more extraordinary than that. For one thing, the electron transport is ballistic, free from scattering. That recommends graphene for ultrahigh-frequency electronics, since scattering processes limit the switching speeds. More remarkably, the mobile electrons behave as Dirac fermions (Novoselov et al., Nature 438, 197; 2005), which mimic the characteristics of electrons travelling close to the speed of light.
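To put a little flesh on ‘Dirac fermions’ (this is the standard textbook picture, not anything specific to the papers cited): near the band-crossing ‘Dirac points’ the electron energy grows linearly with wavevector, E = ħv_F k, with the Fermi velocity v_F of about 10^6 m/s (roughly c/300) playing the part of the speed of light, instead of the usual quadratic dependence found in an ordinary semiconductor.

hbar = 1.055e-34             # J s
eV = 1.602e-19               # J per electronvolt
v_F = 1.0e6                  # Fermi velocity in graphene, m/s (~c/300)
m_eff = 0.067 * 9.109e-31    # electron effective mass in GaAs, for comparison (kg)

for k in (1e8, 5e8, 1e9):    # wavevectors in 1/m
    E_linear = hbar * v_F * k / eV                    # graphene-like, 'massless' dispersion (eV)
    E_parabolic = (hbar * k)**2 / (2 * m_eff) / eV    # conventional semiconductor (eV)
    print("k = %.0e /m : linear %.3f eV, parabolic %.3f eV" % (k, E_linear, E_parabolic))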

From the perspective of applications, however, one key question is how to make the stuff. Peeling away flakes of graphite with Scotch tape, or in fact just rubbing a piece of graphite on a surface (popularly known as drawing) will produce single-layer films – but neither reliably nor abundantly. Walt de Heer of the Georgia Institute of Technology and coworkers have recently flagged up the value of a method several years old, by which silicon carbide heated in a vacuum will decompose to form graphitic films one layer at a time (Berger et al., Science Express, doi:10.1126/science.1125925).

But maybe wet chemistry will be better still. Graphite was exfoliated (separated into layers) nearly 150 years ago by oxidation, producing platelets of water-soluble oxidized graphene, which may include single sheets. But reducing them triggers aggregation via hydrophobic interactions. This can be prevented by the use of amphiphilic polymers (Stankovich et al., J. Mat. Chem. 16, 155; 2006). Anchoring bare, single graphene sheets to a surface remains a challenge – but one that may benefit, in this approach, from the wealth of experience of organic chemists.

Wednesday, April 26, 2006

Condensed matter

A couple of brief items from the Institute of Physics Condensed Matter and Materials Physics meeting in Exeter, based on press releases that I wrote, are available here and here. The IoP got me into this because they've found CMMP particularly hard to 'sell' in the past. To the extent that the work in this area can involve exploring arcane electronic effects in strange and exotic solids at unearthly low temperatures, this isn't perhaps surprising. But condensed matter is at the heart of modern physics (contrary to popular impressions), and so it seems odd and mildly distressing that most people outside of physics don't even know what it is. To the world at large, physics is about quarks and cosmology. Why so? I'll come back to this.

All the old stuff

It’s on my web site. Here you’ll find writings on chemistry, physics, nanotechnology, science and art, alchemy, materials, water, colour and all sorts of other things. Also links to my books, reviews and some radio work.

I write monthly columns in Prospect and Nature Materials, and weekly science news for News@Nature.

My latest book, a biography of Paracelsus, is here (or here for the UK).

My latest article for News@Nature is based on a paper about the possibility of tsunamis in the Gulf of Mexico being triggered by hurricanes: you can read it here. I will be regularly adding extra info and comments on these stories on this blog.