Sunday, April 29, 2012

Fantastic colours

I have an article on physical colours in nature, and their mimicry in artificial systems, in the latest issue of Scientific American. All you can get online without a subscription is a ‘preview’. But I shall put an extended version of the piece on my website soon.

Friday, April 27, 2012

Bad faith

I have a new Muse piece up on Nature news – very little done in editing, so I’ll just give the link. I fear that there will be more griping about my being soft on religion, but I don’t see it that way at all. The fact that so many religious people have so little interest in the intellectual tradition of religion should cause far more concern among religious leaders than it does. Of course, maybe some of them like it that way, their followers passive and unquestioning. Anyway, the point is that you can disagree with Aquinas et al., but it is absurd to suggest that they were just deluded or lacking in analytical acumen. That isn’t in any way the implication of the Science paper discussed here, but I imagine some interpretations will take that angle.

Saturday, April 21, 2012

Imagine that!

I was a bit tetchy about Steven Poole’s criticisms in his review of The Music Instinct in the Guardian, although subsequent discussions with him helped me to understand why he raised them. But now I see I escaped lightly. In today’s Guardian Review, Steve comprehensively demolishes Jonah Lehrer’s new book Imagine, calling it a prime example of the sort of ‘neuroscientism’ which purports to explain everything about everyone with a few brightly coloured MRI scans. There is some seriously cruel stuff here: “‘For Shakespeare’, Lehrer affects to know, ‘the act of creation was inseparable from the act of connection.’” I confess to a degree of guilty pleasure in reading this unrelenting dissection, though I’d feel bad for Jonah if it wasn’t evident that it would take much more than this to tarnish his growing reputation as the next Malcolm Gladwell.

I suspect there is an element here of Steve’s contrarian nature rebelling against the way Lehrer has been otherwise universally hailed as a Wunderkind. But it’s not just that. I fully recognize Steve’s complaint about the current simplistic infatuation with neuroscientific jargon and imagery, as if saying that an activity activates the anterior superior temporal gyrus is equivalent to having explained it. I’ve not read Jonah’s book, and so have to reserve judgement about whether it really is a prime offender in this regard. But it’s certainly high time this tendency were put in its place.

A couple of reviewers of The Music Instinct who are neuroscientists were a bit sniffy about how it didn’t make more of the wonderful advances in understanding of musical activity that brain imaging has yielded. Now, there certainly have been significant discoveries made using those technologies – I think in particular of, say, Robert Zatorre’s work on the activation of reward centres when people experience ‘musical chills’, or Petr Janata’s amazing demonstration of harmonic maps imprinted on the grey matter (both of which I mention). But Dan Levitin, while generally quite nice to the book, seemed to want more about how “listening to music activates reward and pleasure circuits in brain regions such as the nucleus accumbens, ventral tegmental area and amygdala”. Ah, so that’s how music works! This was the kind of thing I intentionally omitted, rather than overlooked, because I think that at present it does little more than fool the easily impressed reader into thinking that we’ve really ‘got inside the brain’, while in truth we often have very little idea what these increases in blood flow signify about cognition.

I have to add, though, that this is the second book review I’ve read recently (the first being Richard Evans’ review in the New Statesman of A. N. Wilson’s little book on Hitler, which triggered an entertaining spat) that makes me wonder whether the Hatchet Award has upped the ante. I’m sure I’m not alone in my anxiety. [By the way, how do you put paragraph breaks into this new-look blogger tool?]

Sunday, April 15, 2012

Architectural designs

I have a paper on pattern formation in the March/April issue of Architectural Design, a special issue devoted to ‘material computation’. My piece is fairly old stuff, I confess, although this is a topic that architects are becoming increasingly interested in. I will put a version on my website, once I have figured out why it seems to have (temporarily?) vanished from the webosphere. But there’s a lot of other interesting stuff in this issue, some of which I have written about in my next column (May) for Nature Materials.

Friday, April 13, 2012

Something for the weekend

I was on BBC Radio 4’s Start the Week programme this week, still accessible here (for just a day or two) on BBC iPlayer. And a copy of the book just arrived in the post – it’s a fatty, out at the beginning of May. I’m currently most of the way through Peter Carey’s The Chemistry of Tears, and enjoying it as much as I knew I would.

Thursday, April 12, 2012

Touchy-feel chemistry

Here’s my latest Crucible column for Chemistry World.
___________________________________________________________

What does it feel like to be a molecule? Anthropomorphizing molecules is a familiar enough pedagogical trick – we’ve all seen those cutesy grinning balls-and-sticks in children’s texts on chemistry, and I’ve indulged in this exercise myself to explain the hydrogen-bonding arrangements of water. But perhaps we might stand to learn more from the opposite manoeuvre: not humanizing molecules, but molecularizing humans.

I was set thinking about this after seeing Jaron Lanier, the computer-science pioneer who coined the term ‘virtual reality’ and has done much to develop it as a technology, speak in New York about where VR may be headed. While describing exploratory research in which people are given non-human avatars (could you control a lobster body, say?), Lanier dropped one of those aperçus that reveal why he is where he is. This isn’t just an extravagant computer game, he said – in such manifestations VR can be considered to be exploring the pre-adaptations of the human brain. That’s to say, it shows us what kinds of physicality, beyond the bounds of the human body, our brains are equipped to adapt themselves to. This sort of pre-adaptation is a crucial aspect of evolution: a genetic mutation might not simply alter an existing function, for better or worse, but can sometimes unleash the potentiality already latent in the organism’s genetic program. That is quite probably how Hox genes came to facilitate whole new ranges of body plans.

And then one might ask – as Lanier did – whether, as well as lobsters, our brains have the capacity to make themselves at home in a ‘molecule’s body’. Of course, molecules, unlike lobsters, don’t move of their own volition. But might our brains in some sense be able to perceive and intuit the forces that molecules experience: to assemble such sensory data into a coherent image of the molecular world?

Why ask such a seemingly arcane question? Lanier suspects that the embodied experience of VR, by engaging more sensory processes than, say, just vision or logical thinking alone, can offer us new routes to understanding and problem-solving. This is demonstrably true. Lanier, an accomplished musician, pointed out how improvising instrumentalists find their fingers accessing solutions to harmonic or melodic problems – how do I get from here to there – that would be far harder to identify by just sitting down and thinking it out.

Chemists probably need less persuading of this than other scientists. You don’t tend to work out a complex synthesis in your head: you draw out the molecular structures, and the visual information doesn’t just record your thoughts but informs them. For some problems you need to get even more tactile, building molecular models and moving them around, turning and twisting to see if they will fit together as you’d like. That has surely been evident ever since John Dalton devised his wooden ball-and-stick models.

There are already signs that molecular science wants to take this notion of ‘feeling molecules’ to a deeper level. Some years ago I tried out the ‘haptic’ (touch-based) interface of an atomic force microscope developed by Metin Sitti’s group at Carnegie Mellon University in Pittsburgh. This allows the user to feel a representation, in real time, of what the AFM tip is ‘feeling’, such as the atomic topography of a surface and the forces that adsorbed molecules exert. It was certainly instructive – so much so that I remember the sensation vividly years later, just as I have never forgotten the feeling of putting my finger into mercury as a child. The haptic AFM felt quite different from the impression you’d get from an animation of what the instrument does: jerkier, somehow grittier.

Chemists have not so far made very extensive use of a more all-embracing VR. One exception is the Duke immersive Virtual Environment (DiVE) developed by the RISE science-education program at the Duke University Medical Center in Durham, North Carolina. This software can be used online, but is best experienced by the user fitted out with VR goggles and joystick manipulator in a small cube-shaped ‘theatre’ with images projected onto the walls and ceiling – a version of the CAVE created at the University of Illinois at Chicago.

Among the projects run for DiVE is ‘DiVe into Alcohol’, an experience that lets you follow the progress of ethanol molecules as they travel through an avatar’s gastrointestinal tract and become oxidized by the enzyme alcohol dehydrogenase in the liver. If you’re in Durham NC you can literally see for yourself: the RISE team offers an open house to all comers on Thursdays.

But Lanier seems to have something more ambitious in mind: the sensation of actually being a molecule. That sounds a little scary: what is it like to be oxidized by having your hydrogens pulled off? But who knows what insights we might gather in the process? Lanier is even exploring how to make such realizations governed by quantum rather than semiclassical rules. Might it be that the famously counterintuitive principles of quantum physics would become less so if we can actually experience them?

Thursday, April 05, 2012

Dreaming of ferroelectric sheep

Here’s the pre-edited version of another of my pieces for BBC Future (again, this link will only work outside the UK).
___________________________________________________________

There are some scientific discoveries that you never get to hear about simply because they’re too perplexing to bring news writers running. That’s likely to be true of findings reported by mechanical engineer Jiangyu Li of the University of Washington in Seattle, Yanhang Zhang of Boston University, and their colleagues. They’ve found that the tough, flexible tissue that makes up the aorta of pigs has the surprising property of ferroelectricity.

This arcane but technologically useful behaviour is found in certain crystals and liquid crystals. It’s a sort of electrical equivalent of magnetism – which explains the name: despite the absence of iron (ferrum) in the materials that show it, the phenomenon is called ferroelectricity because of its similarities to ferromagnetism, the kind of magnetism displayed by iron.

A ferroelectric substance is electrically polarized: one side has a positive electrical charge and the other a negative charge. This polarization can be switched to the opposite direction by placing the substance in an electric field that reorients the charges. It has its origin in an uneven distribution of electrical charges in the arrangement of constituent atoms or molecules. Just as a magnetic field can make a magnetized compass needle change direction, so an electric field can pull all the little electrical charges into a different alignment.

The switchability is why ferroelectric crystals are being studied for use in electronic memory devices, where binary data would be encoded in the electrical polarization of the memory elements. They are also used in heat sensors (the switching can be very sensitive to temperature), vibration sensors and switchable liquid-crystal displays.
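
A crude cartoon of what that switchability means may help – this is a toy in Python, not a physical model of any real ferroelectric, and the threshold value is arbitrary. The polarization keeps whatever sign it has until an opposing field exceeds a ‘coercive’ threshold, which is exactly the memory-like behaviour that makes these materials attractive for data storage.

# Toy ferroelectric 'bit': the polarization flips only when the applied
# field exceeds a coercive threshold; otherwise the previous state persists.
E_COERCIVE = 1.0   # arbitrary units, purely illustrative

def update_polarization(polarization, applied_field):
    if applied_field > E_COERCIVE:
        return +1
    if applied_field < -E_COERCIVE:
        return -1
    return polarization   # field too weak to switch: the state is remembered

p = +1
for E in [0.5, -0.5, -1.5, 0.0, 2.0]:
    p = update_polarization(p, E)
    print(f"field {E:+.1f} -> polarization {p:+d}")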

Li usually works on synthetic materials like these for applications such as energy harvesting and storage. He and his colleagues discovered ferroelectricity in pig aorta by placing a thin slice of it in a special microscope containing a sensitive needle tip that could detect the electrical polarization. They found that they could switch this polarization with an electric field.

Why on earth should any animal tissue be ferroelectric? Well, the living world does make use of some unexpected material properties. Bone, for example, is piezoelectric: it becomes electrically polarized, and so sets up an electric field, when squeezed. Piezoelectricity is also a useful kind of behaviour in technology: it is exploited, for instance, in pressure and vibration sensors like those in your computer keyboard. It seems that bony creatures use the principle too: the electrical response of bone to squeezing helps the tissue gauge the forces it experiences. In seashells, meanwhile, piezoelectricity helps prevent fracture by dissipating the energy of a shock impact as electricity.

OK – but ferroelectricity? Who needs that? Commenting on the findings, engineers Bin Chen and Huajian Gao have speculated that the property might supply another way for the tissue to register forces, and thus perhaps to monitor blood pressure. Or perhaps to sense blood temperature, or again to dissipate mechanical energy and prevent damage. Or even to act as a sort of ‘tissue memory’ in conjunction with (electrically active) nerves. Li, meanwhile, speculates that switching of the ferroelectricity might alter the way cholesterol, sugars or fats stick to and harden blood vessels.

Notice how these researchers have no sooner identified a new characteristic of a living organism than they start to wonder what it is for. The assumption is that there must be some purpose: that evolution has selected the property because it confers some survival benefit. In other words, the property is assumed to be adaptive. This is a good position to start from, because most material properties of tissues are indeed adaptive, from the flexibility of skin to the transparency of the eye’s cornea. But it’s possible that ferroelectricity could be just a side-effect of some other adaptive function of the tissue – a result of the way the molecules just happen to be arranged, which, if it does not interfere with other functions, will go unnoticed by evolution. Not every aspect of biology has a ‘purpose’.

All the same, tissue ferroelectricity could be handy. If Li is right to suspect that ferroelectricity can influence the way blood vessels take up fats, sugars or cholesterol, then switching it with an applied electric field might help to combat conditions such as thrombosis and atherosclerosis.

Paper: Y. Liu et al., Physical Review Letters 108, 078103 (2012).

Saturday, March 24, 2012

On looking good

I seem to have persuaded Charlotte Raven, and doubtless now many others too, that Yves Saint Laurent’s Forever Youth Liberator skin cream is the real deal. I don’t know if my ageing shoulders can bear the weight of this responsibility. All I can say in my defence is that it would have been unfair to YSL to allow my instinct to scoff to override my judgement of the science – on which, more here. But as Ms Raven points out, ultimately I reserve judgement. Where I do not reserve judgement is in saying, with what I hope does not seem like cloying gallantry, that it is hard to see why she would feel the need to consider shelling out for this gloop even if it does what it says on the tin. Or rather, it is very easy to understand why she feels the pressure to do so, given the ways of the world, but also evident that she has every cause to ignore it. No look, I’m not saying that if she didn’t look so good without makeup and rejuvenating creams then she’d be well advised to start slapping them on, it’s just that… Oh dear, this is a minefield.

Thursday, March 22, 2012

Wonders of New York

Here’s a piece about an event in NYC in which I took part at the end of last month.
______________________________________________________________________

It was fitting that the ‘Wonder Cabinet’, a public event at a trendy arthouse cinema in New York at the end of February, should have been opened by Lawrence Weschler, director of New York University’s Institute for the Humanities under whose auspices the affair was staged. For Weschler’s Pulitzer-shortlisted Mr Wilson’s Cabinet of Wonder (1995) tells the tale of David Wilson’s bizarre Museum of Jurassic Technology in Los Angeles, in which you can never quite be sure if the exhibits are factual or not (they usually are, after a fashion). And that was very much the nature of what followed in the ten-hour marathon that Weschler introduced.

Was legendary performance artist Laurie Anderson telling the truth, for instance, about her letter to Thomas Pynchon requesting permission to create an opera based on Gravity’s Rainbow? Expecting no reply from the famously reclusive author, she was surprised to receive a long epistle in which Pynchon proclaimed his admiration for her work and offered his enthusiastic endorsement of her plan. There was just one catch: he insisted that it be scored for solo banjo. Anderson took this to be a uniquely gracious way of saying no. I didn’t doubt her for a second.

The day had a remarkably high quota of such head-scratching and jaw-dropping revelations, the intellectual equivalent of those celluloid rushes in the movies of Coppola, Kubrick and early Spielberg. Even if you thought, as I did, that you knew a smattering about bowerbirds – the male of which constructs an elaborate ‘bower’ of twigs and decorates it with scavenged objects to lure a female – seeing them in action during a talk by ornithologist Gail Patricelli of the University of California at Davis was spectacular. Each species has its own architectural style and, most strikingly, its own colour scheme: blue for the Satin Bowerbird (which went to great lengths to steal the researchers’ blue toothbrushes), bone-white and green for the Great Bowerbird. Some of these constructions are exquisite, around a metre in diameter. But all that labour is only the precursor to an elaborate mating ritual in which the most successful males exhibit an enticing boldness without tipping into scary aggression. This means that female choice selects for a wide and subtle range of male social behaviour, among which are sensitivity and responsiveness to the potential mate’s own behavioural signals. And all this for the most anti-climactic of climaxes, an act of copulation that lasts barely two seconds.

Or take the octopus described by virtual-reality visionary (and multi-instrumentalist) Jaron Lanier, which was revealed by secretly installed CCTV to be the mysterious culprit stealing the rare crabs from the aquarium in which it was kept. The octopus would climb out of its tank (it could survive out of water for short periods), clamber into the crabs’ container, help itself and return home to devour the spoils and bury the evidence. And get this: it closed the crab-tank lid behind it to hide its tracks – and in doing so, offered what might be interpreted as evidence for what developmental psychologists call a ‘theory of mind’, an ability to ascribe autonomy and intention to other beings. Octopuses and squid would rule the world, Lanier claimed, if it were not for the fact that they have no childhood: abandoned by the mother at birth, the youngsters are passed on none of the learned culture of the elders, and so (unlike bowerbirds, say) must always begin from scratch.

All this orbited now close to, now more distant from the raison d’être of the event, which was philosopher David Rothenberg’s new book Survival of the Beautiful, an erudite argument for why we should take seriously the notion that non-human creatures have an aesthetic sense that exceeds the austere exigencies of Darwinian adaptation. It’s not just that the bowerbird does more than seems strictly necessary to get a mate (although what is ‘necessary’ is open to debate); the expression of preferences by the female seems as elaborately ineffable and multivalent as anything in human culture. Such reasoning of course stands at risk of becoming anthropomorphic, a danger fully appreciated by Rothenberg and the others who discussed instances of apparent creativity in animals. But it’s conceivable that this question could be turned into hard science. Psychologist Ofer Tchernichovski of the City University of New York hopes, for example, to examine whether birdsong uses the same musical tricks (basically, the creation and subsequent violation of expectation) to elicit emotion, by measuring in songbirds the physiological indicators of a change in arousal – heartbeat, say, or release of the ‘pleasure’ neurotransmitter dopamine – that betray an emotional response in humans. Even if you want to quibble over what this will say about the bird’s ‘state of mind’, the question is definitely worth asking.

But it was the peripheral delights of the event, as much as the exploration of Rothenberg’s thesis, that made it a true cabinet of wonders. Lanier elicited another ‘Whoa!’ moment by explaining that the use of non-human avatars in virtual reality – putting people in charge of a lobster’s body, say – amounts to an exploration of the pre-adaptation of the human brain: the kinds of somatic embodiments that it is already adapted to handle, some of which might conceivably be the shapes into which we will evolve. This is a crucial aspect of evolution: it’s not so much that a mutation introduces new shapes and functions, but that it releases a potential that is already latent in the ancestral organism. Beyond this, said Lanier, putting individuals into more abstract avatars can be a wonderful educational tool by engaging mental resources beyond abstract reasoning, just as the fingers of an improvising pianist effortlessly navigate a route through harmonic space that would baffle the logical mind. Children might learn trigonometry much faster by becoming triangles; chemists will discover what it means to be a molecule.

Meanwhile, the iPad apps devised by engagingly modest media artist Scott Snibbe provided the best argument I’ve so far seen for why this device is not simply a different computer interface but a qualitatively new form of information technology, both in cognitive and creative terms. No wonder it was to Snibbe that Björk (“an angel”, he confided) went to realise her multimedia ‘album’ Biophilia. Whether this interactive project represents the future of music or an elaborate game remains to be seen; for me, Snibbe’s guided tour of its possibilities evoked a sensation of tectonic shift akin to that I vaguely recall now on being told that there was this thing on the internet called a ‘search engine’.

But the prize for the most arresting shaggy dog story went again to Anderson. Her attempts to teach her dog to communicate and play the piano were already raised beyond the status of the endearingly kooky by the profound respect in which she evidently held the animal. But in the course of her recounting the dog’s perplexed discovery, during a mountain hike, that death, in the form of vultures, could descend from above – another 180 degrees of danger to consider – we were all suddenly reminded that we were in downtown Manhattan, just a few blocks from the decade-old hole in the financial district. And we suddenly felt not so far removed from these creatures at all.

Wednesday, March 21, 2012

The beauty of an irregular mind

Here’s the news story on this year’s Abel Prize that I’ve just written for Nature. You’ve always got to take a deep breath before diving into the Abel. But it is fun to attempt it.
___________________________________________________________

Maths prize awarded for elucidating the links between numbers and information.

An ‘irregular mind’ is what has won this year’s Abel Prize, one of the most prestigious awards in mathematics, for Endre Szemerédi of the Alfred Rényi Institute of Mathematics in Budapest, Hungary.

This is how Szemerédi was described in a book published two years ago to mark his 70th birthday, which added that “his brain is wired differently than for most mathematicians.”

Szemerédi has been awarded the prize, worth 6 million Norwegian kroner (about US$1 million), “for his fundamental contributions to discrete mathematics and theoretical computer science, and in recognition of the profound and lasting impact of these contributions on additive number theory and ergodic theory”, according to the Norwegian Academy of Science and Letters, which instituted the prize as a kind of ‘mathematics Nobel’ in 2003.

Mathematician Timothy Gowers of Cambridge University, who has worked in some of the same areas as Szemerédi, says that he has “absolutely no doubt that the award is extremely appropriate.”

Nils Stenseth, president of the Norwegian Academy of Science and Letters, who announced the award today, says that Szemerédi’s work shows how research that is purely curiosity-driven can turn out to have important practical applications. “Szemerédi’s work supplies some of the basis for the whole development of informatics and the internet”, he says. “He showed how number theory can be used to organize large amounts of information in efficient ways.”

Discrete mathematics deals with mathematical structures that are made up of discrete entities rather than smoothly varying ones: for example, integers, graphs (networks), permutations and logic operations. Crudely speaking, it entails a kind of digital rather than analogue maths, which helps to explain its relationship to aspects of computer theory.

Szemerédi was spotted and mentored by another Hungarian pioneer in this field, Paul Erdös, who is widely regarded as one of the greatest mathematicians of the 20th century – even though Szemerédi began his training not in maths at all, but at medical school.

One of his first successes was a proof of a conjecture made in 1936 by Erdös and his colleague Paul Turán concerning the properties of integers. They aimed to establish criteria for whether a series of integers contains arithmetic progressions – sequences of integers that differ by the same amount, such as 3, 6, 9…

In 1975 Szemerédi showed that any sufficiently dense set of integers must contain arithmetic progressions of any given length [1]. In other words, if you had to pick, say, 1 percent of all the numbers between 1 and some very large number N, you can’t avoid selecting some arithmetic progressions, provided N is big enough. This was the Erdös–Turán conjecture; the result is now known as Szemerédi’s theorem.
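
For the curious, here is a minimal Python sketch – nothing to do with Szemerédi’s famously intricate proof – that just brute-forces the simplest case. It picks roughly 1 percent of the numbers up to an N of 10,000 (both figures chosen purely for illustration) and hunts for a three-term progression; the theorem guarantees that, for N large enough, every subset of that density contains one, and a random sample like this will almost always oblige.

import random

def find_3term_progression(subset):
    # Return one 3-term arithmetic progression (a, a+d, a+2d) if present, else None.
    members = set(subset)
    nums = sorted(members)
    for i, a in enumerate(nums):
        for b in nums[i + 1:]:
            d = b - a
            if a + 2 * d in members:
                return (a, b, a + 2 * d)
    return None

N = 10_000
sample = random.sample(range(1, N + 1), N // 100)   # roughly 1 percent of 1..N
print(find_3term_progression(sample))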

The result connected work on number theory to graph theory, the mathematics of networks of connected points, which Erdös had also studied. The relationship between graphs and permutations of numbers is most famously revealed by the four-colour theorem, which states that it is possible to colour any map (considered as a network of boundaries) with four colours such that no two regions with the same colour share a border. The problem of arithmetic progressions can be considered analogous if one considers giving numbers in a progression the same colour.

Meanwhile, relationships between number sequences become relevant to computer science via so-called sorting networks, which are hypothetical networks of wires, like parallel train tracks, that sort strings of numbers into numerical sequence by making pairwise comparisons and then shunting them from one wire to another. Szemerédi and his Hungarian collaborators Miklós Ajtai and János Komlós discovered an optimal sorting network for parallel processing in 1983 [2], one of several of Szemerédi’s contributions to theoretical computer science.
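
To give a flavour of what a sorting network is – though this toy is nothing like the Ajtai–Komlós–Szemerédi construction – here is a minimal Python sketch of a fixed four-wire network. The crucial point is that the sequence of pairwise comparisons is decided in advance, independently of the data, which is what makes such networks so amenable to parallel hardware.

# Each pair (i, j) is a comparator: compare the values on wires i and j
# and swap them if they are out of order. This five-comparator sequence
# sorts any four inputs.
FOUR_WIRE_NETWORK = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]

def run_network(values, network):
    wires = list(values)
    for i, j in network:
        if wires[i] > wires[j]:
            wires[i], wires[j] = wires[j], wires[i]
    return wires

print(run_network([3, 1, 4, 2], FOUR_WIRE_NETWORK))   # [1, 2, 3, 4]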

When mathematicians discuss Szemerédi’s work, the word ‘deep’ often emerges – a reflection of the connections it often makes between apparently different fields. “He shows the advantages of working on a whole spectrum of problems”, says Stenseth.

For example, Szemerédi’s theorem brings number theory in contact with the theory of dynamical systems: physical systems that evolve in time, such as a pendulum or a solar system. As Israeli-American mathematician Hillel Furstenberg demonstrated soon after the theorem was published [3], it can be derived in a different way by considering how often a dynamical system returns to a particular state: an aspect of so-called ergodic behaviour, which relates to how thoroughly a dynamical system explores the space of possible states available to it.

Gowers says that many of Szemerédi’s results, including his celebrated theorem, are significant not so much for what they prove as for the fertile ideas developed in the course of the proof. For example, Szemerédi’s theorem made use of another of his key results, called the Szemerédi regularity lemma, which has proved central to the analysis of certain types of graphs.


References
1. E. Szemerédi, "On sets of integers containing no k elements in arithmetic progression", Acta Arithmetica 27: 199–245 (1975).

2. M. Ajtai, J. Komlós & E. Szemerédi, "An O(n log n) sorting network", Proceedings of the 15th Annual ACM Symposium on Theory of Computing, pp. 1–9 (1983).

3. H. Furstenberg, "Ergodic behavior of diagonal measures and a theorem of Szemerédi on arithmetic progressions", Journal d’Analyse Mathématique 31: 204–256 (1977).

Friday, March 16, 2012

Genetic origami

Here’s another piece from BBC Future. Again, for non-UK readers the final version is here.
_______________________________________________________________

What shape is your genome? It sounds like an odd question, for what has shape got to do with genes? And therein lies the problem. Popular discourse in this age of genetics, when the option of having your own genome sequenced seems just round the corner, has focused relentlessly on the image of information imprinted into DNA as a linear, four-letter code of chemical building blocks. Just as no one thinks about how the data in your computer is physically arranged in its microchips, so our view of genetics is largely blind to the way the DNA strands that hold our genes are folded up.

But here’s an instance where an older analogy with computers might serve us better. In the days when data was stored on magnetic tape, you had to worry about whether the tape could actually be fed over the read-out head: if it got tangled, you couldn’t get at the information.

In living cells, DNA certainly is tangled – otherwise the genome couldn’t be crammed inside. In humans and other higher organisms, from insects to elephants, the genetic material is packaged up in several chromosomes.

The issue isn’t, however, simply whether or not this folding leaves genes accessible for reading. For the fact is that there is a kind of information encoded in the packaging itself. Because genes can be effectively switched off by tucking them away, cells have evolved highly sophisticated molecular machinery for organizing and altering the shape of chromosomes. A cell’s behaviour is controlled by manipulations of this shape, as much as by what the genes themselves ‘say’. That’s clear from the fact that genetically identical cells in our body carry out completely different roles – some in the liver, some in the brain or skin.

The fact that these specialized cells can be returned to a non-specialized state capable of giving rise to any cell type – as shown, for example, by the cloning of Dolly the sheep from a mammary cell – indicates that the genetic switching induced by shape changes and other modifications of our chromosomes is at least partly reversible. The medical potential of getting cells to re-commit to new types of behaviour – in cloning, stem-cell therapies and tissue engineering – is one of the prime reasons why it’s important to understand the principles behind the organization of folding and shape in our chromosomes.

In shooting at that goal, Tom Sexton, Giacomo Cavalli and their colleagues at the Institute of Human Genetics in Montpellier, France, in collaboration with a team led by Amos Tanay of the Weizmann Institute of Science in Israel, have started by looking at the fruitfly genome. That’s because it is smaller and simpler than the human genome (but not too small or simple to be irrelevant to it), and also because the fly is genetically the best studied and understood of higher creatures. A new paper unveiling a three-dimensional map of the fly’s genome is therefore far from the arcane exercise it might seem – it’s a significant step in revealing how genes really work.

Scientists usually explore the shapes of molecules using techniques for taking microscopic snapshots: electron microscopy itself, as well as crystallography, which works out structures from the way beams of X-rays, electrons or neutrons are scattered by molecules stacked into crystals. But these methods are hard or impossible to apply to molecular structures as complex as chromosomes. Sexton and colleagues use a different approach: a method that reveals which parts of a genome sit close together. This allows the entire map to be patched together piece by piece.

It’s no surprise that the results show the fruitfly genome to be carefully folded and organized, rather than just scrunched up any old how. But the findings put flesh on this skeletal picture. The chromosomes are organized on many levels, rather like a building or a city. There are ‘departments’ – clusters of genes – that do particular jobs, sharply demarcated from one another by boundaries somewhat equivalent to gates or stairwells, where ‘insulator’ proteins clinging to the DNA serve to separate one domain from the next. And inactive genes are often grouped together, like disused shops clustered in a run-down corner of town.

What’s more, the distinct physical domains tend to correspond with parts of the genome that are tagged with chemical ‘marker’ groups, which can modify the activity of genes, rather as if buildings in a particular district of a city all have yellow-painted doors. There’s evidently some benefit for the smooth running of the cell in having a physical arrangement that reflects and reinforces this chemical coding.

It will take a lot more work to figure out how this three-dimensional organization controls the activity of the genes. But the better we can get to grips with the rules, the more chance we will have of imposing our own plans on the genome – silencing or reawakening genes not, as in current genetic engineering, by cutting, pasting and editing the genetic text, but by using origami to hide or reveal it.

Reference: T. Sexton et al., Cell 148, 458-472 (2012); doi:10.1016/j.cell.2012.01.010

Tuesday, March 13, 2012

Under the radar

I have begun to write a regular column for a new BBC sci/tech website called BBC Future. The catch is that, as it is funded (not for profit) by a source other than the licence fee, you can’t view it from the UK. If you’re not in the UK, you should be able to see the column here. It is called Under the Radar, and will aim to highlight papers/work that, for one reason or another (as described below), would be likely to be entirely ignored by most science reporters. The introductory column, the pre-edited version of which is below, starts off by setting out the stall. I have in fact 3 or 4 pieces published here so far, but will space them out a little over the next few posts.
_____________________________________________________________________

Reading science journalism isn’t, in general, an ideal way to learn about what goes on in science. Almost by definition, science news dwells on the exceptional, on the rare advances that promise (even if they don’t succeed) to make a difference to our lives or our view of the universe. But while it’s always fair to confront research with the question ‘so what?’, and while you can hardly expect anyone to be interested in the mundane or the obscure, the fact is that behind much if not most of what is done by scientists lies a good, often extraordinary, story. Yet unless they happen to stumble upon some big advance (or at least, an advance that can be packaged and sold as such), most of those stories are never told.

They languish beneath the forbidding surface of papers published by specialized journals, and you’d often never guess, to glance at them, that they have any connection to anything useful, or that they harbour anything to spark the interest of more than half a dozen specialists in the world. What’s more, science then becomes presented as a succession of breakthroughs, with little indication of the difficulties that intervene between fundamental research and viable applications, or between a smart idea and a proof that it’s correct. In contrast, this column will aim to unearth some of those buried treasures and explain why they’re worth polishing.

Another reason why much of the interesting stuff gets overlooked is that good ideas rarely succeed all at once. Many projects get passed over because at first they haven’t got far enough to cross a reporter’s ‘significance threshold’, and then when the work finally gets to a useful point, it’s deemed no longer news because much of it has been published already.

Take a recent report by Shaoyi Jiang, a chemical engineer at the University of Washington in Seattle, and his colleagues in the Germany-based chemistry journal Angewandte Chemie. They’ve made an antimicrobial polymer coating which can be switched between a state in which it kills bacteria (eliminating 99.9% of sprayed-on E. coli) and one where it shrugs off the dead cells and resists the attachment of new ones. That second trick is a valuable asset for a bug-killing film, since even dead bacteria can trigger inflammation.

The thing is, they did this already three years ago. But there’s a key difference now. Before, the switching was a one-shot affair: once the bacteria were killed and removed, you couldn’t get the bactericidal film back. So if more bacteria do slowly get a foothold, you’re stuffed.

That’s why the researchers have laboured to make their films fully reversible, which they’ve achieved with some clever chemistry. They make a polymer layer sporting dangling molecular ‘hairs’ like a carpet, each hair ending in a ring-shaped molecule deadly to bacteria. If the surface is moistened with water, the ring springs open, transformed into a molecular group to which bacteria can’t easily stick. Just add a weak acid – acetic acid, basically vinegar – and the ring snaps closed again, regenerating a bactericidal surface as potent as before.

This work fits with a growing trend to make materials ‘smart’ – able to respond to changes in their environment. Time was when a single function was all you got: a non-adhesive ‘anti-fouling’ film, say, or one that resists corrosion or reduces light reflection (handy for solar cells). But increasingly, we want materials that do different things at different times or under different conditions. Now there’s a host of such protean substances: materials that can be switched between transparent and mirror-like, say, or between water-wettable and water-repelling.

Another attraction of Jiang’s coating is that these switchable molecular carpets can in principle be coated onto a wide variety of different surfaces – metal, glass, plastics. The researchers say that it might be used on hospital walls or on the fabric of military uniforms to combat biological weapons. That sort of promise is generally where the journalism stops and the hard work begins, to turn (or not) this neat idea into mass-produced materials that are reliable, safe and affordable.

Reference: Z. Cao et al., Angewandte Chemie International Edition online publication doi:10.1002/anie.201106466.

Thursday, March 08, 2012

Science and politics cannot be unmixed

One of the leaders in this week’s Nature is mine; here’s the original draft.
____________________________________________________

Paul Nurse will not treat his presidency of the Royal Society as an ivory tower. He has made it clear that he considers that scientists have duties to fulfil and battles to fight beyond the strictly scientific, for example to “expose the bunkum” of politicians who abuse and distort science. This social engagement was evident last week when Nurse delivered the prestigious Dimbleby Lecture, instituted in memory of the British broadcaster Richard Dimbleby. Previous scientific incumbents have included George Porter, Richard Dawkins and Craig Venter.

Nurse identified support for the National Health Service, the need for an immigration policy that attracts foreign scientists, and inspirational science teaching in primary education as some of the priorities for British scientists. These and many of the other issues that he raised, such as increasing scientists’ interactions with industry, commerce and the media, and resisting politicization of climate-change research, are relevant around the globe.

All the more reason not to misinterpret Nurse’s insistence on a separation of science and politics: as he put it more than once, “first science, then politics”. What Nurse rightly warned against here is the intrusion of ideology into the interpretation and acceptance of scientific knowledge, as for example with the Soviet Union’s support of the anti-Mendelian biology of Trofim Lysenko. Given recent accounts of political (and politically endorsed commercial) interference in climate research in the US (see Nature 465, 686; 2010), this is a timely reminder.

But it is all too easy to apply this formula too simplistically. For example, Nurse also cited the rejection of Einstein’s “Jewish” relativistic physics by Hitler. But that is not quite how it was. “Jewish physics” was a straw man invented by the anti-Semitic and pro-Nazi physicists Johannes Stark and Philipp Lenard, partly because of professional jealousies and grudges. The Nazi leaders were, however, largely indifferent to what looked like an academic squabble, and in the end lost interest in Stark and Lenard’s risible “Aryan physics” because they needed a physics that actually worked.

Therein lies one reason to be sceptical of the common claim, repeated by Nurse, that science can only flourish in a free society. Historians of science in Nazi Germany such as Kristie Macrakis (in Surviving the Swastika; 1993) have challenged this assertion, which is not made true simply because we would like it to be so. Authoritarian regimes are perfectly capable of putting pragmatism before ideology. The scientific process itself is not impeded by state control in China – quite the contrary – and the old canard that Chinese science lacks innovation and daring is now transparently nonsense. During the Cold War, some Soviet science was vibrant and bold. Even the most notorious example of state repression of science – the trial of Galileo – is apt to be portrayed too simplistically as a conflict of faith and reason rather than a collision of personalities and circumstances (none of which excuses Galileo’s scandalous persecution).

There is a more edifying lesson to be drawn from Nazi Germany that bears on Nurse’s themes. This is that, while political (and religious) ideology has no place in deciding scientific questions, the practice of doing science is inherently political. In that sense, science can never come before politics. Scientists enter into a social contract, not least because they are not their own paymasters. Much if not most scientific research has social and political implications, often broadly visible from the outset. In times of economic and political crisis (like these), scientists must respond intellectually and professionally, and not merely by safeguarding their funding, important though that is.

The consequences of imagining that science can remain aloof from politics became acutely apparent in Germany in 1933, when the consensus view that politics was, as Heisenberg put it, an unseemly “money business” meant that most scientists saw no reason to mount concerted resistance to the expulsion of Jewish colleagues – regarded as a political rather than a moral matter. This ‘apolitical’ attitude can now be seen as a convenient myth that led to acquiescence and made it easy for the German scientists to be manipulated. It would be naïve to imagine that only totalitarianism could create such a situation.

The rare and most prominent exception to ‘apolitical’ behaviour was Einstein, whose outspokenness dismayed even his principled friends Max Planck and Max von Laue. “I do not share your view that the scientist should observe silence in political matters”, he told them. “Does not such restraint signify a lack of responsibility?” There was no hint of such a lack in Nurse’s talk. But we must take care to distinguish the political immunity of scientific reasoning from the political dimensions and obligations of doing science.

Wednesday, March 07, 2012

The unavoidable cost of computation

Here’s the pre-edited version of my latest news story for Nature. I really liked this work. I was lucky to meet Rolf Landauer before he died, and discovered him to be one of those people who are so genial, wry and unaffected that you aren’t awed by how phenomenally clever they are. He was also extremely helpful when I was preparing The Self-Made Tapestry, setting me straight on the genesis of notions about dissipative structures that sometimes assign the credit in the wrong places. Quite aside from that, it is worth making clear that this is in essence the first experimental proof of why Maxwell’s demon can’t do its stuff.
________________________________________________

Physicists have proved that forgetting is the undoing of Maxwell’s demon.

Forgetting always takes a little energy. A team of scientists in France and Germany has now demonstrated exactly how little.

Eric Lutz of the University of Augsburg and his colleagues have found experimental proof of a long-standing claim that erasing information can never be done for free. They present their result in Nature today [1].

In 1961, physicist Rolf Landauer argued that resetting one bit of information – say, setting a binary digit to zero regardless of whether it is initially 1 or 0 – must release at least a certain minimum amount of heat, proportional to the temperature, into the environment.

“Erasing information compresses two states into one”, explains Lutz, currently at the Free University of Berlin. “It is this compression that leads to heat dissipation.”

Landauer’s principle implies a limit on how low the energy dissipation – and thus consumption – of a computer can be. Resetting bits, or equivalent processes that erase information, are essential for operating logic circuits. In effect, these circuits can only work if they can forget – for how else could they perform a second calculation once they have done a first?
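
To put a number on that minimum: Landauer’s bound works out as kT ln 2 per erased bit, where k is Boltzmann’s constant and T the temperature – a back-of-envelope sketch in Python, with 300 K standing in for room temperature:

import math

k_B = 1.380649e-23          # Boltzmann constant, joules per kelvin
T = 300.0                   # roughly room temperature, in kelvin

limit_joules = k_B * T * math.log(2)
print(f"Landauer limit at {T:.0f} K: {limit_joules:.2e} J per bit")
print(f"  ...equivalent to about {limit_joules / 1.602176634e-19:.3f} eV")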

The work of Lutz and colleagues now appears to confirm that Landauer’s theory was right. “It is an elegant laboratory realization of Landauer's thought experiments”, says Charles Bennett, an information theorist at IBM Research in Yorktown Heights, New York, and Landauer’s former colleague.

“Landauer's principle has been kicked about by theorists for half a century, but to the best of my knowledge this paper describes the first experimental illustration of it”, agrees Christopher Jarzynski, a chemical physicist at the University of Maryland.

The result doesn’t just verify a practical limit on the energy requirement of computers. It also confirms the theory that safeguards one of the most cherished principles of physical science: the second law of thermodynamics.

This law states that heat will always move from hot to cold. A cup of coffee on your desk always gets cooler, never hotter. It’s equivalent to saying that entropy – the amount of disorder in the universe – always increases.

In the nineteenth century, the Scottish scientist James Clerk Maxwell proposed a scenario that seemed to violate this law. In a gas, hot molecules move faster. Maxwell imagined a microscopic intelligent being, later dubbed a demon, that would open and shut a trapdoor between two compartments to selectively trap ‘hot’ molecules in one of them and cool ones in the other, defying the tendency for heat to spread out and entropy to increase.

Landauer’s theory offered the first compelling reason why Maxwell’s demon couldn’t do its job. The demon would need to erase (‘forget’) the information it used to select the molecules after each operation, and this would release heat and increase entropy, more than counterbalancing the entropy lost by the demon’s legerdemain.

In 2010, physicists in Japan showed that information can indeed be converted to energy by selectively exploiting random thermal fluctuations, just as Maxwell’s demon uses its ‘knowledge’ of molecular motions to build up a reservoir of heat [2]. But Jarzynski points out that the work also demonstrated that selectivity requires the information about fluctuations to be stored.

He says that the experiment of Lutz and colleagues now completes the argument against using Maxwell’s demon to violate the second law, because it shows that “the eventual erasure of this stored information carries a thermodynamic penalty” – which is Landauer's principle.

To test this principle, the researchers created a simple two-state bit: a single microscopic silica particle, 2 micrometres across, held in a ‘light trap’ by a laser beam. The trap contains two ‘valleys’ where the particle can rest, one representing a 1 and the other a 0. It could jump between the two if the energy ‘hill’ separating them is not too high.

The researchers could control this height by the power of the laser. And they could ‘tilt’ the two valleys to tip the bead into one of them, resetting the bit, by moving the physical cell containing the bead slightly out of the laser’s focus.
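
For anyone who wants to picture the energy landscape, here is a toy version in Python – not the actual optical potential used in the experiment, just the generic shape: two valleys separated by a barrier, plus a ‘tilt’ that favours one side.

# Toy double-well: valleys near x = -1 and x = +1, a barrier at x = 0,
# and a linear tilt term that lowers one valley relative to the other.
def toy_potential(x, barrier=1.0, tilt=0.3):
    return barrier * (x**2 - 1.0)**2 + tilt * x

for x in [-1.0, 0.0, 1.0]:
    print(f"x = {x:+.1f}: U = {toy_potential(x):+.2f}")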

By very accurately monitoring the position and speed of the particle during a cycle of switching and resetting the bit, they could calculate how much energy was dissipated. Landauer’s limit applies only when the resetting is done infinitely slowly; otherwise, the energy dissipation is greater.

Lutz and colleagues found that, as they used longer switching cycles, the dissipation got smaller, but that this value headed towards a plateau equal to the amount predicted by Landauer.

At present, other inefficiencies mean that computers dissipate at least a thousand times more energy per logic operation than the Landauer limit. This energy dissipation heats up the circuits, and imposes a limit on how small and densely packed they can be without melting. “Heat dissipation in computer chips is one of the major problems hindering their miniaturization”, says Lutz.

But this energy consumption is getting ever lower, and Lutz and colleagues say that it’ll be approaching the Landauer limit within the next couple of decades. Their experiment confirms that, at that point, further improvements in energy efficiency will be prohibited by the laws of physics. “Our experiment clearly shows that you cannot go below Landauer’s limit”, says Lutz. “Engineers will soon have to face that”.

Meanwhile, in fledgling quantum computers, which exploit the rules of quantum physics to achieve greater processing power, this limitation is already being confronted. “Logic processing in quantum computers already is well within the Landauer regime, and one has to worry about Landauer’s principle all the time”, says physicist Seth Lloyd of the Massachusetts Institute of Technology.

References
1. Bérut, A. et al., Nature 483, 187-189 (2012).
2. Toyabe, S., Sagawa, T., Ueda, M., Muneyuki, E. & Sano, M. Nat. Phys. 6, 988-992 (2010).

Monday, February 27, 2012

Unmaking history

For Francophones, I have a piece in the February issue of La Recherche on spacetime cloaking, part of a special feature on invisibility. For some reason it’s not included in the online material. But here in any case is how it began in my mother tongue.
_______________________________________________________________

We all have experiences that we’d rather never happened – or perhaps that we just wish no one else had seen. Now researchers have shown how to carry out this kind of editing of history. They use the principles behind invisibility cloaks, which have already been shown to hide objects from light. But instead of hiding objects, we can hide events. In other words, we can apparently carve out a hole in spacetime so that no one on the outside can tell that whatever goes on inside it has ever taken place.

“Such speculations are not fantasy”, insist physicist Martin McCall of Imperial College in London and his colleagues, who came up with the idea last July [1]. They imagine a safe-cracker casting a spacetime cloak over the scene of the crime, so that he can open the safe and remove the contents while a security camera would see just a continuously empty room.

Suppose the cloak was used to conceal someone’s journey from one place to another. Because the device splices together the spacetime on either side of the ‘hole’, it would look as though the person vanished from the starting point and, in the blink of an eye, appeared at her destination. This would then create “the illusion of a Star Trek transporter”, the researchers say.

“It’s definitely a cool idea”, says Ulf Leonhardt, a specialist in invisibility cloaking at the University of St Andrews in Scotland. “Altering the history has been the metier of undemocratic politicians”, he adds, pointing to the way Soviet leaders would doctor photographs to remove individuals who had fallen from favour. “Now altering history has become a subject of physics.”

Lost in spacetime

Conventional invisibility cloaks hide objects by bending light rays around them and then bringing the rays back onto their original trajectory on the far side. That way, it looks to an observer as though the light has passed through an empty space where the hidden object resides. In contrast, the spacetime cloak would manipulate not the path of the rays but their speed. It would be made of materials that slow down light or speed it up. This means that some of the light that would have been scattered by the hidden event is ushered forward to pass before the event happens, while the rest is held back until after it has taken place.

These slowed and accelerated rays are then rejoined seamlessly so that there seems to be no gap in spacetime. It’s like bending rays in invisibility cloaks, except that they are bent not in space but in spacetime.

How do you slow down or speed up light? Both have been demonstrated already in some exotic substances such as ultracold gases of alkali metals: light has been both brought to a standstill and speeded up by a factor of 300, so that, bizarrely, a pulse seems to exit the system before it has even arrived. But the spacetime cloak needs to manipulate light in ways that are both simpler and more profound. Light is slowed down in any medium relative to its speed in a vacuum – that is precisely why it bends when it enters water or glass from air, causing the phenomenon of refraction. The amount of slowing down is measured by the refractive index: the bigger this value, the slower the speed relative to a vacuum.
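
The arithmetic here is as simple as it sounds – the speed in a medium is just c divided by the refractive index – as a quick check in Python with the usual textbook values shows:

c = 299_792_458.0   # speed of light in vacuum, metres per second
for material, n in [("water", 1.33), ("crown glass", 1.52)]:
    print(f"{material}: n = {n}, v = c/n = {c / n:.3e} m/s")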

In a spacetime cloak, the light must simply be slowed or speeded up relative to its speed before it entered the cloak. If the cloak itself is surrounded by some cladding material, then the light must be speeded up or retarded only relative to this – there’s no need for fancy tricks that seem to make light travel faster than its speed in a vacuum.

But obtaining perfect and versatile cloaking demands some sophisticated manipulation of the light, for which you need more than just any old transparent materials. For one thing, you need to alter both the electric and the magnetic components of the electromagnetic wave. Most materials (such as glass), being non-magnetic, don’t affect the latter. What’s more, the effects on the electric and magnetic components must be the same, since otherwise some light will be reflected as it enters the material – in this case, making the cloak itself visible. When the electric and magnetic effects are equalized, the material is said to be “impedance matched”. “For a perfect device, we need to modulate the refractive index while also keeping it impedance matched”, explains Paul Kinsler, McCall’s colleague at Imperial.

Hidden recipe

There aren’t really any ordinary materials that would satisfy all these requirements. But they can be met using the same substances that have been used already to make invisibility shields: so-called metamaterials. These are materials made from individual components that interact with electromagnetic radiation in unusual ways. Invisibility cloaks for microwaves have been built in which the metamaterial ‘atoms’ are little electrical circuits etched into copper film, which can pick up the electromagnetic waves like antennae, resonate with them, and re-radiate the energy. Because the precise response of these circuits can be tailored by altering their size and shape, metamaterials can be designed with a range of curious behaviours. For example, they can be given a negative refractive index, so that light rays are bent the wrong way. “Metamaterials that work by resonance offer a large range of strong responses that allow more design freedom”, says Kinsler. “They are also usually designed to have both electric and magnetic responses, which will in general be different from one another.”

Using a combination of these materials, McCall and colleagues offer a prescription for how to put together a spacetime cloak. It’s a tricky business: to divert light around the spacetime hole, one needs to change the optical properties of the cloaking material over time in a particular sequence, switching each layer of material by the right amount at the right moment. “The exact theory requires a perfectly matched and perfectly timed set of changes to both the electric and magnetic properties of the cloak”, says Kinsler.

The result, however, is a sleight of hand more profound than any that normal invisibility shields can offer. “If you turn an ordinary invisibility cloak on and off, you will see a cloaked object disappear and reappear”, explains Kinsler. “With our concept, you never see anything change at all.” At least, not from one side. The spacetime hole opened up by the cloak is not symmetrical – it operates from one side but not the other (although the cloak itself would be invisible from both directions). So an observer on one side might see an event that an observer on the other side will swear never took place.

Could such a device really be used to hide events in the macroscopic world? Physicist John Pendry, also at Imperial (but not part of McCall’s group) and one of the pioneers of invisibility cloaks, considers that unlikely. But he agrees with McCall and colleagues that there might well be more immediate and more practical applications for the technique. “Possible uses might be in a telecommunications switching station, where several packets of information might be competing for the same channel”, he says. “The time cloak could engineer a seamless flow in all channels” – by cloaking interruptions of one signal by another, it would seem as though all had simultaneously flowed unbroken down the same channel.

There could be some more fundamental implications of the work too. This manipulation of spacetime is analogous to what happens at a black hole. Here, light coming from the region near the hole is effectively brought to a standstill at the event horizon, so that time itself seems to be arrested there: an object falling into the hole seems, to an outside observer, to be stopped there forever. The parallel between transformation optics and black-hole physics has been pointed out by Leonhardt and his coworkers, who in 2008 revealed an optical analogue of a black hole made from optical fibres. Leonhardt says that the analogy exists for spacetime cloaks also, and that therefore these systems might be used to create the analogue of Hawking radiation: the radiation predicted by Stephen Hawking to be emitted from black holes as a result of the quantum effects of the distortion of spacetime. Such radiation has yet to be detected in astronomical observations of real black holes, but its production at the edge of a spacetime ‘hole’ made by cloaking would provide strong support for Hawking’s idea.

Unlike black holes, however, a spacetime cloak doesn’t really distort spacetime – it just looks as though it does. “I can certainly imagine a transformation device that gives the illusion that causal relationships are distorted or even reversed – a causality editor, rather than our history editor”, says Kinsler. “But the effects generated are only an illusion.”

In the pipeline

In order to manipulate visible light, the component ‘atoms’ of a metamaterial have to be about the same size as the wavelength of the light – less than a micrometre. This means that, while microwave invisibility cloaks have been put together from macroscale components, optical metamaterials are much harder to make.

There’s an easier way, however. Some researchers have realised that another way to perform the necessary light gymnastics is to use transparent substances with unusual optical properties, such as birefringent minerals in which light travels at different speeds in different directions. Objects have been cloaked from visible light in this way using carefully shaped blocks of the mineral calcite (Iceland spar).

In the same spirit, McCall and colleagues realised that sandwiches of existing materials with ‘tunable’ refractive indices might be used to make ‘approximate’ spacetime cloaks. For example, one could use optical fibres whose refractive indices depend on the intensity of the light passing through them. A control beam would manipulate these properties, opening and closing a spacetime cloak for a second beam.
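
An intensity-dependent refractive index of this sort is the optical Kerr effect. Here is a minimal sketch of the idea, with generic silica-fibre numbers assumed purely for illustration rather than taken from the McCall paper:

```python
# Minimal sketch of an intensity-dependent ('Kerr') refractive index, with
# generic silica-fibre numbers assumed for illustration (not values from the
# McCall paper): a bright control beam shifts the index, and hence the transit
# time, seen by a weaker probe beam.

def kerr_index(intensity, n0=1.45, n2=2.6e-20):
    """Refractive index n = n0 + n2 * I, with intensity I in watts per square metre."""
    return n0 + n2 * intensity

print(kerr_index(0.0))                       # control beam off: the ordinary index, 1.45
print(kerr_index(1.0e13) - kerr_index(0.0))  # intense control beam (~1 GW/cm^2): index shift ~2.6e-7
```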

However, as with the ‘simple’ invisibility cloaks made from calcite, the result is that although the object or event can be fully hidden, the cloak itself is not: light is still reflected from it. “Although the event itself can in principle be undetectable, the cloaking process itself isn't”, Kinsler says.

This idea of manipulating the optical properties of optical fibres for spacetime cloaking has already been demonstrated by Moti Fridman and colleagues at Cornell University [2]. Stimulated by the Imperial team’s proposal, they figured out how to put the idea into practice. They use so-called ‘time lenses’, which modify how a light wave propagates not in space, like an ordinary lens, but in time. Just as an ordinary lens manipulates how a beam spreads out in space, and can thus be used to expand or focus it, so a time lens uses the phenomenon of dispersion (the frequency dependence of the speed at which light travels through a medium) to separate frequencies in time, slowing some of them down relative to others.
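
To get a feel for the dispersion a time lens trades on, here is a hedged illustration with generic telecom-fibre numbers (assumed values, not the Cornell team’s actual parameters):

```python
# Hedged illustration of the dispersion a time lens exploits (generic
# telecom-fibre numbers, not the Cornell team's parameters): in a dispersive
# fibre, different frequency components travel at different speeds, so a chirped
# pulse fans out in time -- the temporal analogue of a lens spreading a beam.

D_PS_PER_NM_KM = 17.0  # assumed dispersion of standard fibre near 1550 nm, ps per nm per km

def arrival_spread_ps(bandwidth_nm, fibre_km):
    """Spread in arrival times between the fastest and slowest frequency components."""
    return D_PS_PER_NM_KM * bandwidth_nm * fibre_km

# A pulse 1 nm wide in wavelength, sent down 1 km of fibre, spreads over ~17 ps --
# roughly the size of the temporal gap opened in the Cornell experiment.
print(arrival_spread_ps(1.0, 1.0))
```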

Because of this equivalence of space and time in the two types of lens, a two-part ‘split time-lens’ can bend a probe beam around a spacetime hole in the same way as two ordinary lenses could bend a light beam around either side of an object to cloak it in space. In the Cornell experiment, a second split time-lens then restored the probe to its original state. In this way, the researchers could temporarily hide the interaction between the probe beam and a second short light pulse, which would otherwise cause the probe signal to be amplified. Fridman and colleagues presented their findings at a Californian meeting of the Optical Society of America in October. “It's a nice experiment, and achieved results remarkably quickly”, says Kinsler. “We were surprised to see it – we were expecting it might take years to do.”

But the spacetime cloaking in this experiment lasts only for a fleeting moment – about 15 picoseconds (trillionths of a second). And Fridman and colleagues admit that the material properties of the optical fibres themselves will make it impossible to extend the gap beyond a little over one millionth of a second. So there’s much work to be done to create a more complete and longer-lasting cloak. In the meantime, McCall and Kinsler have their eye on other possibilities. Perhaps, they say, we could also edit sound this way by applying the same principles to acoustic waves. As well as hiding things you wish you’d never done, might you be able to literally take back things you wish you’d never said?

1. M. W. McCall, A. Favaro, P. Kinsler & A. Boardman, Journal of Optics 13, 024003 (2011).
2. M. Fridman, A. Farsi, Y. Okawachi & A. L. Gaeta, Nature 481, 62-65 (2012).

Friday, February 24, 2012

Survival in New York

Well, I'm here and just thought it possible that someone in NYC might see this before tomorrow (25 Feb) is up. I'm taking part in this event, linked to David Rothenberg's excellent new book. It's free (the event, not the book), and promises to be great fun. If you're in Manhattan - see you tomorrow?

Thursday, February 16, 2012

Call to arms

I wrote a leader for this week’s Nature on the forthcoming talks for an international Arms Trade Treaty. Here’s the original version.
________________________________________________________________

Scientists have always been some of the strongest voices among those trying to make the world a safer place. Albert Einstein’s commitment to international peace is well known; Andrei Sakharov and Linus Pauling are among the scientists who have been awarded the Nobel Peace Prize, as is Joseph Rotblat – the subject of a new biography (see Nature 481, 438; 2012) – who shared the award with the Pugwash organization that he helped to found. This accords not only with the internationalism of scientific endeavour but with the humanitarian goals that mostly motivate it.

At the same time, the military applications of science and technology are never far from view, and defence funding supports a great deal of research (much of it excellent). There need be no contradiction here. Nations have a right to self-defence, and increasingly armed forces are deployed for peace-keeping rather than aggression. But what constitutes responsible use of military might is delicate and controversial, and peace-keeping is generally necessary only because aggressors have been supplied with military hardware in the first place.

Arms control is a thorny subject for scientists. When, at a session on human rights at a physics conference several years ago, Nature asked if the evident link between the arms trade and human-rights abuses might raise ethical concerns about research on offensive weaponry, the panel shuffled their feet and became tongue-tied.

There are no easy answers to the question of where the ethical boundaries of defence research lie. But all responsible scientists should surely welcome the progress in the United Nations towards an international Arms Trade Treaty (ATT), for which a preparatory meeting in New York next week presages the final negotiations in July. The sale of weapons, from small arms to high-tech missile systems, hinders sustainable development and progress towards the UN’s Millennium Development Goals, and undermines democracy.

Yet there are dangers. Some nations will attempt to have the treaty watered down. That the sole vote against the principle at the UN General Assembly in October 2009 was from Zimbabwe speaks volumes about likely reasons for opposition. But let’s not overlook the fact that in the previous vote a year earlier, Zimbabwe was joined by one other dissenter: the United States, still at that point governed by George W. Bush’s administration. Would any of the current leading US Republican candidates be better disposed towards an ATT?

Paradoxical as it might seem, however, a binding international treaty on the arms trade is not necessarily a step forward anyway. Most of the military technology used for recent human-rights abuses was obtained by legal routes. Such sales from the UK, for example, helped Libya’s former leaders to suppress ‘rebels’ in 2011 and enabled Zimbabwe to launch assaults in the Democratic Republic of Congo in the 1990s.

The British government admits it anticipates that the Arms Trade Treaty, which it supports, will not reduce arms exports. It says that the criteria for exports “would be based on existing obligations and commitments to prevent human rights abuse” – which have not been notably effective. According to the UK’s Foreign and Commonwealth Office (FCO), the ATT aims “to prevent weapons reaching the hands of terrorists, insurgents and human rights abusers”. But as Libya demonstrated, one person’s insurgents are another’s democratizers, while today’s legitimate rulers can become tomorrow’s human-rights abusers.

The FCO says that the treaty “will be good for business, both manufacturing and export sales.” Indeed, arms manufacturers support it as a way of levelling the market playing field. The ATT could simply legitimize business as usual by more clearly demarcating it from a black market, and will not cover peripheral military hardware such as surveillance and IT systems. Some have argued that the treaty will be a mere distraction from the real problem of preventing arms reaching human-rights violators (D. P. Kopel et al., Penn State Law Rev. 114, 101-163; 2010).

So while there are good reasons to call for a strong ATT, it is no panacea. The real question is what a “responsible” arms trade could look like, if this isn’t merely oxymoronic. That would benefit from some hard research on how existing, ‘above-board’ sales have affected governance, political stability and socioeconomic conditions worldwide. Such quantification is challenging and contentious, but several starts have been made (for example, www.unidir.org and www.prio.no/nisat). We need more.

Tuesday, February 14, 2012

... but I just want to say this

With the shrill cries of new atheists ringing in my ears (you would not believe some of that stuff, but I won’t go there), I read John Gray’s review of Alain de Botton’s book Religion for Atheists in the New Statesman and it is as though someone has opened a window and let in some air – not because of the book, which I’ve not read, but because of what John says. Sadly you can’t get it online: the nearest thing is here.

Sunday, February 12, 2012

Moving swiftly on

This piece in the Guardian has caused a little storm, and I’m not so naive as to be totally surprised by that. There’s much I could say about it, but frankly it never helps. I’m tired of how little productive dialogue ever seems to stem from these things and figure I will just leave the damned business alone (no doubt to the delight of the more rabid detractors). I will say here only a few things about this pre-edited version, which was necessarily slimmed down to fit the slot in the printed paper: (1) to those who thought I was saying “hey, wouldn’t it be a great idea if sociologists studied religion”, note the reference to Durkheim as a shorthand way of acknowledging that this notion goes back a long, long way; (2) note that I’m not against everything Dawkins stands for on this subject – I agree with him on more than just the matter of faith schools mentioned below, although of course I do disagree with other things. The “for us or against us” attitude that one seems to see so much of in online discussions is the kind of infantile stance that I figure we should be leaving to the likes of George W. Bush.
______________________________________________________________

The research reported this week showing that American Christians adjust their concept of Jesus to match their own sociopolitical persuasion will surely surprise nobody. Liberals regard Christ primarily as someone who promoted fellowship and caring, say psychologist Lee Ross of Stanford University in California and his colleagues, while conservatives see him as a firm moralist. In other words, he’s like me, only more so.

Yes, it’s pointing out the blindingly obvious. Yet the work offers a timely reminder of how religious thinking operates – a reminder that some strident “new atheists” have so far resolutely resisted.

You might imagine that it’s uncontentious to suggest that religion is essentially a social phenomenon, not least because particular varieties of it – fundamentalist, tolerant, mystical – tend to develop within specific communities united by geography or cultural ties rather than arising at random throughout society. Without entering the speculative debate about whether religiosity has become hardwired by evolution, it seems clear enough that specific types of religious behaviour are as prone to be transmitted through social networks as are, say, obesity and smoking.

Bizarrely, this is ignored by some of the most prominent opponents of religion today. Arguments about science and religion are mostly conducted as if Emile Durkheim had never existed, and all that matters is whether or not religious belief is testable. Many atheists prefer to regard religion as a virus that jumps from one hapless individual to another, or a misdirection of evolutionary instincts – in any case, curable only with a strong shot of reason. These epidemiological and Darwinian models have an elegant simplicity that contamination with broader social and cultural factors would spoil. Yet the result is akin to imagining that, to solve Africa’s AIDS crisis, there is no point in trying to understand African societies.

Thus arch new atheist Sam Harris swatted away my suggestion that we might approach religious belief as a social construct with the contemptuous comment that I was saying something “either trivially true or obscurantist”. I find it equally peculiar that chemist Harry Kroto should insist that “I am not interested in why religion continues” while so devoutly wishing that it would not.

At face value, this apparent lack of interest in how religion actually manifests and propagates in society is odd coming from people who so loudly deplore its prevalence. But I think it may not be so hard to explain.

For one thing, regarding religion as a social phenomenon would force us to see it as something real, like governments or book groups, and not just a self-propagating delusion. It is so much safer and easier to ridicule a literal belief in miracles, virgin births and other supernatural agencies than to consider religion as (among other things) one of the ways that human societies have long chosen to organize their structures of authority and status, for better or worse.

It also means that one might feel compelled to abandon the heroic goal of dislodging God from his status as Creator in favour of asking such questions as whether particular socioeconomic conditions tend to promote intolerant fundamentalism over liberal pluralism. It turns a Manichean conflict between truth and ignorance into a mundane question of why some people are kind or beastly towards others. Yet to suggest that we can relax about some forms of religious belief – that they need offer no obstacle to an acceptance of scientific inquiry and discovery, and will not demand the stoning of infidels – is already, for some new atheists, to have conceded defeat. They will not have been pleased with David Attenborough’s gentle agnosticism on Desert Island Discs, although I doubt that they will dare say so.

The worst of it is that to reject an anthropological approach to religion is, in the end, unscientific. To decide to be uninterested in questions of how and why societies have religion, of why it has the many complexions that it does and how these compete, is a matter of personal taste. But to insist that these are pointless questions is to deny that this important aspect of human behaviour warrants scientific study. Harris’s preference to look to neuroscience – to the individual, not society – will only get you so far, unless you want to argue that brains evolved differently in Kansas (tempting, I admit).

Richard Dawkins is right to worry that faith schools can potentially become training grounds for intolerance, and that daily indoctrination into a particular faith should have no place in education. But I’m sure he’d agree that how people formulate their specific religious beliefs is a much wider question than that. The Stanford research reinforces the fact that a single holy book can provide the basis both for a permissive, enquiring and pro-scientific outlook (think tea and biscuits with Richard Coles) and for apocalyptic, bigoted ignorance (think a Tea Party with Sarah Palin). Might we then, as good scientists alert to the principles of cause and effect, suspect that the real ills of religion originate not in the book itself, but elsewhere?

Friday, February 10, 2012

Impractical magic

I have a review of a book about John Dee in the latest issue of Nature. Here's how it started.
_______________________________________________________
The Arch-Conjuror of England: John Dee
by Glyn Parry
Yale University Press, 2011
ISBN 978-0-300-11719-6
335 pages

The late sixteenth-century mathematician and alchemist John Dee exerts a powerful grip on the public imagination. In recent times, he has been the subject of several novels, including The House of Doctor Dee by Peter Ackroyd, and inspired the pop opera Doctor Dee by Damon Albarn of the group Blur. Now, in The Arch-Conjuror of England, historian Glyn Parry gives us probably the most meticulous account of Dee’s career to date.

In some ways, all this attention seems disproportionate. Dee was less important in the philosophy of natural magic than such lesser-known individuals as Giambattista Della Porta and Cornelius Agrippa, and less significant as a transitional figure between magic and science than his contemporaries Della Porta, Bernardino Telesio and Tommaso Campanella, the latter two anti-Aristotelian empiricists from Calabria. Dee’s works, such as the notoriously opaque Monas hieroglyphica, in which the unity of the cosmos was represented in a mystical symbol, were widely deemed impenetrable even in his own day.

There’s no doubt that Dee was prominent during the Elizabethan age – he probably provided the model for both Shakespeare’s Prospero and Ben Jonson’s Subtle in the satire The Alchemist. Yet what surely gives Dee his allure more than anything else is the same thing that lends glamour to Walter Raleigh, Francis Drake and Philip Sidney: they all fell within the orbit of Queen Elizabeth herself. Benjamin Woolley’s earlier biography of Dee draws explicitly on this connection, calling him ‘the queen’s conjuror’. Yet in a real sense he was precisely that, on and off, as his fortunes waxed and waned in the fickle, treacherous Elizabethan court.

There is no way to make sense of Dee without embedding him within the magical cult of Elizabeth, just as this holds the key to Spenser’s epic poem The Faerie Queene and to the flights of fancy in A Midsummer Night’s Dream. To the English, the reign of Elizabeth heralded the dawn of a mystical Protestant awakening. In Germany that dream died in the brutal Thirty Years’ War; in England it spawned an empire. Dee coined the phrase ‘the British Empire’, but his vision was less colonialist than a magical yoking of Elizabeth to the Arthurian legend of Albion.

It is one of the strengths of Glyn Parry’s book that he shows how deeply magic and the occult sciences were woven into the fabric of early modern culture. Elizabeth was particularly knowledgeable about alchemy. After all, why would a monarch who had no reason to doubt the possibility of transmuting base metals into gold pass up the chance to fill the royal coffers? Because she believed that Dee’s former associate, the slippery Edward Kelley, could make the philosopher’s stone, the queen was desperate to lure him back to England after he left with Dee for Poland and Prague in 1583. The Holy Roman Emperor Rudolf II was equally eager to keep Kelley in Bohemia, making him a baron. Even Dee’s involvement in the failed quest of the adventurer Martin Frobisher to find a northwest passage to the Pacific had an alchemical tint when it was rumoured that Frobisher had found gold-containing ore.

The relationship with Kelley is another element of the popular fascination with Dee. Kelley claimed to be able to converse with angels via Dee’s crystal ball, and Dee’s faith in Kelley’s prophecies and angelic commands never wavered even when the increasingly deranged Kelley told him that the angels had commanded them to swap wives. The inversion of the servant-master relationship as Kelley’s reputation grew in Bohemia makes Dee a pathetic figure towards the end of their ill-fated excursion on the continent – forced on them after Dee blundered in Elizabeth’s court.

He was always doing that. However brilliant his reputation as a magician and mathematician, Dee was hopeless at court politics, regularly backing the wrong horse. He ruined his chances in Prague by passing on Kelley’s angelic reprimand to Rudolf for his errant ways. But Dee can’t be blamed entirely. Parry makes it clear just how miserable it was for any courtier trying to negotiate the subtle currents of the court, especially in England where the memory of Mary I’s brief and bloody reign still hung in the air along with a lingering fear of papist plots.

For all its meticulousness, Parry’s account doesn’t always give the details shape. Often the political intrigues become as baffling and Byzantine for the reader as they must have been for Dee. But what I really missed was context. It is hard to locate Dee in history without hearing about other contemporary figures who also sought to expand natural philosophy, such as Della Porta and Francis Bacon. Bacon in particular was another intellectual whose grand schemes and attempts to gain the queen’s ear were hampered by court rivalries.

But to truly understand Dee’s significance, we need more than the cradle-to-grave story. For example, although Parry patiently explains the numerological and symbolic mysticism of Dee’s Monas hieroglyphica, its preoccupation with divine and Adamic languages can seem sheer delirium if not linked to, say, the later work of the German Jesuit Athanasius Kircher (the most Dee-like figure of the early Enlightenment) or of John Wilkins, one of the Royal Society’s founders.

Likewise, it would have been easier to evaluate Dee’s mathematics if we had been told that this subject had, even until the mid-seventeenth century, a close association both with witchcraft and with mechanical ingenuity, at which Dee also excelled. Wilkins’ Mathematical Magick (1648) was a direct descendant of Dee’s famed Mathematical Preface to the first English translation of Euclid’s Elements. We’d never know from this book that Dee influenced the early modern scientific world via the likes of Robert Fludd, Elias Ashmole and Margaret Cavendish, nor that his works were studied by none other than Robert Boyle, and probably by Isaac Newton. Parry has assembled an important contribution to our understanding of how magic became science. It’s a shame he didn’t see it as part of his task to make that connection.