Monday, March 17, 2014

The value of ambiguity

Here's my latest piece for "Under the Radar" at BBC Future.

____________________________________________________

Listen, I’m going to be straight with you. Well, that’s what I’d intended, but already language has got in the way – you’re not “listening” at all, and “straight” has so many meanings that you should be unsure what is going to follow. All the same, I doubt if any of you thought this meant I was going to stand to attention or be rigorously heterosexual. Language is ambiguous – and yet we cope with it.

But surely that’s a bit of a design flaw, right? We use language to communicate, so shouldn’t it be geared towards making that communication as clear and precise as possible, so that we don’t have to figure out the meaning from the context or forever ask “Say that again?” Imagine a computer language that works like a natural language – would the silicon chips have a hope of catching our drift?

Yet the ambiguity of language isn’t a problem foisted on it by the corrupting contingencies of history and use, according to complex-systems scientists Ricard Solé and Luís Seoane of the Pompeu Fabra University in Barcelona, Spain. They say that it is an essential part of how language works. If real languages were too precise and well defined, so that every word referred to one thing only, they would be almost unusable, the researchers say, and we’d struggle to communicate ideas of any complexity.

That linguistic ambiguity has genuine value isn’t a new idea. Cognitive scientists Ted Gibson and Steven Piantadosi of the Massachusetts Institute of Technology have previously pointed out that a benefit of ambiguity is that it enables economies of language: things that are obvious from the context don’t have to be pedantically belaboured in what is said. What’s more, they argued, words that are easy to say and interpret can be “reused”, so that more complex ones aren’t required.

Now Solé and Seoane show that another role of ambiguity is revealed by the way we associate words together. Words evoke other words, as any exercise in free association will show you. The ways in which they do so are often fairly obvious – for example, through similarity (synonymy) or opposition (antonymy). “High” might make you think “low”, or “sky”, say. Or it might make you think “drugs”, or “royal”, which are semantic links to related concepts.

Solé and Seoane look at the intersecting networks formed from these semantic links between words. There are various ways to plot these out – either by searching laboriously through dictionaries for associations, or by asking people to free-associate. There are already several data sets of semantic networks freely available, such as WordNet, which use fairly well-defined rules to determine the links. It’s possible to find paths through the network from any word to any other, and in general there will be more than one connecting route. Take the case of the words “volcano” and “pain”: on WordNet they can be linked via “pain-ease-relax-vacation-Hawaii-volcano” or “pain-soothe-calm-relax-Hawaii-volcano”.
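
To make the idea concrete, here is a minimal sketch (my own toy adjacency list echoing the example above, not the real WordNet data) that finds a shortest chain of associations by breadth-first search:

from collections import deque

# Toy semantic network: each word lists its associates.
links = {
    "pain": ["ease", "soothe"],
    "ease": ["pain", "relax"],
    "soothe": ["pain", "calm"],
    "calm": ["soothe", "relax"],
    "relax": ["ease", "calm", "vacation", "Hawaii"],
    "vacation": ["relax", "Hawaii"],
    "Hawaii": ["relax", "vacation", "volcano"],
    "volcano": ["Hawaii"],
}

def shortest_chain(start, goal):
    """Breadth-first search: returns one shortest path of associations."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in links.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])

print(shortest_chain("pain", "volcano"))
# ['pain', 'ease', 'relax', 'Hawaii', 'volcano']  (one of several routes)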

A previous study found that WordNet’s network has the mathematical property of being “scale-free”. This means that there is no real average number of links per word. Some words have lots of links, most have hardly any, and there is everything in between. There’s a simple mathematical relationship between the probability of a word having k connections (P(k)) and the value of k itself: P(k) is proportional to k raised to a negative power, the size of the exponent in this case being approximately 3. This is called a power law.
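
In symbols (the standard notation for scale-free networks, not notation taken from the paper itself):

% Scale-free degree distribution: the chance that a word has k links
% decays as a power of k.
P(k) \propto k^{-\gamma}, \qquad \gamma \approx 3

On a log-log plot this relationship appears as a straight line of slope −γ, which is how such distributions are usually spotted in data.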

A network in which the links are apportioned this way has a special feature: it is a “small world”. This means that it’s just about always possible to find shortcuts that will take you from one node of the network (one word) to any other in just a small number of hops. It’s the highly connected, common words that provide these shortcuts. Some social networks seem to have this character too, which is why we speak of the famous “six degrees of separation”: we can be linked to just about anyone on the planet through just six or so acquaintances.

Solé and Seoane now find that the semantic network is a small world only when it includes words that have more than one meaning (in linguistic terms, polysemy). Take away polysemy, the researchers say, and the route between any pair of words chosen at random becomes considerably longer. By having several meanings, polysemous words can connect clusters of concepts that otherwise might remain quite distinct (just as “right” joins words about spatial relations to words about justice). Again, much the same is true of our social networks, which seem to be “small” because we each have several distinct roles or personas – as professionals, parents, members of a sports team, and so on. Because each of us thereby links quite different social groups, the web is easy to navigate.
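
A minimal sketch of that finding, using a toy graph of my own devising (the networkx library, assumed to be installed, handles the path-finding):

import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("left", "up"), ("left", "right"), ("up", "right"),          # spatial cluster
    ("right", "justice"), ("right", "law"), ("justice", "law"),  # justice cluster
])

print(nx.shortest_path(G, "up", "law"))  # ['up', 'right', 'law']

G.remove_node("right")                   # delete the polysemous hub
print(nx.has_path(G, "up", "law"))       # False: the clusters fall apart

In a real semantic network, removing polysemy would usually leave longer detours rather than outright disconnection, but the moral is the same: ambiguous words are the shortcuts.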

The small-world character of social networks helps to make them efficient at spreading and distributing information. For example, it makes them “searchable”, so that if we want advice on bee-keeping, we might well have a friend who has a bee-keeping friend, rather than having to start from scratch. By the same token Solé and Seoane think that small-world semantic networks make language efficient at enabling communication, because words with multiple meanings make it easier to put our thoughts into words. “We browse through semantic categories as we build up conversations”, Seoane explains. Let’s say we’re talking about animals. “We can quickly retrieve animals from a given category (say reptiles) but the cluster will soon be exhausted”, he says. “Thanks to ambiguous animals that belong to many categories at a time, it is possible to radically switch from one category to another and resume the search in a cluster that has been less explored.”

What’s more, the researchers argue that the level of ambiguity we have in language is at just the right level to make it easy to speak and be understood: it represents an ideal compromise between the needs of the speaker and the needs of the listener. If every single object and concept has its own unique word, then the language is completely unambiguous – but the vocabulary is huge. The listener doesn’t have to do any guessing about what the speaker is saying, but the speaker has to say a lot. (For example, “Come here” might have to be something like “I want you to come to where I am standing.”) At the other extreme, if the same word is used for everything, that makes it easy for the speaker, but the listener can’t tell if she is being told about the weather or a rampaging bear.
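
Here is a back-of-envelope way to see the trade-off (a toy model of my own, not the authors’, assuming meanings are equally likely and each word covers the same number of them). Listener effort is the residual uncertainty, in bits, about which meaning a word picks out; speaker effort is the size of the lexicon that must be learned and produced:

from math import log2

def costs(n_meanings, meanings_per_word):
    """Return (lexicon size, listener's residual uncertainty in bits)."""
    lexicon = max(1, n_meanings // meanings_per_word)  # speaker's burden
    ambiguity_bits = log2(n_meanings / lexicon)        # listener's guesswork
    return lexicon, ambiguity_bits

for meanings_per_word in (1, 10, 1000):
    print(meanings_per_word, costs(1000, meanings_per_word))
# 1    -> (1000, 0.0)    unambiguous, but a huge vocabulary
# 10   -> (100, ~3.3)    modest lexicon, a few bits of guesswork
# 1000 -> (1, ~10.0)     one word for everything: maximal guesswork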

Either way, communication is hard. But Solé and Seoane argue that with the right amount of polysemy, and thus ambiguity, the two can find a good trade-off. What’s more, it seems that this compromise also brings the advantage of “collapsing” semantic space into a denser net that allows us to make fertile connections between disparate concepts. We have even arguably turned this small-world nature of ambiguity into an art form – we call it poetry. Or as you might put it,
Words, after speech, reach
Into the silence. Only by the form, the pattern,
Can words or music reach
The stillness.

Reference: R. V. Solé & L. F. Seoane, preprint http://arxiv.org/abs/1402.4802 (2014).

Sunday, March 16, 2014

Some enlightenment on Giordano Bruno

I hear that the relaunched Cosmos TV series has included a little hagiography of Giordano Bruno as a martyr to Copernican science, and I sigh. If I were a sensible chap, I would simply accept that this myth is now never going to be squashed, because it seems to be too important to many people as a means of “showing” how the Roman Church was determined to stamp out the kind of independent and anti-dogmatic thought that supposedly gave rise to modern science. In short, Bruno fills the same role here that martyrs did for the early Christians, who needed them to sustain their own faith.

But I am not a sensible chap, because I persist with this fantasy that one day everyone will be persuaded to go back and look at the history and see that this portrayal of Bruno is a (relatively) modern invention – an aspect of the nineteenth-century Draper-White narrative that pitched science in head-on combat with the Church. I am foolish enough to imagine that what I wrote in my book Curiosity is actually going to be read and heeded:
“The Neapolitan friar Giordano Bruno had an arrogant and argumentative nature that was bound to get him into serious trouble eventually, although if he had not happened to promote Copernican cosmology it is doubtful that he would command any greater fame today than the many other intellectual vagabonds who wandered Europe during the Counter-Reformation. It seems a vain hope that Bruno should ever cease to be the ‘martyr to science’ that modern times have made of him; maybe we must resign ourselves to the words spoken by Brecht’s Galileo: ‘Unhappy the land where heroes are needed.’
The fact is that Bruno’s Copernicanism is not mentioned in the charges levelled against him by the Inquisition in 1576, nor the denunciation of 1592 that led to his imprisonment and lengthy trial. Of the heretical accusations that condemned him to be burnt at the stake in 1600, only two are still recorded, which relate to obscure theological matters. He held many opinions of which the Church disapproved deeply, on such delicate matters as the Incarnation and the Trinity, not to mention having a long history of associating with disreputable types. Bruno’s death stains the Church’s record of tolerance for free thought, but says little about its attitude to science. There is nothing in Bruno’s espousal of a world soul, or his long discourses on demons and other spiritual beings, or his unconventional system of the elements, that makes him so very unusual for his times – but nothing either that qualifies him for canonization in the scientific pantheon.”

But then – praise be! – I see that others have done the job already, and better than I could. Corey Powell at Discover magazine has set the record straight on Bruno, attacking this old Whig view of science history. Meg Rosenburg has posted a nice piece on Bruno too. And best of all, Rebekah Higgitt has written a masterful article in her Guardian blog about why this kind of appropriation of history to serve our modern agenda is invariably false and damaging to the historical record. As she puts it, “Historical figures who lived in a very different world, very differently understood, cannot be turned into heroes who perfectly represent our values and concerns without doing serious damage to the evidence.” And this is really the point, for I’m tired of, and I fear a little cross with, scientists who seem to think that being scrupulous with the evidence only applies to science and not to something as wishy-washy as the humanities. So hurrah to all three of you!

And I couldn’t help but be struck by how, at the same time, we have Brendan O’Neil (who I can’t say I always agree with) taking Richard Dawkins to task by pointing out how the Enlightenment was not, as many alleged champions of “Enlightenment values” like to insist today, about attacking religion, but rather about demanding religious tolerance and the freedom to worship as one pleases. But Brendan doesn’t take this point far enough. For the one thing Enlightenment heroes like Voltaire and Rousseau could not abide was atheism. The Enlightenment is as abused an historical notion as Bruno’s “martyrdom” is – by much the same people and for much the same reasons. And so this motivates me to post here what I said about all this at the How The Light Gets In festival at Hay-on-Wye last summer, as part of a debate on optimism, pessimism and the legacy of the Enlightenment. Here it is.

Yes, I’m fool enough to think that this might stop some folk from banging on about “Enlightenment values.” And yes, I know that this is deeply irrational of me.

_______________________________________________________________________
“Nasty, brutish and short”: How The Light Gets In Festival, panel discussion, 1st June 2013, Hay-on-Wye.

I’ve been trying to parse the title of this discussion ever since I saw it. The blurb says “The Enlightenment taught us to believe in the optimistic values of humanism, truth and progress” – but of course the title, which sounds a much more pessimistic note, comes from Thomas Hobbes’ Leviathan, and yet Hobbes too is very much a part of the early Enlightenment. You might recall that the phrase was Hobbes’ description of life under what he called the State of Nature: the way people live if left to their own devices, without any overarching authority to temper their instincts to exploit one another.

That scenario established the motivation for Hobbes’ attempt to deduce the most reliable way to produce a stable society. And what marks out Hobbes’ book as a key product of the Enlightenment is that he tried to develop his argument not, as previous political philosophies going back to Plato had done, according to preconceptions and prejudices, but according to strict, quasi-mathematical logic. Hobbes’ Commonwealth is a Newtonian one – or rather, to avoid being anachronistic, a Galilean one, because he attempted to generalize his reasoning from Galileo’s law of motion. This was to be a Commonwealth governed by reason. And let me remind you that what this reason led Hobbes to conclude is that the best form of government is a dictatorship.

Now of course, this sort of exercise depends crucially on what you assume about human nature from the outset. If, like Hobbes, you see people as basically selfish and acquisitive, you’re likely to end up concluding that those instincts have to be curbed by drastic measures. If you believe, like John Locke, that humankind’s violent instincts are already curbed by an intrinsic faculty of reason, then it becomes possible to imagine some kind of more liberal, communal form of self-government – although of course Locke then argued that state authority is needed to safeguard the private property that individuals accrue from their efforts.

Perhaps the most perceptive view was that of Rousseau, who argued in effect that there is no need for some inbuilt form of inhibition to prevent people acting anti-socially, because they will see that it is in their best interests to cooperate. That’s why agreeing to abide by a rule of law administered by a government is not, as in Hobbes’ case, an abdication of personal freedom, but something that people will choose freely: it is the citizen’s part of the social contract, while the government is bound by this contract to act with justice and restraint. This is, in effect, precisely the kind of emergence of cooperation that is found in modern game theory.

My point here is that reasoning about governance during the Enlightenment could lead to all kinds of conclusions, depending on your assumptions. That’s just one illustration of the fact that the Enlightenment doesn’t have anything clear to say about what people are like or how communities and nations should be run. In this way and in many others, the Enlightenment has no message for us – it was too diverse, but more importantly, it was much too immersed in the preoccupations of its times, just like any other period of history. This is one reason why I get so frustrated about the way the Enlightenment is used today as a kind of shorthand for a particular vision of humanity and society. What is most annoying of all is that that vision so often has very little connection with the Enlightenment itself, but is a modern construct. Most often, when people today talk about Enlightenment values, they are probably arguing in favour of a secular, tolerant liberal democracy in which scientific reason is afforded a special status in decision-making. I happen to be one of those people who rather likes the idea of a state of that kind, and perhaps it is for this reason that I wish others would stop trying to yoke it to the false idol of some kind of imaginary Enlightenment.

To state the bleedin’ obvious, there were no secular liberal democracies in the modern sense in eighteenth-century Europe. And the heroes of the Enlightenment had no intention of introducing them. Take Voltaire, one of the icons of the Enlightenment. Voltaire had some attractive ideas about religious tolerance and separation of church and state. But he was representative of such thinkers in opposing any idea that reason should become a universal basis for thought. It was grand for the ruling classes, but far too dangerous to advocate for the lower orders, who needed to be kept in ignorance for the sake of the social order. Here’s what he said about that: “the rabble… are not worthy of being enlightened and are apt for every yoke”.

What about religion, then? Let’s first of all dispose of the idea that the Enlightenment was strongly secular. Atheism was very rare, and condemned by almost all philosophers as a danger to social stability. Rousseau called for religious tolerance, but not for atheists, who he argued should be banished from the state because their lack of fear of divine punishment means that they can’t be trusted to obey the laws. Even people who affirm the religious dogmas of the state but then act as if they don’t believe them should, he said, be put to death.

Voltaire has been said to be a deist, which means that he believed in a God whose existence can be deduced by reason rather than revelation, and who made the world according to rational principles. According to deists, God created the world but then left it alone – he wasn’t constantly intervening to produce miracles. It’s sometimes implied that Enlightenment deism was the first step towards secularism. But contrary to common assertions, there wasn’t any widespread deist movement in Europe at that time. And again, even ideas like this had to be confined to the better classes: the message of the church should be kept simple for the lower orders, so that they didn’t get confused. Voltaire said that complex ideas such as deism are suited only “among the well-bred, among those who wish to think.”

Well, enough Enlightenment-bashing, perhaps – but then why do we have this myth of what these people thought? Partly that comes from the source of most of our historical myths, which is Victorian scholarship. The simple idea that the Enlightenment was some great Age of Reason is now rejected by most historians, but the popular conception is still caught up with a polemical view developed in particular by two nineteenth-century Americans, John William Draper and Andrew Dickson White. Draper was a scientist who decided that scientific principles could be applied to history, and his 1862 book History of the Intellectual Development of Europe was a classic example of Whiggish history in which humankind makes a long journey out of ignorance and superstition, through an Age of Faith, into a modern Age of Reason. But where we really enter the battleground is with Draper’s 1874 book History of the Conflict between Religion and Science, in which we get the stereotypical picture of science having to struggle against the blinkered dogmatism of faith – or rather, because Draper’s main target was actually Catholicism, against the views of Rome, because Protestantism was largely exonerated. White, who founded Cornell University, gave much the same story in his 1896 book A History of the Warfare of Science with Theology in Christendom. It’s books like this that gave us the simplistic views on the persecution of Galileo that get endlessly recycled today, as well as myths such as the martyrdom of Giordano Bruno for his belief in the Copernican system. (Bruno was burnt at the stake, but not for that reason.)

The so-called “conflict thesis” of Draper and White has been discredited now, but it still forms a part of the popular view of the Enlightenment as the precursor to secular modernity and to the triumph of science and reason over religious dogma.

But why, if these things are so lacking in historical support, do intelligent people still invoke the Enlightenment trope today whenever they fear that irrational forces are threatening to undermine science? Well, I guess we all know that our critical standards tend to plummet when we encounter ideas that confirm our preconceptions. But it’s more than this. It is one thing to argue for how we would prefer things to be, but far more effective to suggest that things were once like that, and that this wonderful state of affairs is now being undermined by ignorant and barbaric hordes. It’s the powerful image of the Golden Age, and the rhetoric of a call to arms to defend all that is precious to us. What seems so regrettable and ironic is that the casualty here is truth, specifically the historical truth, which of course is always messy and complex and hard to put into service to defend particular ideas.

Should we be optimistic or pessimistic about human nature? Well – big news! – we should be both, and that’s what history really shows us. And if we want to find ways of encouraging the best of our natures and minimizing the worst, we need to start with the here and now, and not by appeal to some imagined set of values that we have chosen to impose on history.

Thursday, March 13, 2014

Searching in the dark

Here’s a kind of initial edit of my piece on invisibility for Nautilus.

______________________________________________________________________

In unseen worlds, science crosses paths with myth.

It seems almost tautological to say that for centuries scientists studied light in order to comprehend the visible world. Why are things coloured? What is a rainbow? How do our eyes work? And what is light itself? These are questions that have preoccupied scientists and philosophers since the time of Aristotle, Isaac Newton, Michael Faraday, Thomas Young and James Clerk Maxwell among them.

But in the late nineteenth century all that changed, and it was largely Maxwell’s doing. This was the period in which the whole focus of physics – still emerging as a distinct scientific discipline – shifted from the visible to the invisible. Light itself was instrumental to that change.

Physics has never looked back from that shift. Today its theories and concepts are concerned largely with invisible entities: fields of force, rays outside our visual perception, particles too small to see even in the most advanced microscopes, ideas of unseen parallel worlds, and mysterious entities named for their very invisibility: dark matter and dark energy.

Things that we can’t see or touch once belonged to the realm of the occult. This simply meant that they were hidden, not necessarily that they were supernatural. But the occult became the hiding place for all kinds of imaginary and paranormal phenomena: ghosts, spirits and demons, telepathy and other ‘psychic forces’. These things seem now to be the antithesis of science, but when science first began to fixate on invisible entities, many leading scientists saw no clear distinction between such occult concepts and hard science.

To make sense of the unseen, we have to look for narratives. This means we fall back on old stories, enshrined in myth and folklore. It’s rarely acknowledged or appreciated that scientists still do this when they are confronted by mysteries and gaps in their knowledge. Those myths aren’t banished as science advances, but simply reinvented.

Occult light

What was it about light that impelled this swerve towards the invisible? In the early nineteenth century Faraday introduced the idea of a field – an invisible, pervasive influence – to explain the nature of electricity and magnetism. In the 1860s Maxwell wrote down a set of equations showing how electricity and magnetism are related. Maxwell’s equations implied that disturbances in these coupled fields – electromagnetic waves – would move through space at the speed of light. It was quickly apparent that these waves in fact are light.
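
In modern textbook notation (not Maxwell’s own), the vacuum equations combine into a wave equation whose speed is fixed by two measurable constants of electricity and magnetism:

% In vacuum, Maxwell's equations yield a wave equation for the electric
% field E, propagating at a speed c set by the permeability and permittivity:
\nabla^2 \mathbf{E} = \mu_0 \varepsilon_0 \, \frac{\partial^2 \mathbf{E}}{\partial t^2},
\qquad c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}} \approx 3 \times 10^8 \ \mathrm{m/s}

The electrical constants give exactly the measured speed of light, which is what persuaded Maxwell that the two phenomena were one.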

But whereas visible light has wavelengths of between about 400 and 700 millionths of a millimetre, Maxwell’s equations showed that there was no obvious limit to the wavelength that electromagnetic waves can have. They may exist beyond both limits of the visible range – where we can’t see them.
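
Converting between wavelength and frequency is simple arithmetic (frequency equals the speed of light divided by wavelength); a quick sketch, with illustrative values of my own choosing:

c = 3.0e8  # speed of light, m/s

for name, wavelength_m in [("violet light", 400e-9),
                           ("red light", 700e-9),
                           ("a radio wave", 3.0)]:
    print(f"{name}: {c / wavelength_m:.3g} Hz")
# violet light: 7.5e+14 Hz
# red light: 4.29e+14 Hz
# a radio wave: 1e+08 Hz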

These predictions were soon confirmed. In 1887, the German scientist Heinrich Hertz showed that oscillations of electrical current could give rise to long-wavelength radiation, which became known as radio waves. It took less than a decade for the Italian Guglielmo Marconi to show that radio waves could be used to transmit messages across vast distances.

It’s hard to appreciate now how revolutionary this was, not just practically but conceptually. Previously, messages beyond shouting range had to be sent either by a physical letter or as pulses of electricity down telegraph wires. The telegraph was already extraordinary enough, but still it required a physical link between sender and receiver. With radio, one could communicate wirelessly through ‘empty space’.

It is no coincidence that these discoveries happened at the height of the Victorian enthusiasm for spiritualism, in which mediums claimed to be able to contact the souls of the dead. If radio waves could transmit invisibly between a broadcasting device and a receiver, was it so hard to imagine that human brains – which are after all quickened by electrical nerve signals – could act as receivers?

But what of the senders? Already scientists, familiar with the concept of invisible fields, had begun to speculate about non-material beings that inhabit an unseen plane of existence. Maxwell’s friends Peter Guthrie Tait and Balfour Stewart, both professors of physics, published The Unseen Universe (1875), in which they presented the ether of Maxwell’s waves as a bridge between the physical and spiritual worlds. Some of the pioneers of the telegraph had already drawn parallels with spiritualism, calling it ‘celestial telegraphy’. Now the wireless spawned a vision of empty space – still thought to be filled with a tenuous fluid called the ether that carried electromagnetic waves – as being alive with voices, the imprint of invisible intelligences. All you had to do was tune in, just as radio enthusiasts would scan the airwaves for crackly, half-heard snatches of messages from Helsinki or Munich. Rudyard Kipling’s short story “Wireless” (1902) described a man who, feverish from tuberculosis, becomes a receiver for fragments of a poem by Keats, while elsewhere in the house a group of amateur radio operators picks up broadcasts from a nearby ship. While the ‘telegraph line’ of the spiritualist medium offered the comfort of words from departed loved ones, now the wireless seemed instead to make the spirit world a source of impersonal, often meaningless fragments adrift in an unheeding universe.

The discovery of X-rays by Wilhelm Röntgen in 1895 stimulated these imaginings yet more. X-rays, it soon became clear, were invisible rays at the other end of the spectrum from radio, with wavelengths much shorter than those of light. But what made X-rays so astonishing and evocative was that they not only were invisible but revealed the invisible – not least, the bones beneath our flesh, in an unnerving presentiment of death. In the late 1890s people flocked to public demonstrations at shows such as the Urania in Berlin or Thomas Edison’s stage spectacles in New York to watch their skeletons appear on fluorescent, X-ray-sensitive screens. X-ray photography seemed a straightforward extension of the ‘spirit photography’ that had become popular in the 1870s and 80s (faked or genuinely inadvertent double exposures), confirming the photographic emulsion as a ‘sensitive medium’ that could render the invisible visible. Others claimed to see evidence of new types of invisible rays recorded in photographs, and even to be able to photograph ‘thought forms’ and souls.


At the fin de siècle, invisible rays were everywhere, and no claim seemed too extravagant. There were cathode rays and anode rays, wholly spurious radiations such as N-rays and ‘black light’ (although ultraviolet light also acquired that name), and most famously, the ‘uranic rays’ that Henri Becquerel discovered coming from uranium in 1896. These streamed in an unchecked and unquenchable flow, suggesting a tremendous hidden source of energy that, through the work of Pierre and Marie Curie, Ernest Rutherford and others, was eventually traced to the nuclear heart of the atom. The Curies named this phenomenon ‘radioactivity’.

There was an old cultural preconception that invisible ‘emanations’ could have life-enhancing agency, whether these were the ‘virtues’ ascribed to medicinal herbs in the Middle Ages or the ‘animal magnetism’ or ‘mesmeric force’ of the eighteenth-century German physician Franz Anton Mesmer. We shouldn’t be surprised, therefore, that at first radioactivity too was widely believed to have miraculous healing powers. “Whatever your Ill, write us”, said the Nowata Radium Sanitarium Company in 1905, “Testimonials of Cases cured will be sent you.” ‘Therapeutic’ radium was added to toothpastes and cosmetics, and spa towns proudly advertised the radioactivity (from naturally occurring radon) in their waters. It wasn’t until the 1920s that quite the opposite was found to be the case: too late to save Marie Curie herself, or the Radium Girls – factory workers who had for a decade been licking paintbrushes dipped in radioactive paint for the dials of watches.

Ghost factories

What all these discoveries told us was that the universe we perceive is only a small part of what is ‘out there’. There was a long tradition of ‘spirit worlds’ going back at least to the Middle Ages, when it was common belief that invisible and probably malevolent demons lurked all around us. These beliefs provided the unconscious template for making sense of the ‘invisible universe’, so that leading scientists such as the physicist William Barrett, who co-founded the Society for Psychical Research in 1882, could write a book like On the Threshold of the Unseen (1917) in which he proposed the existence of human-like invisible ‘elementals’. Another physicist, Edmund Fournier d’Albe, put forward the theory that the human soul is composed of invisible particles called ‘psychomeres’ possessing a rudimentary kind of intelligence. He suggested that this hypothesis could account for paranormal phenomena such as ghosts and fairies.

One of the most prominent of these ‘psychical’ scientists was William Crookes. A chemist and entrepreneur who served as the President of the Royal Society between 1913 and 1915, Crookes became famous when he discovered the new chemical element thallium in 1861. Yet he seems to have been particularly credulous of spiritualists’ claims, if not in fact even collusive with them. He was taken in by several mediums, including the famous Florence Cook – like many mediums a striking young woman who found it easy to manipulate the judgement of Victorian gentlemen of more advanced years. Crookes was convinced that “there exist invisible intelligent beings, who profess to be spirits of deceased people” (he evidently took this to be the sceptical view). To investigate the ‘psychic force’ that he thought mediums commanded, Crookes invented a device called the radiometer or ‘light mill’, in which delicate vanes attached to a pivot would rotate when illuminated by light. Although the rotation was not, as at first thought, due to the ‘pressure’ exerted by light itself, that pressure is a real enough phenomenon, and the radiometer helped to establish it as such. It was thus an instrument motivated by a belief in the paranormal that prompted some genuinely useful scientific work.

The same may be said of Crookes’ ‘radiant matter’, allegedly a “fourth form of matter” somewhere between ordinary material and pure light. In 1879 he claimed that this stuff existed in “the shadowy realm between known and unknown”, and suspected that it, like the ether, might be a bridge to the spirit world.


Radiant matter was another figment of Crookes’ over-active imagination. But this too bore fruit. He invoked radiant matter to explain a mysterious region inside gas discharge tubes called the ‘dark space’. But it turned out that this dark region was instead caused by cathode rays, and Crookes’ research on this phenomenon led ultimately to the discovery of electrons and X-rays and, coupled with Marconi’s radio broadcasting, to the development of television. Indeed, several of the early pioneers of television were motivated by their paranormal sympathies, whether it was Crookes refining the cathode ray tube, Fournier d’Albe devising his own idiosyncratic televisual technology, or John Logie Baird, usually regarded as the device’s real inventor, who believed he was in spiritualistic contact with the departed spirit of Thomas Edison.

It is tempting to regard all this as a kind of late-Victorian delirium that engulfed dupes like Crookes – not to mention Arthur Conan Doyle, who famously believed in the photographs of the “Cottingley Fairies”, faked by two teenaged girls in Yorkshire. But there was more to it than that.

For one can argue that radio communication was simply representative of all modern media, which are ghost factories, forever manufacturing what in 1886 the psychic researcher Frederic W. H. Myers called “phantasms of the living”: disembodied replicas of ourselves, ready to speak on our behalf. Radio could conjure the illusion that the prime minister, or a film star, had become manifest, though disembodied like a phantom, in your sitting room.

How much more potent the illusion was, then, when you could see electronic ghosts as well as hear them. It might have seemed natural and harmless enough to refer to the double images of early television sets, caused by poor reception or bad synchronization of the electron beam, as ‘ghosts’ – but this terminology spoke to, and fed, a common suspicion that the figures you saw on the screen might not always correspond to real people. After all, they might already be dead. News reporters flocked to the home of Jerome E. Travers of Long Island in December 1953 to witness the face of an unknown woman who had appeared on the screen and wouldn’t vanish even when the set was unplugged. (The family had turned the screen towards the wall, as if in disgrace.)

By appearing to transmit our presence over impossible reaches of time and space, and preserving our image and voice beyond death, these media subvert the laws that for centuries constrained human interaction by requiring the physical transport of a letter or the person themself. We submit to the illusion that the voice of our beloved issues from the phone, that the Skyped image conjured on the screen by light-emitting diodes is the far-away relative in the flesh.

Who could possibly be surprised, then, that the internet throngs with ghosts – that, as folklore historian Owen Davies says, “cyberspace has become part of the geography of haunting”. Here too the voices and images of the dead may linger indefinitely; here too pseudonymous identities are said to speak from beyond the grave. More even than the telephone and television, the internet, that invisible babble of voices, seems almost designed to house spirits, which after all are no more ethereal than our own cyber-presence.

Hidden worlds

Contemplating the forms attributed to new invisible phenomena a hundred and more years ago should give us pause when we come to the phantasmal worlds of modern science. For we are still generating them, and their manifestations are familiar. It’s undoubtedly true that our everyday perceptions grant us access to only a tiny fraction of reality. Telescopes responding to radio waves, infrared radiation and X-rays have vastly expanded our view of the universe, while electron microscopes and even finer probes of nature’s granularity have populated the unseeably minuscule microworld. However, our theories and imaginations don’t stop there, and each feeds the other in ways we do not always fully appreciate.

Take, for example, the Many Worlds interpretation of quantum mechanics. There’s no agreement about quite how to interpret what quantum theory tells us about the nature of reality, but the Many Worlds interpretation has plenty of influential adherents. It supposes the existence of parallel universes that embody all the possible outcomes allowed by the equations describing a quantum system. According to physicist Max Tegmark of MIT, “it predicts that one classical reality gradually splits into superpositions of many such realities.” The idea is derived from the work of physicist Hugh Everett in the 1950s – but Everett himself never spoke of “many worlds”. At that time, the prevailing view in quantum theory was that, when you make a measurement on a quantum system, this selects just one of the possible outcomes enumerated in the mathematical entity called the wavefunction – a process called “collapsing the wavefunction”. The problem was that there was nothing in the theory to cause this collapse – you had to put it in “by hand”. Everett made the apparently innocuous suggestion that perhaps there is in fact no collapse: that all the other possible outcomes also have a real physical existence. He never really addressed the question of where those other states reside, but his successors had no qualms about building up around each of them an entire universe, identical to our own in every respect except for that one aspect. Every quantum event causes these parallel universes to proliferate, so that “the act of making a decision causes a person to split into multiple copies”, according to Tegmark. (More properly, they have always existed, but it’s just that things evolve differently in each of them.)
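
In schematic form (standard textbook notation, nothing taken from Everett’s own papers), the issue looks like this:

% A measured quantum state is a superposition of possible outcomes;
% textbook "collapse" keeps a single term, with nothing in the theory
% to say how or why the jump happens:
|\psi\rangle = \sum_i c_i \, |i\rangle
\;\longrightarrow\; |k\rangle \quad \text{with probability } |c_k|^2

The arrow is the collapse: the theory supplies the superposition on the left and the probabilities, but no mechanism for the jump to a single |k⟩. Everett’s proposal simply deletes the arrow and keeps every term.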

The problem is that this idea itself collapses into incoherence when you try to populate it with sentient beings. It’s not (as sometimes implied) that there are alternative versions of us in these many worlds – they are all in some sense us, but there’s no prescription for where to put our apparently unique consciousness. This conundrum arises not (as some adherents insist) as an inevitable result of “taking the math seriously”, but simply because of the impulse, motivated by neither experiment nor theory, to make each formal mathematical expression a ‘world’ of its own, invisible from ‘this one’. That is done not for any scientific reason, but simply because it is what, in the face of the unknown, we have always done.

Much the same consideration applies to the concept of brane worlds. This arises from the most state-of-the-art variants of string theory, which attempt to explain all the known particles and forces in terms of ultra-tiny entities called strings, which can be envisioned as particles extended into little strands that can vibrate. Most versions of the theory call for variables in the equations that seem to have the role of extra dimensions in space, so that string theory posits not four dimensions (of time and space) but eleven. As physicist and writer Jim Baggott points out, “there is no experimental or observational basis for these assumptions” – the “extra dimensions” are just formal aspects of the equations. However, the latest versions of the theory suggest that these extra dimensions can be extremely large, making these so-called extra-dimensional ‘branes’ (short for membranes) potential repositories for alternative universes, separated from our own like the stacked leaves of a book. Inevitably, there is an urge to imagine that these places too might be populated with sentient beings, although that’s optional. But the point is that these brane worlds are nothing more than mathematical entities in speculative equations, incarnated, as it were, as invisible parallel universes.

Dark matter and dark energy are more directly motivated by observations of the real world. Dark matter is apparently needed to account for the gravitational effects that seem to come from parts of space where no ordinary matter is visible, or not enough to produce that much of a tug. For example, rotating galaxies seem to have some additional source of gravitational attraction, beyond the visible stars and gas, that stops them from flying apart. The ‘lensing’ effect by which distant astrophysical objects get distorted by the gravitational warping of spacetime seems also to demand an invisible form of matter. But dark matter does not ‘exist’ in the usual sense, in that it has not been seen, nor are there theories that can convincingly explain or demand its existence. Dark energy too is a kind of ‘stuff’ required to explain the acceleration of the universe’s expansion, something discovered by astronomers observing far-away objects in the mid-1990s. But it is just a name for a puzzle, without even a hint of any direct detection.
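
The rotation argument can be put in one line (a standard textbook relation, not specific to any particular survey). A star orbiting at radius r from a galaxy’s centre moves at

% Orbital speed set by the mass M(r) enclosed within radius r:
v(r) = \sqrt{\frac{G \, M(r)}{r}}

Measured speeds v(r) stay roughly flat far beyond the visible disc, so M(r) must keep growing with r – more than the visible stars and gas can supply.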

It is fitting and instructive that both of these terms seem to come from the age of William Crookes, whose investigations with gas discharge tubes led him to report a mysterious region inside the tubes called the ‘dark space’, which he explained by invoking his radiant matter. It turned out that there was no such thing as radiant matter; the effects that led Crookes to propose it were instead caused by invisible ‘cathode rays’, shown in 1897 to be streams of subatomic particles called electrons. It seems quite possible that dark energy, and perhaps dark matter too, will turn out to be not exactly ‘stuff’ but symptoms of some hitherto unknown physical principle. These connections were exquisitely intuited by Philip Pullman in the His Dark Materials trilogy, where (the title alone gives a clue) a mysterious substance called Dust is an amalgam of dark matter and Fournier d’Albe’s quasi-sentient psychomeres, given a spiritual interpretation by the scientist-priests of Pullman’s alternative steampunk Oxford University, who sense its presence using instruments evidently based on Crookes’ radiometer.

It would be wrong to conclude that scientists are just making stuff up here, while leaning on the convenience of its supposed invisibility. Rather, they are using dark matter and dark energy, and (if one is charitable) quantum many worlds and branes and other imperceptible and hypothetical realms, to perform an essential task, which is to plug gaps in our knowledge with concepts that we can grasp. These makeshift repairs and inventions are needed if science is not to be simply derailed or demoralized by its lacunae. But when this happens, it seems inevitable that the inventions will take familiar forms – they will be drawn from old concepts and even myths, they will be “mysterious” particles or rays or even entire imagined worlds. These might turn out to be entirely the wrong concepts, but they make our ignorance concrete and enable us to think about how to explore it. The only danger is if the scientists themselves forget what they are up to and begin to believe in their own constructs. Then they will be like William Crookes and William Barrett, looking for spirits in the void, seduced by their own tales into thinking that they already have the answer.

Friday, March 07, 2014

Molecular mechanisms of evolution

“Molecular mechanisms that generate biological diversity are rewriting ideas about how evolution proceeds”. I couldn’t help noticing how similar that sounds to what I was saying in my Nature article last spring, “Celebrate the unknowns”. Some people were affronted by that – although other responses, like this one from Adrian Bird, were much more considered. But this is the claim put forward by Susan Rosenberg and Christine Queitsch in an interesting commentary in Science this week. They point out (as I attempted to) that the “modern synthesis” so dear to some is in need of some modification.

“Among the cornerstone assumptions [of the modern synthesis]”, say Rosenberg and Queitsch, “were that mutations are the sole drivers of evolution; mutations occur randomly, constantly, and gradually; and the transmission of genetic information is vertical from parent to offspring, rather than horizontal (infectious) between individuals and species (as is now apparent throughout the tree of life). But discoveries of molecular mechanisms are modifying these assumptions.” Quite so.

This is all no great surprise. Why on earth should we expect that a theory drawn up 80 or so years ago will remain inviolable today? As I am sure Darwin expected, evolution is complex and doesn’t have a single operative principle, although obviously natural selection is a big part of it. (I need to be careful what I say here – one ticking off I got was from a biologist who was unhappy that I had over-stressed natural selection at the molecular level, which I freely confess was a slight failure of nerve – I have found that saying such things can induce apoplexy in folks who see the shadows of creationism everywhere.) My complaint is why this seemingly obvious truth gets so little airplay in popular accounts of genetics and evolution. I’m still puzzled by that.

I realise now that kicking off my piece with ENCODE was something of a tactical error (even though that study was what began to raise these questions in my mind), since the opposition to that project is fervent to the point of crusading in some quarters. (My own suspicion is that the ENCODE team did somewhat overstate their undoubtedly interesting results.) Epigenetics too is now getting the backlash for some initial overselling. I wish I’d now fought harder to keep in my piece the discussion of Susan Lindquist’s work on stress-induced release of phenotypic diversity (S. Lindquist, Cold Spring Harb. Symp. Quant. Biol. 74, 103 (2009)), which is mentioned in the Science piece – but there was no room. In any case, this gives me the impetus to finally put the original, longer version of my Nature article online on my web site – not tonight, but imminently.

Thursday, March 06, 2014

Neuroscience in the classroom

Here's a kind of pre-edit of my latest column for the Prospect blog.

_______________________________________________________

As Prospect has already noted, neuroscience is going to be an ever fiercer battleground for how we should organize our societies. Gender differences, criminal law, political persuasions – we had better be prepared to grasp some thorny questions about whether or not “our brains make us do it.” To judge from some commentaries, the older psychological frameworks we have used to understand behaviour, dysfunction, trauma, intelligence and ethics – whether Freudianism, Kleinianism, object relations, transference or whatever – are about to be replaced with the MRI scanner.

Inevitably, one of the bloodiest fields of combat is going to be education. I say inevitably not only because we know the levels of panic and anxiety schooling already provokes in parents but because few areas of social policy have been so susceptible to ideology, fads and dogma. You can be sure that supporters of every educational strategy will be combing the neuroscience literature for “evidence” of their claims.

That’s why a recent report from the Education Endowment Foundation (EEF) looking at the supposed neurological evidence for 18 teaching techniques is so timely. The report distinguishes those that have rather sound neurological support, such as the cognitive value of minimising stress, engaging in physical exercise and pacing out the school day with plenty of breaks, from those – such as genetics-based or personalized teaching approaches – for which the evidence or understanding is still a long way from delivering benefits in the classroom.

The report is also ready to acknowledge that some techniques, such as learning games or using physical actions to “embody cognition” (enacting “action verbs” rather than just reading them, say), warrant serious consideration even though they may not yet be understood well enough to know how best to translate to the classroom. (Seasoned Prospect readers might like to know that claims about the supposed cognitive benefits of cursive writing were apparently not even deemed worthy of consideration.)

These findings, along with earlier studies by specialists of the “neuromyths” that propagate in classrooms, are nicely rounded up in a commentary by Sense About Science, a non-profit organisation that seeks to provide people with the necessary facts to make informed choices about scientific issues.

Sense About Science has already done a great service in debunking the pseudoscientific programme called Brain Gym, which has convinced many schools that it can make children’s brains “work better” through a series of movements and massage exercises. Brain Gym has also run foul of the scourge of “bad science” Ben Goldacre. The EEF report is more politely, but no less firmly, dismissive: “a review of the theoretical foundations of Brain Gym and the associated peer-reviewed research studies fails to support the contentions of its promoters.”

All this is important and useful for cutting through the hype and fuzzy thinking. The EEF report will be valuable reading for teachers, who are often given little opportunity or encouragement to investigate the basis of the methods they are required to use. But we need to be awfully careful about setting up neuroscience as the arbiter of our understanding of the brain and cognition.

It is, after all, still a young science, and we still have a sometimes rudimentary understanding of how those colourful MRI brain scans translate into human experience. As the EEF report acknowledges, neuroscience has in some instances been able to add little so far to what has already been established by well conducted psychological tests. It is a relief to see brain science now undermining simplistic folk beliefs about, for example, “left brain” and “right brain” personalities. But as Raymond Tallis has elegantly explained, neuroscience is sometimes in danger of spawning a spurious dogma of its own.

It’s not just that the science itself might be poorly interpreted or over-extrapolated. The problem is deeper: whether there exists, or can exist, a firm and reliable link between the objective functioning of neural circuits and the subjective experience of people. Psychology is as much about providing a framework for thinking and talking about the latter as it is about pursuing a reductive explanation in terms of the superior frontal gyrus.

It is currently fashionable, for example, to claim that neuroscience has debunked Freudianism. It’s not even clear what this can mean. Freud’s claims that his ideas were scientific are apt to irritate scientists today partly because they don’t recognize how differently that word was used in the late nineteenth century, when novelists like Emile Zola could claim that they were applying the scientific method to literature. More to the point, Freud’s identification of an unconscious world where primitive impulses raged was really of cultural rather than scientific import. One could argue, if one feels inclined, that the identification of “primitive” instinctive areas of the brain such as the basal ganglia, as well as the modern understanding of how childhood experiences affect the brain’s architecture, in fact offer some scientific validation of Freud. But the broader point is that there was never going to be any real meaning in seeking a neuro-anatomical correlate of the ego or the id. As (admittedly somewhat crude) metaphors for our conflicting impulses and inclinations, they still make sense – as much sense as concepts like love, jealousy and disgust (which are sure to have complex and variable neural mappings).

This consideration arises in the matter of “multiple intelligences”, a concept promoted in the 1980s by the developmental psychologist Howard Gardner, which now underpins the widespread view that education should cater to different “learning styles” such as visual, auditory and kinaesthetic. The Sense About Science commentary suggests that neuroscience now contradicts the idea, since different brain functions all seem to stem from the same anatomical apparatus. But like many ideas in psychology, the multiple-intelligences theory runs into problems only when it hardens along doctrinaire lines – if it insists for example that every child must be classified with a particular learning style, or that different styles have wholly distinct neurological pathways. No one who has any experience on the football pitch (a relatively rare situation for academics) will have the slightest doubt that it makes sense to suggest Wayne Rooney possesses a kind of intelligence quite independent of his ability to read beyond the Harry Potter books. To think in those terms is a useful tool for considering human capacities, regardless of whether fledgling neuroscience seems to “permit” it.

In case you think this sounds like special pleading from a particularly flaky corner of science, bear in mind that the so-called hard sciences are perfectly accustomed to heuristic concepts that lack a rigorous foundation but which help to make sense of the behaviour scientists actually observe – witness, for example, the notions of electronegativity and oxidation state in chemistry. These concepts are not arbitrary but have proved their worth over decades of careful study. The task of psychology is surely to distinguish between baby and bathwater, rather than policing its ideas for consistency with the diktats of MRI scans.

The art of molecules

Here’s a little book review about an intriguing field. It appeared in Chemistry World.

________________________________________________________________________

Molecular Aesthetics
Ed. Peter Weibel and Ljiljana Fruk
ZKM-Center for Art and Media Karlsruhe/MIT Press, 2013
ISBN: 978-0-262-01878-4

A conference held in Karlsruhe in 2011 was perhaps the first to address the topic of molecular aesthetics. To judge from this collection of articles and imagery stemming from that meeting, it must have been an event in equal measure stimulating, entertaining and perplexing.

The editors Peter Weibel and Ljiljana Fruk have taken the wise decision not simply to put together a collection of papers from the meeting, but rather to augment such contributions with a wide range of reprints on the topic, along with a very generous selection of images of related artworks. The result is an engrossing 500-page digest which will surely contain something for everyone.

Roald Hoffmann characteristically puts the issue in a nutshell: “By virtue of not being comfortable in the official literature, aesthetic judgements in chemistry, largely oral, acquire the character of folk literature.” A question not quite addressed here is whether this is how things must be or whether it should be resisted.

The book is nothing if not diverse, which means that the quality is bound to fluctuate. The paranoid guerrilla rantings of the Critical Art Ensemble and the opaque semiotic posturing of Eric Allie offer few useful insights. Kenneth Snelson’s model of electronic structure is decidedly “outsider science.” Some of the artworks, although striking, bear little on the issue of molecular aesthetics. But I’m not complaining – such inclusiveness adds to the richness of the stew.

My own view is much in accord with that advanced here by Joachim Schummer: if we really want to talk about molecular aesthetics then we must cease warbling about molecules that are “beautiful” (meaning pleasing) because of their symmetry and instead conduct a serious investigation of what the term could mean – what criteria we should use for thinking about the ways we represent chemistry and molecules visually, conceptually and sensorially, and about the delight we take in them.

Friday, February 28, 2014

Strength in numbers

I have a feature in Nature on developments in crowdsourcing science, looking in particular at the maths project Polymath on its fifth anniversary. Here’s the long version pre-editing. I also wrote an editorial to accompany the piece.

____________________________________________________________________________

Researchers are finding that online, crowd-sourced collaboration can speed up their work — if they choose the right problem.

When, last April, the hitherto little-known mathematician Yitang Zhang of the University of New Hampshire announced a proof that there are infinitely many pairs of prime numbers differing by no more than 70 million, it was hailed as a significant advance in a famous outstanding problem in number theory. In its simplest form, the twin primes conjecture states that there are infinitely many pairs of prime numbers differing by 2, such as (41, 43). Zhang’s gap of 70 million was much bigger than 2, but until then there had been no proof that any finite gap, however large, recurs infinitely often.
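
The flavour of the conjecture is easy to sample computationally; here is a throwaway sketch (my own) that sieves the primes below 100 and lists the twin pairs among them:

def primes_below(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * n
    sieve[:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

ps = set(primes_below(100))
print(sorted((p, p + 2) for p in ps if p + 2 in ps))
# [(3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43), (59, 61), (71, 73)]

The conjecture says this list never dries up as the limit grows; Zhang proved the weaker but still astonishing statement that some gap below 70 million recurs forever.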

But perhaps as dramatic as the reclusive Zhang’s unanticipated proof, published in May, was what happened next. “One could easily envisage that there would be a flood of mini-papers in which Zhang's bound of 70 million was whittled down by small amounts by different authors racing to compete with each other”, says Terence Tao, a mathematician at the University of California at Los Angeles. But instead of such an atomized race, this challenge to reduce the bound became the eighth goal for a ‘crowdsourcing’ maths project called Polymath, which Tao helped to set up and run. Mathematicians all around the world pitched in together, and the bound dropped from the millions to the thousands in a matter of months. By the end of November it stood at 576.

There is nothing new about the notion of crowdsourcing to crack difficult problems in science. Six years ago, the Galaxy Zoo project recruited volunteers to classify the hundreds of thousands of galaxies imaged by the Sloan Digital Sky Survey into distinct morphological types: information that would help researchers understand how galaxies form and evolve. Galaxy Zoo has now gone through several incarnations and incorporates data on the earliest epochs of the visible universe from the Hubble Space Telescope. It provided a template for other projects needing human judgement to sort data, and has itself evolved into Zooniverse, which hosts several online data-classifying projects in space science and other areas. Participants can, for example, classify craters and other surface features on the Moon, tropical-cyclone data from 30-year records, animals photographed by automated cameras on the Serengeti, and cancer data – or even take part in humanities projects such as tagging the diaries of soldiers from the First World War. Almost a million people have registered with Zooniverse to lend their help.

Expert opinion

But Polymath, which had its fifth anniversary in January this year, is rather different. Although anyone can join in to help solve its problems, you’re unlikely to make much of a contribution without highly specialized knowledge. This is no bean-counting exercise, but demands the most advanced mathematics. The project began when Cambridge mathematician Timothy Gowers asked on his own blog “Is massively collaborative mathematics possible?”

“The idea”, Gowers explained, “would be that anybody who had anything whatsoever to say about the problem could chip in. And the ethos of the forum would be that comments would mostly be kept short… you would contribute ideas even if they were undeveloped and/or likely to be wrong.” Gowers suspected there could be a benefit to having many different minds with different approaches and styles working on a problem. What’s more, sometimes a solution requires sheer luck – and the more contributions there are, the more likely you’ll get lucky.

His first challenge was a problem called the Hales–Jewett theorem, which says, roughly, that any sufficiently high-dimensional collection of number sequences must exhibit some correlated, combinatorial structure rather than being entirely random. Gowers’ blog sought a proof of one particular form of the theorem, known as the density version. Gowers had hoped for new insights into the problem, but even he was surprised that by March, after nearly 1,000 comments, he was able to declare the theorem proved. He called that period “one of the most exciting six weeks of my mathematical life”, and added that “the quite unexpected result – an actual solution to the problem – added an extra layer of excitement to the whole thing”. The proof was described in a paper attributed to “D. H. J. Polymath”.

Tao was drawn into that challenge, and has since hosted other projects on Polymath. Mathematics is perhaps a surprising discipline in which to find this sort of collaboration, as traditionally it has been viewed as a solitary enterprise, exemplified by the lonely and often secretive work of the likes of Zhang or Andrew Wiles, who proved Fermat’s Last Theorem in seclusion in the 1990s. But that image is misleading – or perhaps projects like Polymath are playing an active role in changing the culture. “One strength of a Polymath collaboration is in gathering literature and connections with other fields that a traditional small collaboration might not be aware of without a fortuitous conversation with the right colleague”, says Tao. “Simply having a common place to discuss and answer focused technical questions about a paper is very useful.” He says that such online “reading seminars” helped researchers get to grips quickly with Zhang’s original proof.

Refining that proof – Polymath 8 – produced another paper for D. H. J. Polymath. One of the big leaps came from James Maynard, a postdoctoral researcher at the University of Montreal in Canada, who last November showed how to reduce Zhang’s bound of 70 million to just 600. Maynard, however, had already been working on the problem before Zhang’s results were announced, and he says his work was essentially independent of Polymath.

All the same, he sees this as an appropriate problem for such an approach. “Zhang's work was very suitable for many participants to work on”, Maynard says. “The proof can be split into separate sections, with each section more-or-less independent of the others. This allowed different participants to focus on just the sections which appealed to them.”

The success of Polymath has been mixed, however. “Polymath 4 and 7 led to interesting results”, says Gil Kalai of the Hebrew University of Jerusalem, who has administered some of the projects. “Polymath 3 and 5 led to interesting approaches but not to definite results, and Polymath 2, 6 and 9 did not get much off the ground.” And Gowers admitted that for at least some of the challenges the “crowd” was rather small – just a handful of real experts. Partly this might be just a matter of time: after Polymath 1, he remarked that “the number of comments grew so rapidly that merely keeping up with the discussion involved a substantial commitment that not many people were in a position to make.” And perhaps some of the experts who might have contributed were simply not a part of the active blogosphere.

Polymath “hasn't turned out to be a game-changer”, says Tao, “but it’s a valid alternative way of doing mathematical research that seems to be effective in some cases. One nice thing though is that we can react rather quickly to ‘hot’ events in mathematics such as Zhang's work.” He says that the crowdsourcing approach works better for some problems than others. “It helps if the problem is broadly accessible and of interest to a large number of mathematicians, and can be broken up into parts that can be worked on independently, and if many of these parts lie within reach of known techniques.”

“Projects which seem to require a genuinely new idea have so far not been terribly successful”, he adds. “The project tends to assemble all the known techniques, figure out why each one doesn't work for the problem at hand, throw out a few speculative further ideas, and then get stuck. We're still learning what works and what doesn't.”

It’s with such pitfalls in mind that Kalai says “it will be nice to have a Polymath devoted to theory-building rather than to specific problem solving.” He adds that he would also like to see Polymath projects “that are on longer time scale than existing ones but perhaps less intensive, and that people can get in or spin off at will.”

Gowers recognized from the outset that collaboration won’t always eclipse competition. He admits that “it seems highly unlikely that one could persuade lots of people to share good ideas” about a high-kudos goal like the Riemann hypothesis, which relates to the distribution of prime numbers. This, after all, is one of the seven Millennium Problems for the solution of which the privately funded Clay Mathematics Institute in Providence, Rhode Island, has offered prizes of $1m.

All the same, that didn’t deter Gowers from launching Polymath 9 last November, which set out to find proofs for three conjectures that would solve another of the remaining six Millennium Problems: the so-called NP versus P problem. This asks whether the class of problems whose solutions can be quickly verified by a computer (denoted NP) coincides with the class of problems that can be solved equally quickly (denoted P). Gowers did not expect all three of his conjectures to be solved by Polymath 9, but admitted he would be pleased if just one of them could be. However, the results were initially disappointing, and Gowers was about to declare Polymath 9 a failure when he was contacted by Pavel Pudlak of the Mathematical Institute of the Czech Academy of Sciences with a proof that one of the three statements he was hoping to be proved false was in fact true, apparently cutting off this avenue for attacking the problem. Gowers is philosophical. “It’s never a disaster to learn that a statement you wanted to go one way in fact goes the other way”, he wrote. “It may be disappointing, but it’s much better to know the truth than to waste time chasing a fantasy.” In that regard, then, Polymath 9 did something useful after all.

Polymath now functions as a kind of elite open-source facility. People can post suggestions for new projects on a dedicated website maintained by Gowers, Tao, Kalai and open-science advocate Michael Nielsen, and these are then discussed by peers and, if positively received, launched for contributions. “The organization is still somewhat informal”, Tao says. Setting up and sustaining a Polymath project is a big commitment. “It needs an active leader who is willing to spend a fair amount of effort to organise the discussion and keep it moving in productive directions”, he adds. “Otherwise the initial burst of activity can dissipate fairly quickly. Not many people are willing or able to do this.” “It’s quite difficult to get people interested,” Gowers agrees; so far, he and Tao have initiated all but two of the projects.

Although surprised by Polymath’s success, Kalai says that the trend toward more collaborative efforts started earlier, as signaled by a rise in the average number of coauthors on maths papers. “Polymath projects do not have enough weight to make a substantial change. But they add to the wealth of mathematical activities, and, for better or for worse, their impact on the community is larger than their net scientific impact.” It’s not clear that this is a good way to do maths, he concludes – “but we can certainly explore it.”

Cash or glory

Some other “expert” crowdsourcing efforts are run as commercial ventures, by companies that aim to link people who have a problem to solve with people who might have the skills and ideas needed to solve it. These generally charge fees and offer financial rewards for participants. Other initiatives are government-led, such as the NASA Tournament Lab, which seeks “the most innovative, most efficient, and most optimized solutions for specific, real-world challenges being faced by NASA researchers”, and the US-based Challenge.gov, which offers cash prizes for solutions to a whole range of engineering and technological problems.

One of the most prominent commercial enterprises is InnoCentive, which hosts a variety of scientific or technological challenges that are open to all of its roughly 300,000 registered “solvers”. These range from the seemingly banal, if important (developing economical forms of latrine lighting in emergencies, or “keeping hair clean for longer without washing”), to the esoteric (“seeking 4-hydroxy-1H-pyridin-2-one analogues”, or ways of stabilizing foamed emulsions). InnoCentive’s founder Alph Bingham says that their approach “has produced solutions to problems that had been previously investigated for years and even decades.” Good challenges, he says, “are ones where the space of possible solutions is immense and therefore hard to search on a serial basis”.

In contrast to that broad portfolio, other crowdsourcing companies such as Kaggle and CrowdFlower specialize in data analysis. Kaggle has been used, for example, in bioinformatics to predict biological behaviours of molecules from their chemical structure, and in energy forecasting. It has recently been used by a team of astronomers seeking algorithms for mapping the distribution of dark matter in galaxies based on its gravitational-lensing effects on background objects. Through Kaggle, the researchers set up a competition called “Observing Dark Worlds”, which offered cash prizes (donated by the financial company Winton Capital) for the three best algorithms. The winning entries improved performance, relative to standard algorithms, by about 30 percent.

While this was valuable, astronomer David Harvey of the University of Edinburgh, an author of that study, admits that it’s not always straightforward to apply potential solutions to the problem you’ve set. “Many of the ideas that came out of the competition were great, and provided really interesting insights into the problem”, he says. “But none of the algorithms are ready to be used on real data – they need to be fully tested and developed. And it’s very hard to take some algorithm from someone not in your field and develop it.”

Harvey says that the winning algorithm for “Observing Dark Worlds” has indeed still not been fully developed. “However, the advantages of these competitions are not always obvious”, he adds. For example, the second-place entry was written by informatics specialist Iain Murray of the University of Edinburgh, who is continuing to collaborate with Harvey, and now with other astronomers too. “This wouldn’t have happened if it wasn't for Kaggle”, Harvey says. That experience shows how “it’s vital that the winners of the competition work in collaboration post-competition on the problem and develop the initial idea all the way through to a final package.” But Harvey admits that “often these are just side projects for participants, and while they may have a sincere interest in the problem, they do not have the time to commit.”

Harvey points out that demand for such projects might nevertheless be increasing, especially in astronomy. “With new telescopes such as the Square Kilometre Array, the Large Synoptic Survey Telescope and Euclid on the horizon, astronomers will be facing real problems of data processing, handling and analysing”, he says. However, Thomas Kitching of University College London, who was the lead scientist on the Dark Worlds project, admits to having mixed feelings about what ultimately such efforts might achieve. In part this is because real expertise might be hard to harness this way. “Most people are not experts, but might have a bit of time”, he says. “There may be some experts, but they have very little time.”

While Polymath relies on the unpaid efforts of researchers whose sole reward is professional prestige, InnoCentive and Kaggle recognize that harnessing a broader community requires more tangible incentives, typically in the form of cash prizes. “In academia, people are willing to spend a lot of time for ‘kudos’ or for the sake of science – but only up to a point”, says Kitching. “Once the problem requires a lot of time, like coding in Kaggle, then monetary incentives or prizes seem to be required. No one is going to spend seven days a week trying to win unless it’s already their job, so money offsets time.”

InnoCentive’s 300,000 solvers stand to gain rewards of between $5,000 and $1m. Kaggle now hosts some of the efforts of Galaxy Zoo for a prize of $16,000 (also provided by Winton Capital). This sort of funding is not necessarily just philanthropic for the donors – Winton Capital, for example, were themselves able to recruit new analysts via the Observing Dark Worlds initiative for a fraction of their usual advertising and interviewing costs.

But it’s not all about lucre. “Winning solvers rarely list the cash among their top motivations”, says Bingham. “Their motivations are frequently more intrinsic, such as intellectual stimulation or curiosity to explore where an idea might lead.” InnoCentive aims to encourage non-cash incentives, such as prospects for further collaboration or joint press releases. Yet Bingham adds that “dollar amounts also serve as a kind of score-keeping.” Some of Kaggle’s projects have no cash prizes, and Harvey says that “a lot of the time computer scientists will go there because they want to work on something new and exciting, and not for financial gain.” Indeed, the company invites participants to “compete as a data scientist for fortune, fame and fun.”

“A competition can help to advertise a problem to people who have not thought about it before, a prize can attract them to spend time, and a metric can help to sort signal from noise”, says Kitching. “So in this sense competition, if well posed, can help in science. But a poorly posed problem may just increase noise.”

But as Kalai points out, there can be as much value in identifying important questions, and tools to tackle them, as in finding solutions. Kitching recalls a computer called Multivac that appeared in several of Isaac Asimov’s short stories, which was very good at answering questions but still required human scientists to pose them in the first place. Kitching suspects that the crowdsourcing pool will act more like Multivac than like its interrogators. “In the crowdsourcing approach the key to successful science is working out the correct questions to ask the crowd”, he says.

Thursday, February 27, 2014

Floods: more please?

Are the UK floods a sign of climate change? According to a recent poll, 46 percent of people think so, 27 percent think not. The invitation is to regard this as a proxy poll for a general belief in the reality of climate change, and perhaps in humankind’s key causative role in it.

But in fact, any information embedded in this poll is complicated and difficult to disentangle. If any climate or weather scientists were quizzed, it seems likely that they would have gravitated, like me, towards the “undecided” category. As they have been repeating insistently and now a little wearily, no single extreme-weather event (and this one certainly qualifies as that) can yet be unequivocally attributed to climate change. This of course is manna for the climate sceptics, who use it to argue that we still don’t know if climate change is really happening, and that this uncertainty reflects a serious limitation, perhaps a fundamental flaw, of the whole basis of climate modelling. It matters little that climatologists say such extreme weather is fully consistent with what the models predict – the misguided but widespread notion that science provides “yes/no” answers to questions, decided by the data, is here proving a burden.

That situation is changing, however. As Simon Lewis points out in Nature this week, it is now becoming possible to make some definite links between specific extreme weather events and anthropogenic climate change. Such analyses are complicated and the conclusions tentative, but they already give grounds for saying a little more than merely “it’s too early to tell”.

What the flood poll really probes, however, is public perceptions of what an altered climate would mean. The effect of the floods is likely to be not so much to convince undecided voters that climate change is already upon us as to show them what is really at stake in this temperate zone: not balmy Mediterranean-style summers, not distant news of drowned Pacific island states, but Verdun-style mud and sandbags, and images of this green and pleasant land under glittering, muddy water from horizon to horizon. We have finally got a feeling for what it might be like to live in a world a degree or two warmer, and it seems uncomfortably close to home, and not at all pleasant. Shivering east-coast Americans are having a somewhat different kind of awakening.

As wake-up calls go, it is pretty mild. But it is also likely to shift perceptions, not just of what to expect but of what the social and economic consequences will be. The more intelligent, or perhaps just cannier, sceptics have ceased questioning the science or the evidence but instead contest the economics: it will cost more, they say, to mitigate climate change, for example via taxes on fossil fuels or expensive green technologies, than to accept and adapt to it. This, for example, is the line taken by the science writer Matt Ridley, who laid out his case last October in an article in the Spectator.

The Viscount Ridley, an immensely wealthy Eton-educated Conservative hereditary peer whose Darwinian attitude to economics was notoriously suspended when it came to the bailout of Northern Rock under his chairmanship, is an easy villain. But the Ridley I know (slightly – we sit on the same academic advisory committee), who happens to be an exceptionally good science writer and a clever thinker, is harder to caricature. His argument – in which, for example, a warmer world results in fewer net deaths through reduced winter hypothermia – can’t be casually waved away. The dismantling needs more care.

The economic case is hugely complicated, and plagued by many more uncertainties than the science. It depends, for example, on making projections about nascent or even as yet undeveloped technologies. Even the research on which Ridley almost exclusively draws – by economist Richard Tol – mostly just points out these lacunae, and Tol advises nonetheless that “there is a strong case for near-term action on climate change”. (Ridley jettisons that bit.)

But of course economic figures paper over a multitude of woes. Imagine, for example, that an ice-free summer in the Arctic (which begins to look likely sooner than we expected) leads to the extinction of the economically insignificant polar bear (I’ll come back to that) but creates a fertile new breeding ground for fish stocks, with large economic benefits for fisheries. How would you feel about that? Or if the inundation of a few island states with trivial GDP, leaving the populations homeless, were massively offset by improved wheat yields in a warmer US Midwest? I’m not saying these things will happen, just that GDP is only a part of the story.

More importantly, perhaps, one can’t really put an economic figure on consequences of climate change such as the mass human migration that is predicted from north to south, which could very readily lead to social unrest and even war. Or the drastic changes in ecosystems likely to result, for example if ocean acidification from dissolved carbon dioxide wipes out corals. It isn’t hard to dream up such disasters, and Ridley is right that we need to think carefully, not just reactively, about what the real consequences would be – but economics, let alone highly uncertain economics, doesn’t give a full answer. All we can really agree on is that there seem unlikely to be any net benefits beyond 2070 or so, by which time things are getting really bad – especially if you have ploughed on merrily with business as usual, which Ridley seems to recommend. (He offers no alternative plan.) I won’t be around to see that; with good luck, my children will be. I don’t think theirs will be a problem that can be solved with wellies and sandbags.

All the same, I wish I could trust the arguments Ridley brings to the table. But, surprised by his passing suggestion that polar bears are fine and might even benefit from a bit of polar warming, I decided to check. The US Geological Survey in Alaska says “Our analysis of those data has shown that longer ice-free seasons have resulted in reduced survival of young and old polar bears and a population decline over the past 20 years. Recent observations of cannibalism and unexpected mortalities of prime age polar bears in Alaska are consistent with a population undergoing change.” The National Wildlife Federation says “The chief threat to the polar bear is the loss of its sea ice habitat due to global warming.” It’s impossible to generalize, however: studies suggest that many polar-bear populations will be wiped out within a few decades without human intervention, but some seem to be doing OK and may survive indefinitely (although climate change may introduce other threats, such as disease). If I were a polar bear, I’d feel decidedly less than sanguine about these forecasts. I’d also suspect that Ridley is less the “rational optimist” he styles himself, and more the wishful thinker.

Wednesday, February 19, 2014

The benefits of bendy wings


Here’s my latest news story for Nature.

________________________________________________________________

From insects to whales, flying and swimming animals use the same trick.

A new design principle that enables animals to fly has been discovered by a team of US researchers. They say that the same principle is used for propulsion by aquatic creatures, and suggest that it could supply guidance for designing artificial devices that propel themselves through air and water. The work is published today in Nature Communications [1].

The findings are welcomed by animal-flight expert Graham Taylor of Oxford University, who says that they “should certainly prove a fruitful area for future research”.

The earliest dreams of human flight, from Icarus to Leonardo da Vinci, drew on the notion of flapping wings, like those of birds or bats. But practical designs from the Wright brothers onward have largely abandoned this design in favour of the stationary aerofoil wing. Does that have to be the way, or might artificial flapping-wing devices be built?

In fact, a few already have been – but only very recently. In 2011, the German automation technology company Festo announced a small remote-controlled aircraft called the SmartBird that used flapping wings, based on the motion of a seagull. Aerospace engineers at the University of Illinois at Urbana-Champaign are developing a “robotic bat” [2]. Some flying devices have also been based on insect flapping-wing flight, which uses rapid wingbeats to produce upwards thrust that allows hovering and high manoeuvrability [3], while others have mimicked the undulating movements of jellyfish [4-6].

Developing flying machines based on bird-like flapping-wing aerodynamics is hampered by the lack of information about how birds achieve stability and control. John Costello of Providence College in Rhode Island, USA, and his colleagues believed that these flight properties might depend crucially on the fact that, unlike the wings of many human craft, animal wings are not rigid but flexible.

Yet there have been conflicting views on how wing flexibility affects the thrust produced by wing flapping, even to the extent of whether it helps or hinders. Costello and colleagues decided to take an empirical approach – to look at just how much real animal wings deform during flight.

They suspected that the same effect of bending should be evident in the operation of fins and flukes used for propulsion in water. In fact, they were initially motivated by their participation in a project for the US Office of Naval Research to develop a biologically inspired “jellyfish vehicle” [5,6].

That work, says Costello, showed that “the addition of a simple passive flap to an otherwise fairly rigid bending surface resulted in orders of magnitude increases in propulsive performance”. But what exactly were the rules behind these bending effects? “We reasoned that animals solved this problem several hundred million years ago”, says Costello, “so we decided to start by looking at natural forms.”

To gather data on the amount of deformation of wings and fins during animal movement, the researchers combed YouTube and Vimeo for video footage of species ranging from fruitflies to humpback whales and from molluscs to bats. They had to be extremely selective in what they used. They needed footage of steady motion (no slowing or speeding up), they needed to compare many flapping cycles for the same species, and they needed to find motion in the plane perpendicular to the line of vision, to obtain accurate information on the amount of bending.

These data were painstakingly collected by team members Kelsey Lucas of Roger Williams University in Bristol, Rhode Island, and Nathan Johnson at Providence College. “I’m not sure how many hundreds or thousands of video sequences they viewed and discarded”, Costello admits. “It took many months of searching.”

They found that, for all the vast diversity of propulsor shapes and structures – gossamer-thin membranes, feathered wings, thick and heavy whale tails – there was rather little variation in the bending behaviour when measured (by eye) using the right variables. Specifically, when the data were plotted on a graph of the “flexion ratio” – the ratio of the length from “wing” base to the point where bending starts, to the total “wing” length – against maximum bending angle, all the points clustered within a small region.
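As a concrete reading of that definition, here is a minimal Python sketch; the numbers are made up for illustration and are not data from the study.

```python
def flexion_ratio(base_to_bend, total_length):
    """Ratio of the length from 'wing' base to the point where bending
    starts, to the total 'wing' length -- the study's x-axis variable."""
    return base_to_bend / total_length

# Hypothetical propulsor: 10 cm long, starting to bend 6 cm from its base
print(flexion_ratio(6.0, 10.0))  # 0.6 -- one point's x-coordinate on such a plot
```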

In other words, this seems to be an example of “convergent evolution” – animals with very different evolutionary backgrounds have all “found” the same solution to a common problem, in this case the most effective bending criterion for propulsion through fluids. “Whether an animal is a fish or mollusc swimming in water or an insect or bird flying through the air, they all evolved to move within fluid environments”, says Costello. “Their evolution has been governed by the physical laws that determine fluid interactions. It doesn’t matter whether they originated from crawling, walking or jumping ancestors – once they adapted to a fluid, they evolved within a system determined by a common set of limits.”

“Perhaps the simple fact that wings, fins, and flukes of all shapes and sizes deform in a similar manner is not so surprising”, says Taylor. “What is surprising is the coupled variation in materials, morphology, and movement that this similarity implies” – so that “the comparatively flimsy wing of an insect deforms to the same extent in flight as does the powerful fleshy tail fluke of a killer whale”.

Costello is cautiously optimistic about translating the findings into aeronautical engineering design principles. First, he says, more needs to be known about why this narrow range of bending motions is advantageous for propulsion. “We hope to uncover more about the hydrodynamic reasons that these patterns are so common”, he says. “Maybe then the advantages that these animals have found in these traits can be translated into human designs.”

References
1. Lucas, K. N. et al., Nature Commun. 5, 3293 (2014).
2. Kuang, P. D., Dorothy, M. & Chung, S.-J. AIAA paper 2011-1435 (2011). doi:10.2514/6.2011-1435
3. Ma, K. Y., Chirarattananon, P., Fuller, S. B. & Wood, R. J. Science 340, 603–607 (2013).
4. Ristroph, L. & Childress, S. J. R. Soc. Interface http://dx.doi.org/10.1098/rsif.2013.0992 (2014).
5. Villanueva, A., Smith, C. & Priya, S. Bioinsp. Biomim. 6, 036004 (2011).
6. Colin, S. P. et al., PLoS ONE 7, e48909 (2012).

Friday, February 14, 2014

Making sense of music - in Italian


A popular-science magazine called Sapere has been published monthly in Italy since 1935. (There's a nice history of science popularization in Italy here.) Sapere is now produced by the Italian publisher Dedalo, who are aiming to revitalize it. They have asked me to contribute a regular column, which will be about the cognition of music. Each month I’ll focus on one or two particular pieces of music and explain how they do what they do. Here’s a slightly extended version of the introductory column, which takes as its subject “Over the Rainbow”.

___________________________________________________________

How do we make sense of music, and why does it move us? While much is still mysterious about it, some is not. Cognitive science and neuroscience are starting to reveal rules that our minds use to turn a series of notes and chords into a profound experience that speaks to us and reaches into the depths of our soul. In these columns I’ll aim to explain some of the rules, tricks and principles that turn sound into music.

One of the first things we notice about a song is the melody. Certain melodies capture our attention and interest more than others, and the songwriter’s goal is to find ones that stick. How do they do it?

Take “Over the Rainbow”, the ballad written by Harold Arlen and E. Y. Harburg for The Wizard of Oz (1939). We remember it partly for Judy Garland’s plangent voice – it became her signature tune – but it grabs us from the start with that soaring leap on “Some-where”.

Melodic leaps this big are very rare. Statistically speaking, most steps between successive melody notes are small – usually just between adjacent notes in the musical scale. For music all around the world, the bigger the step, the less often it is used. Partly this may be because it’s generally easier to sing or play notes that are closer together, but there is also a perceptual reason. Small steps in pitch help to “bind” the notes into a continuous phrase: we hear them as belonging to the same tune. The bigger the step, the more likely we are to perceive it as a break in the tune. This is one of the rules deduced around the start of the twentieth century by the Gestalt psychologists, who were interested in how the mind groups stimuli together into an organized picture of the world that produces them. They were primarily interested in vision, but the ‘gestalt principles’ apply to auditory experience too.

But small pitch steps can sound boringly predictable after a while, like nursery-rhyme tunes. To create memorable tunes, songwriters sometimes have to take a chance on bigger leaps. The one in the first two notes of “Over the Rainbow” is particularly big: a full octave. The same leap occurs at the beginning of “Singin’ in the Rain”. Typically only one percent or so of pitch steps in melodies are this big. That means they stand out as memorable – but how come we still hear the result as a tune at all, when the gestalt principles seem to say that big jumps cause a perceptual break-up?

Well, Arlen (who wrote the music) has added safeguards against that, probably quite unconsciously. First, the two notes on “Some-where” are long ones – you could say that “-where” is held for long enough for the brain to catch up after the leap. Second, the leap comes right at the start of the song, before there’s even really been time for a sense of the tune to develop at all. Third, and perhaps most important, the leap is not alone. There are similar big jumps in pitch (although not quite as big) at the start of the second and third phrases too (“Way up…”, “There’s a…”). In this way, the composer is signalling that the big jumps are an intentional motif of the song – he’s telling us not to worry, this is just a song with big pitch jumps in it. This is a general principle: if you hear a big pitch jump in a melody, it’s very likely that others will follow. In this way, tunes can create their own ‘rules’ which can over-ride the gestalt principles and produce something both coherent and memorable.
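The interval arithmetic behind all this is easy to check. The Python sketch below is my own illustration, not part of the column: the pitch numbers only approximate the opening phrase, transposed to C major, and melodic steps are measured in semitones from MIDI note numbers, where an octave is 12 semitones.

```python
# MIDI pitches roughly following "Some-where o-ver the rain-bow" in C major;
# C4 (60) up to C5 (72) is the octave leap on "Some-where".
melody = [60, 72, 71, 67, 69, 71, 72]

# Sizes of successive pitch steps, in semitones
intervals = [abs(b - a) for a, b in zip(melody, melody[1:])]
print(intervals)                         # [12, 1, 4, 2, 2, 1]

# Flag the rare big leaps -- say, a fifth (7 semitones) or more
print([i for i in intervals if i >= 7])  # [12]: the opening octave stands out
```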

Wednesday, February 12, 2014

Closer to ignition

Here’s the original draft of my latest piece for Nature news.

___________________________________________________________________

Another milestone is passed on the long road to fusion energy

The usual joke about controlled nuclear fusion, which could provide much ‘cleaner’ nuclear power than fission, is that it has been fifty years away for the past fifty years. But it just got a bit closer. In a report published in Nature today [1], a team of researchers at the US National Ignition Facility (NIF), based at Lawrence Livermore National Laboratory in California, say that their fusion experiments have managed to extract more energy from the nuclear process than was absorbed by the fuel to trigger it.

That’s certainly not the much-sought “break-even” point at which a fusion reactor can generate more energy than it consumes, because there are many other processes that consume energy before it even reaches the nuclear fuel. But it represents “a critical step on the path to ignition”, according to Mark Herrmann of Sandia National Laboratory in Albuquerque, New Mexico, who heads the project on high-energy X-ray pulses there.

While nuclear fission extracts the energy released during the break-up of very heavy nuclei such as uranium, nuclear fusion – the process that powers stars – produces energy through the coalescence of very light nuclei such as hydrogen. A tiny part of the mass of the separate hydrogen nuclei is converted into energy during the reaction.

Although the basic physics of fusion is well understood, conducting it in a controlled manner in a reactor – rather than releasing the energy explosively in a thermonuclear hydrogen bomb – has proved immensely difficult, largely because of the challenge of containing the incredibly hot plasma that fusion generates.

There is no agreed way of doing this, and fusion projects in different parts of the world are exploring a variety of solutions. In most of these projects the fuel consists of the heavy hydrogen isotopes deuterium and tritium, which react to produce the isotope helium-4.

A lot of energy must be pumped into the fuel to drive the nuclei close together and overcome their electrical repulsion. At the NIF this energy is provided by 192 high-power lasers, which send their beams into a bean-sized gold container called a hohlraum, in which the fuel sits inside a plastic capsule. The laser energy is converted into X-rays, some of which are absorbed by the fuel to trigger fusion. Most of the energy, however, is absorbed by the hohlraum itself. That’s why obtaining gain (more energy out than in) within the fuel itself is only a step along the way to “ignition”, the point at which the reactor as a whole produces energy.

The fuel is kept in a plastic shell called the ablator. This absorbs the energy in the hohlraum and explodes, creating the high pressure that makes the fuel implode to reach the high density needed to start fusion. But that pressure can burst through the ablator at weak points and destabilize the implosion, mixing the fuel with the ablator plastic and reducing the efficiency of the fusion process.

The NIF team’s success, achieved in experiments conducted between last September and this January, comes from ‘shaping’ the laser pulses to deliver more power early in the pulse. This creates a relatively high initial temperature in the hohlraum which “fluffs up” the plastic shell. “This fluffing up greatly slows down growth of the instability”, says team leader Omar Hurricane.

As a result, the researchers have been able to achieve a “fuel energy gain” – a ratio of energy released by the fuel to energy absorbed – of between 1.2 and 1.9. “This has never been done before in laboratory fusion research”, says Herrmann. “It’s a very promising advance.”

He adds that much of the energy released was produced by self-heating of the fuel by the alpha particles (helium nuclei) released in the fusion reactions – an important requirement for sustaining the fusion process.

But fusion energy generation remains a distant goal, for which Hurricane admits he can’t yet estimate a timescale. “Our total gain – fusion energy out divided by laser energy in – is only about 1%”, he points out.
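To keep the two gain figures straight (my own back-of-envelope restatement, not the paper’s formal definitions): the fuel gain compares the fusion output with the energy the fuel itself absorbed, while the total gain compares it with everything the lasers delivered.

```latex
G_{\mathrm{fuel}} = \frac{E_{\mathrm{fusion}}}{E_{\mathrm{absorbed\;by\;fuel}}} \approx 1.2\text{--}1.9,
\qquad
G_{\mathrm{total}} = \frac{E_{\mathrm{fusion}}}{E_{\mathrm{laser}}} \approx 0.01
```

The gulf between the two is set by the hohlraum and ablator, which soak up most of the laser energy before any of it reaches the fuel.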

“This is more than a little progress, but still modest in terms of energy generation”, Hurricane says. “Our goal right now is to more than double the final pressures in our implosion, by making it go faster and improving its shape.”

Meanwhile, other projects, such as the International Thermonuclear Experimental Reactor (ITER) under construction in southern France, will explore different approaches to fusion. “When trying to solve hard problems it is wise to have multiple approaches, as every potential solution has pros and cons”, says Hurricane.

References
1. Hurricane, O. A. et al., Nature advance online publication doi:10.1038/nature13008 (2014).

Tuesday, February 04, 2014

Colour coordinated

Here's a talk, nicely recorded, that I gave on colour and chemistry for the "Big Ideas" course at Bristol at the end of last year. There's a version of this floating on the web that I gave at Michigan something like ten years ago, which cruelly lays bare the ravages of time. For colour junkies, there is another little snippet here for the Atlantic magazine on Newton's spectrum.