Wednesday, July 09, 2014

The clean air act


Here’s my Material Witness column for the July issue of Nature Materials – less techie than usual, so suitable, I think, for your tender eyes. (Not that you mind techie, but there are limits.)

_____________________________________________________________

Most poets probably harbour a hope that their poems might change the world, but none has taken that wish quite as literally as Simon Armitage, whose ‘In Praise of Air’ is the first ‘catalytic poem’. Displayed on a 10m by 20m panel on the side of a university building overlooking a busy road in Sheffield’s city centre, it is not just an ode to the vital joys of clean air but is actively producing that very stuff. The panel is coated with a layer of photocatalytic titanium dioxide nanoparticles that, when irradiated with sunlight (or indeed street lights), convert nitrogen oxides (NOx) adsorbed on their surface to nitrate.

The project is a collaboration with Sheffield materials scientist Tony Ryan, and has been funded as part of the city’s Lyric Festival of literature. As well as breaking down nitrogen oxides, the catalytic nanoparticles transform toxic volatile organic compounds into fatty acids. They are, of course, barely able to make a dent in the fumes from passing vehicles: each square metre of the display removes about 2g of NOx a day, about as much as is produced by a single bus.

But of course the point is to make a difference in another way: to create a visible and arresting symbol of the need to tackle air pollution. Armitage’s image of “days when thoughts are fuddled with smog/or civilization crosses the street/with a white handkerchief over its mouth” will be all too familiar to many urban dwellers, perhaps especially in China, where today mobile apps tell users whether or not the PM10 index (the level of airborne particulate matter smaller than ten micrometres across) is low enough for children to play safely outside.

That same objective motivates the technology from which this project arose: ‘catalytic clothing’, developed by Ryan with designer and artist Helen Storey, who specializes in art–science collaborations for fashion, design and technology. They have devised a process in which the titania nanoparticles can become attached to ordinary clothing fabric (so far cotton, but they are working on other fibres) during the laundering process, so that subsequently the wearer may combat air pollution simply by walking around. Ryan and Storey say that the effects are not insignificant: 30 people in catalytic clothing walking past a metre-width stretch of pavement every minute could effect a noticeable drop in levels of NOx. The duo are still trying to bring the idea to market.

‘In Praise of Air’ is also singing the praises of materials. The technology is nothing particularly new, but Ryan’s work is a reminder that bringing a useful laboratory product to the attention of both investors and consumers is often a matter of engaging the imagination — and that this is where scientists can benefit from interactions with designers and artists. It shows too that serious problems can be tackled playfully and in ways that encourage the public to see that they can participate and not be merely the passive recipients of some cryptic and forbidding technology.

Tuesday, July 08, 2014

Hearing voices


Here’s my fourth column on music cognition for Sapere magazine (where you'll only find it in Italian).

___________________________________________________________________

Making sense of our perceptual world is an incredibly hard challenge. Things are moving; objects are half-hidden by other objects; the light is constantly changing. Our brains learn some simple rules of thumb for sorting all this visual information into a “model of the world” that generally proves to be accurate. These rules were deduced in the early twentieth century by the so-called gestalt psychologists in Germany. For example, we tend to group together objects that look similar, or which are close together. We expect movements to be continuous: when an aeroplane flies behind a cloud, we expect it to emerge from the other side – the principle of “good continuation”. Many optical illusions arise from images that confound these rules, for example so that we can no longer tell what is background and what is foreground.

Similar “gestalt principles” help us make sense of sound. An infant can distinguish the sound of its mother’s voice from the background noise, perhaps by separating them by timbre (the “texture” of the sound). The same ability allows us to sustain dinner-party conversations in later life.

Music draws on these sonic grouping abilities. We separate the sounds into different perceptual streams, so that for example the strings and the woodwind in an orchestra don’t merge into an aural mush.

Even without timbral distinctions to help us, we can use the gestalt principle of good continuation to pick out the different melodic lines in a composition where several melodies are superimposed, as in the complex counterpoint of Baroque music. As long as the melodies have gradual pitch changes and don’t cross over one another, we might – with practice – be able to juggle four or more of them at once, as in some of Bach’s fugues. In fact, many of the rules for composing counterpoint, such as ensuring that different voices don’t follow the same up-and-down pitch steps (contours), implicitly observe the gestalt principles. Sometimes they’re used more instinctively: jazz soloists, for example, might play their notes just before the beat, moving their melody slightly out of synch with the band so that it stands out.

But musicians have sometimes undermined the gestalt principles on purpose. In his Bolero, Ravel makes the different timbres of the celesta, French horns and flutes blend by giving their melodies identical pitch contours. The result sounds like a fantastical composite instrument with an entirely new timbre.

Friday, July 04, 2014

Wonder drops

Everyone on Twitter seemed to like this amazing droplet video so much that I thought I should put it up here too. I never went quite as gaga as many folks do over that Feynman schtick about science and the beauty of a flower, but I have a sense of what he meant here. It’s the fact that the droplets seem to come to rest and then jump again that makes this so weird and captivating – but when you realise they are getting smaller each time, and that it’s the liberated surface free energy that is propelling them upwards, then you have a real sense of “oooh” that you can’t get without knowing some science. Isn’t nature cute?

Wednesday, July 02, 2014

Designs on life


…and here’s a review of an interesting book which also appears in the current issue of Chemistry World (along with a piece by Andy Extance on Alexandra Daisy Ginsberg’s work).

___________________________________________________________________

Synthetic Aesthetics
Alexandra Daisy Ginsberg, Jane Calvert, Pablo Schyfter, Alistair Elfick & Drew Endy
MIT Press, 2014
ISBN 978-0-262-01999-6
349 pages, price not indicated

There’s no denying the gee-whiz appeal of synthetic biology. Its often stated objective is to “re-engineer life”: to redesign living systems like bacteria as if they were machines, in particular fitting them out with gene circuits for doing new and useful things, such as turning plant matter into fuel or pharmaceuticals. It sounds like a green (perhaps literally) solution to big global problems. What’s not to like?

Synthetic Aesthetics makes a bold and brave attempt to explore that question. The book results from an innovative project funded by the EPSRC and the US National Science Foundation, which brought together artists, designers, scientists and sociologists in 2009 for an intensive workshop that debated what synthetic biology might and might not mean.

Such a diverse mix is already unusual. Better still, they didn’t convene just to celebrate and propagate the field, but rather, to criticize it in the proper sense of the word: to ask whether it is evolving along the right tracks, and where these might lead. When artists and scientists are put together, the usual hope is that the research will stimulate artistic expressions. But here the interaction is more ambiguous, even fraught: the artists played the role of provocateurs, suggesting possible futures that question the assumptions and motivations of the scientists.

That ambivalence is evident throughout the book. When artist Alexandra Daisy Ginsberg suggests that “useful fictions can unintentionally become embedded in the language of the field, even shaping it”, she touches on a problem for all areas of science. And the artists here display a commendable resistance to being conscripted to sell the science, a common outcome of sci-art collaborations. For instance, while the scientists seem keen to clothe their work in a clean, shiny image that refers to circuits and Lego bricks, the artists were determined to remind us of the messy, visceral and sometimes disgusting nature of biology.

“Synthetic biology is a contemporary example of a field that employs artists and designers as part of a concerted effort to engineer public acceptance for a technology that does not yet exist”, write artist Oron Catts and microbiologist Hideo Iwasaki. Catts in particular feels that this propagandist effort needs to be challenged. Despite – or perhaps because of – his scepticism, he decided to be a part of the project since he felt it could be more productively interrogated from inside than from out. His engagement has been remarkably deep: having learnt the biotechnological methods needed to create works from living matter, he runs a collective called SymbioticA from a laboratory at the University of Western Australia which has been at the forefront of an artistic exploration of synthetic biology.

One of Catts’ main complaints is that the engineering paradigm is over-stretched in biology. I agree, for reasons I can’t expand on here. Genetic circuits evidently are somewhat amenable to redesign, but we have no real idea yet how far that metaphor can be taken.

Another debate exists around the whole concept of design itself. Is life itself already “designed” by evolution? It looks that way, and many scientists, here including synthetic biologist Drew Endy, argue that it is. But social scientist Pablo Schyfter counters persuasively that “only people design”, not least because design involves values – and the question is then, whose values? Besides, saying that nature designs is like saying that, if you make a billion randomly shaped pegs and find one that fits the hole, that one peg was designed.

I was left with a burning question: why hasn’t this sort of exercise happened before in other fields? Chemistry in particular cries out for it. Instead of rhapsodizing about the aesthetics of beautiful molecules, we should bring in more artists to challenge our convenient fictions.

How Pelicans became blue


Here is my piece on “Pelican blue” from this month’s Chemistry World, which is a lovely special issue on chemistry and art.

______________________________________________________________________

The relaunch of Pelican Books, an imprint of Penguin, in May prompted fond reminiscences about these distinctive, low-cost non-fiction paperbacks. The series began in 1937 with George Bernard Shaw’s The Intelligent Woman’s Guide to Socialism, Capitalism, Sovietism and Fascism, which exemplified its efforts to cover serious topics in an accessible but non-patronizing manner. Offering such titles at the cost of a packet of cigarettes was a radical idea for its time, and Penguin’s claim today that the books “lower[ed] the traditional barriers to knowledge” seems amply justified by the tributes the relaunch has elicited. Several readers from unprivileged backgrounds attested that Pelicans were their introduction to the world of thought.

I have been surprised to see how many of the trademark blue covers are dotted around my bookshelves. There is C. H. Waddington’s The Scientific Attitude (1941), for example, and Plastics in the Service of Man (1956) by E. G. Couzens and V. E. Yarsley attests that chemistry wasn’t neglected. However, the reason why chemists in particular might celebrate the rebirth of Pelican is not so much because of what lies between the covers but what lies on their surface.

For the feature that defined Pelicans was that blue: a shade veering towards turquoise, which chemists will recognize instantly as a copper pigment of some kind. In fact, Pelican covers supplied one of the earliest, and surely the most familiar, applications of a new and important class of colorant compounds: the phthalocyanines. Pelican blue is copper phthalocyanine, discovered in 1928, which was sold under the trade name monastral blue.

Like many colouring agents throughout history, this one was found by accident. In the late 1920s scientists at Scottish Dyes in Grangemouth – soon to merge with ICI – were investigating a blue impurity that appeared during the manufacture of phthalimide, which was used to synthesize another blue dye, indigo. I recommend the short film documenting the history of this investigation – not just for the splendidly absurd pre-war diction of the narrator but for tips on how to conduct a chemical analysis while stylishly smoking a pipe.

The isolated impurity was studied in detail by R. Patrick Linstead at Imperial College. In 1934 he showed that the coloured material had some structural affinity with the porphyrin at the heart of chlorophyll, being a four-ring heterocyclic organic compound with a metal atom (here copper) at its centre [1]. Linstead christened this structure phthalocyanine.

Over the next two or three years it was developed as a commercial pigment by ICI, where a chemist named Charles Enrique Dent patented a method of making monastral blue. Dent went on to have a colourful career, working on invisible ink as a consultant for the British censorship department during the war, and being fictionalized as a villain shot by James Bond in Ian Fleming’s Dr No. The juicy details are in Kristie Macrakis’s recent book Prisoners, Lovers and Spies (Yale University Press, 2014).

Phthalocyanine blue “will soon be covering the printed pages”, proclaimed one Professor Tattershall of the University of Western Australia in 1936. He was more literally correct than he might have imagined, for it was seen at once as an eye-catching but sober hue for the Pelican covers. The artists’ paint company Winsor and Newton began using phthalocyanine blue and a green chlorinated version for their paints in the late 1930s, and still makes them today.

The Scottish Dyes chemists were not quite the first to see copper phthalocyanine, for it was also synthesized by accident in 1927 by two chemists in Switzerland, Henri de Diesbach and Edmond von der Weid [2]. It pleases me that the first of these authors has essentially the same surname as the alchemist who, around 1704-5, serendipitously discovered another blue pigment in Berlin while attempting to make something else – this was Prussian blue, which, like the phthalocyanines, remains a compound of great interest to modern chemists.

Of course, the most famous accidental discovery of a colorant was William Perkin’s discovery of mauve in 1856. ICI was one of the many chemical giants that emerged in the early twentieth century on the back of the revolution in synthetic dyes spawned by Perkin’s work. One of its precursors was the Manchester dye manufacturer Levinstein Ltd, which grew out of a plant set up at Ellesmere Port to make indigo beyond the reach of German patent law. Levinstein merged with British Dyes, and the resulting company then merged with others in 1926 to become ICI. I would love to know whether ICI’s famous early logo was also printed in monastral blue; it certainly looks like it.

Sadly, the new Pelicans won’t be using this pigment for their covers and spines, however. They are going for a mix of inks that gives Pantone 3105 U, which is more turquoise than the original. But the exact colour had already mutated somewhat during the first life of Pelicans. Chemistry, I suppose, moves on.

1. R. P. Linstead, J. Chem. Soc. 1016 (1934).
2. H. de Diesbach & E. von der Weid, Helv. Chim. Acta 10, 886 (1927).

Tuesday, July 01, 2014

Why rhythm is heard best in the bass


Then there’s this, also for Nature news. It shows, I think, that those boring bass solos beloved of jazz and rock bands really are superfluous after all. As for that image: never mind the bollocks, Bootsy is still the Man.

___________________________________________________________________________

There may be a physiological reason why low-pitched instruments keep the musical beat

How come the lead guitarist gets to play the flashy solos while the bass player gets only to plod to the beat? Bassists might bemoan the injustice, but in fact their rhythmic role seems to have been determined by the physiology of hearing. According to new research, people’s perception of timing in music is more acute for lower-pitched notes.

Psychologist Laurel Trainor of McMaster University in Hamilton, Canada, and her colleagues say that their findings, published today in the Proceedings of the National Academy of Sciences USA [1], explain why in the music of many cultures the rhythm is carried by the low-pitched instruments while the melody tends to be taken by the highest pitched. This is as true for the low-pitched percussive rhythms of Indian classical music and Indonesian gamelan as it is for the walking double bass of a jazz ensemble or the left-hand part of a Mozart piano sonata.

Earlier studies have shown that people have better pitch discrimination for higher-pitched notes – one reason why saxophonists and lead guitarists often solo at a squealing register [2]. It now seems that rhythm works best at the other end of the scale.

Trainor and colleagues used the technique of electroencephalography (EEG) – electrical sensors placed on the scalp – to monitor the electrical brain signals of people listening to trains of two simultaneous piano notes, one high-pitched and the other low-pitched, equally spaced in time. Occasionally one of the two notes was played slightly earlier – by just 50 milliseconds. The researchers looked for a sign in the EEG recordings that the listeners had noticed.

This showed up as a characteristic “spike” of electrical activity produced by the auditory cortex about 120-250 milliseconds after the deviant sound reaches the ear, called a mismatch negativity (MMN). It’s a known indication that the brain senses something wrong – a kind of “huh?” response. Trainor and other coworkers used the same approach previously to detect listeners’ responses to “errors” in pitch [2].

The MMN doesn’t depend on conscious recognition of the timing error – in fact, participants were told to watch a silent movie during the tests and to pay no attention to the sounds they heard. And although Trainor says that “the timing differences are quite noticeable”, the MMN response precedes any conscious awareness of them.

The researchers found that the MMN signals were consistently larger for mis-timing of the lower note than for the higher note. They also measured the participants’ ability to adjust their finger-tapping to deviant timings of the notes, and found that it was significantly better for the lower notes.

When the researchers used a computer model to figure out how the ear responds to their test sounds, they found that the signal from the auditory nerve connected to the cochlea showed more ambiguity about the timing of an early high-pitch note than a low note. They think this suggests that the differences arise at a fairly “low-level” stage of cognitive processing.

Cognitive scientist Tecumseh Fitch of the University of Vienna says that the study “provides a very plausible hypothesis for why bass parts play such a crucial role in rhythm perception.”

“The fact that we are more sensitive to the timing of a low note compared to high note is surprising”, says cognitive musicologist Henkjan Honing of the University of Amsterdam. Nevertheless, he adds that “there are plenty of alternative interpretations” of the results.

For instance, he says, “the use of a piano tone could be contributing to the difference observed – different timbres should be used to prove it's really the pitch that causes the effect.”

And for louder, deeper bass notes than were used in these tests, says Fitch, we might also feel the resonance in our bodies, not just hear it in our ears. “I’ve heard that when deaf people dance they turn up the bass and play it very loud and can literally ‘feel the beat’ via torso-based resonance”, he says.

References
1. Hove, M. J., Marie. C., Bruce, I. C. & Trainor, L. J. Proc. Natl Acad. Sci. USA doi:10.1073/pnas.1402039111 (2014).
2. Fujioka, T., Trainor, L. J., Ross, B., Kakigi, R. & Pantev, C. J. Cogn. Neurosci. 17, 1578-1592 (2005).

How the clam got its glam


This piece was for Nature news. For more on animal photonics, look here. Congratulations, by the way, to Peter Vukusic (reference 2) for his IOP Bragg medal.

____________________________________________________________________

The bright light of the ‘disco clam’ comes from tiny reflective beads in its lips

As molluscs go, Ctenoides ales is one of the most flashy – literally. A native of the Indo-Pacific region, the creature is popularly known as the “disco clam” because the soft tissues of its “lips” flash like a disco ball. But how, and why? A new study published in the Journal of the Royal Society Interface [1] has begun to supply answers.

The clam’s flashes look like bioluminescence – emission of light – but in fact they are due entirely to reflected light. Lindsey Dougherty of the University of California at Berkeley and her coworkers have figured out how C. ales does the trick – and find that so far it’s apparently unique in nature. “We don't know of anything that is quite like the disco clam”, says Dougherty.

They show that the reflections are caused by spheres of the mineral silica – chemically like tiny grains of sand – synthesized by the organism and sequestered into one side of the mollusc’s mantle lip. The other side of the lip contains no spheres, and in contrast is highly absorbing, appearing a reddish colour.

The flashing occurs as the mollusc repeatedly rolls up and unfurls each side in concert, several times a second.

"To my knowledge, this reflectance from micro-silica in this critter, with a muscle-driven 'shutter' that creates rapid blinking, is unique," says Daniel Morse, a molecular biologist at the University of California, Santa Barbara.

Many animals create “photonic” structures that reflect and scatter light, often using orderly arrays of light-scattering objects with a size comparable to the wavelengths of visible light: several hundreds of nanometres. For example, stacked platelets of dense protein-based tissue are responsible for some of the bright colours of butterfly wings and bird feathers. Squid can manipulate the spacing of these platelets to reflect different colours.

But it’s unusual for such structures to reflect the whole visible spectrum, as in C. ales, so as to appear white. Some beetles make their white cuticle this way [2], and some butterflies have white wing scales studded with reflective beads [3].

The disco clams are also unusual in using silica as the reflector. Some marine diatoms have photonic structures in their silica exoskeletons, but these are not broadband [4], and a weevil also uses silica bead arrays [5]. Dougherty and colleagues find that the spheres have a rather narrow size distribution centred at a diameter of about 300 nm, and they calculate that this makes them near-optimal for reflecting the blue-green light (wavelengths of 400-500 nm) that predominates in the mollusc’s marine habitat.

But why does it do it? The researchers don’t yet know. They’ve found that the clams change the flashing rate in tests that mimic the looming presence of a predator, suggesting that it might function to ward off such threats. But Dougherty says that the flashing could also be a spawning signal to attract other clams, or might be a lure for the plankton on which the clams feed.

Whatever the case, it could be a trick worth learning. Other examples of “animal photonics” have inspired engineers seeking new ways to manipulate light, and C. ales might do likewise. Dougherty is particularly impressed with how well the reflectors work in low light. “There could be biomimicry potential in low-light situations or in environments that are dominated by blue-green wavelengths”, like subsea locations, she says.

References
1. Dougherty, L. F., Johnsen, S., Caldwell, R. L. & Marshall, N. J. J. R. Soc. Interface 20140407 (2014).
2. Vukusic, P., Hallam, B. & Noyes, J. Science 315, 348 (2007).
3. Stavenga, D. G., Stowe, S., Siebke, K., Zeil, J. & Arikawa, K. Proc. R. Soc. Lond. B. 271, 1577-1584 (2004).
4. Noyes, J. J. Mater. Res. 23, 3229-3235 (2008).
5. Parker, A. R., Welch, V. L., Driver, D. & Martini, N. Nature 426, 786–787 (2003).

Wednesday, June 25, 2014

Let's not turn scientists into rock stars (the reverse is OK if they're called Brian)

Well, I thought I should put up here for posterity my comment published in the Guardian online about the Breakthrough Prizes. I realise that one thing I wanted to say in the piece but didn’t have room for is that it hardly seems an apt way to combat the low profile of scientists in our celebrity culture to try to turn them into celebrities too, with all the lucre that entails.

I know I tend to gripe about the feedback on Comment is Free, but I was pleasantly surprised at the thoughtful and apposite nature of many of the comments this time. But there’s always one – this time, the guy who didn’t read beyond the first line because it showed such appalling ignorance. The problem with not reading past the first line is that you can never be sure that the second line isn’t going to say “At least, that’s what we’re told, but the reality is different…”

I have been implored to put out a shout also for the Copley Medal, for which mathematicians are eligible. It is indeed very prestigious, and is apparently the oldest extant science prize. It is also delightfully modest in its financial component.

____________________________________________________________________

The wonderful thing about science is that it’s what gets discovered that matters, not who did the discovering. As Einstein put it, “When a man after long years of searching chances on a thought which discloses something of the beauty of this mysterious universe, he should not therefore be personally celebrated. He is already sufficiently paid by his experience of seeking and finding.” At least, that’s the official line – until it comes to handing out the prizes. Then, who did what gets picked over in forensic detail, not least by some of those in the running for the award or who feel they have been overlooked in the final decision.

This is nothing to be particularly ashamed of or dismayed about. Scientists are only human, and why shouldn’t they get some reward for their efforts? But the disparity between principle and practice is raised afresh with the inaugural awarding of the Breakthrough Prizes to five mathematicians on Monday. Each of them receives $3m – more than twice the value of a Nobel Prize. With stakes like that, it’s worth asking whether prizes help or hinder science.

The Breakthrough Prizes, established by information-technology entrepreneurs Yuri Milner and Mark Zuckerberg (of Facebook fame), are to be given for work in mathematics, fundamental physics and life sciences. The maths prize is the first to be decided, and the selection of five recipients of the full $3m each is unusual: from 2015, there will be only a single prize of this amount in each category, divided among several winners if necessary.

The creators of the prizes say they want to raise the esteem for science in society. “We think scientists should be much better appreciated”, Milner has said. “They should be modern celebrities, alongside athletes and entertainers. We want young people to get more excited. Maybe they will think of choosing a scientific path as opposed to other endeavours if we collectively celebrate them more."

He has a point – many people could reel off scores of Hollywood and sports stars, but would struggle to name any living physicist besides Stephen Hawking. But the idea that huge cash prizes might attract young people to science seems odd – can there be a single mathematician, in particular, who has chosen their career in the hope that they will get rich and famous? (And if there is, didn’t they study probability theory?)

Yet the curious thing is that maths is hardly starved of prizes already. The four-yearly Fields Medal (about $13,800) and the annual Abel Prize (about $1m) are both commonly described as the “maths Nobels”. In 2000 the privately funded Clay Mathematics Institute announced the Millennium Prizes, which offered $1m to anyone who could solve one of seven problems deemed to be among the most difficult in the subject.

Even researchers have mixed feelings. When Grigori Perelman solved one of the Millennium Problems, the Poincaré conjecture, in 2003, he refused the prize, apparently because he felt it should have recognized the work of another colleague too. Perelman fits the stereotype of the unworldly mathematician who rejects fame and fortune, but he’s not alone in feeling uneasy about the growth of an immensely lucrative “cash prize culture” in maths and science.

One of the concerns is that prizes distort the picture of how science is done, suggesting that it relies on sudden, lone breakthroughs by Great Women and (more usually) Men. Science today is often conducted by vast teams, exemplified by the international effort to find the Higgs boson at CERN, and even though the number of co-laureates for Nobel prizes has been steadily increasing, its arbitrary limit of three no longer accommodates this.

Although the maths Breakthrough prizewinners include some relatively young researchers, and the Fields Medal rewards those under 40, many prizes are seen as something you collect at the end of a career – by which time top researchers are showered with other awards already. Like literary awards, they can become the focus of unhealthy obsession. I have met several great scientists whose thirst for a Nobel is palpable, along with others whose paranoia and jealousies are not assuaged by winning one. Most troublingly, a Nobel in particular can transform a good scientist into an alleged font of all wisdom, giving some individuals platforms from which to pontificate on subjects they are ill equipped to address, from the origin of AIDS to religion and mysticism. The truth is, of course, that winners of big prizes are simply a representative cross-section: some are delightfully humble, modest and wise, others have been arrogant bullies, nutty, or Nazi.

And prizes aren’t won for good work, but for good work that fits the brief – or perhaps the fashion. Geologists will never get a Nobel, and it seems chemists and engineers will never get a Breakthrough prize.

Yet for all their faults, it’s right that scientists win prizes. We should celebrate achievement in this area just as in any other. But I do wonder why any prize needs to be worth $3m – it’s not surprising that the Breakthrough Prizes have been accused of trying to outbid the Nobels. Even some Nobelists will admit that a few thousand is always welcome but a million is a burden. A little more proportion, please.

Tuesday, June 24, 2014

The tinkling of the aristocracy

Fashions change. I just learnt from Louise Levathes’ book When China Ruled the Seas that upper-class men in fifteenth-century Siam made a tinkling noise when they moved. Why? Because at age twenty, they “had a dozen tin or gold beads partly filled with sand inserted into their scrotums.” According to the Chinese translator Ma Huan, this looked like “a cluster of grapes”, which Ma found “curious” but the Siamese considered “beautiful”. Were this to catch on again among the upper classes here, David Cameron’s Cabinet might sound rather more delightful, as long as they kept their mouths shut.

Thursday, June 19, 2014

What your sewage tells about you

Here’s my latest piece for BBC Future, pre-editing. I was going to illustrate it with that scene from Trainspotting, but I feared no one would have the stomach to read on.

_________________________________________________________________

“They found him in the toilet covered in white powder and frantically trying to dispose of cocaine by emptying it down the toilet.” But it’s not just during a drugs bust, as in this report of an arrest in 2012, that illicit substances go down the pan. Many drugs break down rather slowly in the body, and so they are a pervasive contaminant in human wastewater systems. The quantities are big enough to raise concerns about effects on ecosystems – but they can also offer a way to monitor the average levels of drug use in communities.

“Sewage epidemiologist” does not, it has to be said, sound like the kind of post that will bring applications flooding in. But it is a rapidly growing field. And one of the primary goals is to figure out how levels of drug use obtained by more conventional means, such as questionnaires and crime statistics, tally with direct evidence from what gets into the water. Over the past six years or so, sewage epidemiology has been shown to agree rather well with these other approaches to quantifying drug abuse: the amounts of substances such as cocaine and amphetamines in wastewater in Europe and the USA more or less reflect estimates of their use deduced by other means.

A new study, however, shows that the figures don’t always match, and that some previous studies of illicit drugs in sewage might have under-estimated their usage. To appreciate why, there’s no option but to dive in, in the manner of the notorious toilet scene from the movie Trainspotting, and embrace the grimy truth: you might not discover as much by looking at drugs carried in urine and dissolved in water as you will by studying the faecal residues in suspended particles and sewage sludge, since some drugs tend to stick more readily to the solids.

While a few studies have looked at illicit drugs in sewage solids in Europe, Bikram Subedi and Kurunthachalam Kannan of the Wadsworth Center of New York State’s Department of Health in Albany have conducted what seems to be the first such study in the USA. They took samples of wastewater and sludge from two sewage treatment plants handling the wastes of many thousands of people in the Albany area, and carried out chemical analysis to search for drug-related compounds. They looked not only for the drugs themselves – such as cocaine, amphetamine, morphine (the active component of heroin) and the hallucinogen methylenedioxyamphetamine (a designer drug known as “Sally” or “Sass”) – but also for some of their common ‘metabolites’, the related compounds into which they can be transformed in the body. The two researchers also measured amounts of common compounds such as nicotine and caffeine, which act as chemical markers of human excretion and so can serve to indicate the total number of, shall we say, contributors to the sewage.

To measure how much of these substances the samples contain, Subedi and Kannan used the technique of electrospray mass spectrometry. This involves converting the molecules into electrically charged ions by sticking hydrogen ions onto them, or knocking such ions off, and then accelerating them through an electromagnetic field to strike a detector. The field deflects the ions from a straight-line course, but the more massive they are the less they are deflected. So the molecules are separated out into a “spectrum” of different masses. For fairly large molecules like cocaine, with a molecular mass of 303, there’s pretty much only one common way atoms such as carbon, oxygen, nitrogen and hydrogen can be assembled to give this mass (formula C17H21NO4). So you can be confident that a spike in the spectrum at mass 303 is due to cocaine.
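By way of a back-of-the-envelope check (mine, not Subedi and Kannan’s), here are a few lines of Python that simply sum standard atomic masses for the formula C17H21NO4 and confirm that it does indeed land at about 303:

```python
# Minimal sketch: summing standard atomic masses to check that cocaine's
# formula, C17H21NO4, gives a molecular mass of roughly 303.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def molecular_mass(formula):
    """formula is a dict of element -> atom count, e.g. {"C": 17, "H": 21, ...}."""
    return sum(ATOMIC_MASS[element] * count for element, count in formula.items())

cocaine = {"C": 17, "H": 21, "N": 1, "O": 4}
print(round(molecular_mass(cocaine), 1))  # ~303.4
```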

From these measurements, the researchers estimated a per capita consumption of cocaine in the Albany area four times higher than that found in an earlier study of wastewater in the US, and a level of amphetamine abuse about six times that in the same previous study, as well as 3-27 times that reported for Spain, Italy and the UK. It’s still early days, but this suggests that sewage epidemiology would benefit from getting to grips with the solids.

Subedi and Kannan could also figure out how good the sewage treatment was at removing these ingredients from the water. That varied widely for different substances: about 99 percent of cocaine was removed, but only about 4 percent of the pharmacologically active cocaine metabolite norcocaine. A few drugs, such as methadone, showed apparently “negative removal” – the wastewater treatment was actually converting some related compounds into the drug. The researchers admit that no one really knows yet what effects these illicit substances will have when they reach natural ecosystems – but there’s increasing concern about their possible consequences. It looks as though we might need to start thinking about the possibility of “passive drug abuse” – if not in humans, then at least in the wild.
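To make the arithmetic of “removal” concrete – this is a generic illustration with invented numbers, not figures from the paper – removal is usually expressed as the percentage drop in a compound’s concentration between the water entering the plant and the water leaving it, and it goes negative whenever treatment generates more of a compound than it destroys:

```python
# Sketch with made-up numbers: "removal" as the percentage drop between
# influent and effluent concentrations. If treatment creates the compound
# (say, by converting a related metabolite), the figure comes out negative.
def percent_removal(influent, effluent):
    return 100.0 * (influent - effluent) / influent

print(percent_removal(100.0, 1.0))    # ~99%: almost all removed
print(percent_removal(100.0, 96.0))   # ~4%: barely touched
print(percent_removal(100.0, 120.0))  # -20%: "negative removal"
```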

Reference: B. Subedi & K. Kannan, Environmental Science & Technology online publication dx.doi.org/10.1021/es501709a (2014).

Tuesday, June 17, 2014

Curiosity: the view from the Vatican

I don’t think of myself as a pope-basher. But there are times when one can’t help being flabbergasted at what the Vatican is capable of saying and doing. That’s how I feel on discovering this report of a speech by Pope Francis from last November.

From a supposedly progressive pope, these comments plunge us straight back to the early Middle Ages. This in particular: “In the Gospel, the Pope underlined, we find ourselves before another spirit, contrary to the wisdom of God: ‘the spirit of curiosity’.”

“The spirit of curiosity”, Francis goes on, “distances us from the Spirit of wisdom because all that interests us is the details, the news, the little stories of the day. Oh, how will this come about? It is the how: it is the spirit of the how! And the spirit of curiosity is not a good spirit. It is the spirit of dispersion, of distancing oneself from God, the spirit of talking too much. And Jesus also tells us something interesting: this spirit of curiosity, which is worldly, leads us to confusion.”

Oh, this passion for asking questions! The pope is, of course, scripturally quite correct, for as it says in Ecclesiasticus,
“Do not pry into things too hard for you
Or examine what is beyond your reach…
What the Lord keeps secret is no concern of yours;
do not busy yourself with matters that are beyond you.”

All of which tells us to respect our proper place. But if you want to take this to a more judgemental and condemnatory level, then of course we need to turn to St Augustine. Curiosity is in his view a ‘disease’, one of the vices or lusts at the root of all sin. “It is in divine language called the lust of the eyes”, he wrote in his Confessions. “From the same motive, men proceed to investigate the workings of nature, which is beyond our ken – things which it does no good to know and which men only want to know for the sake of knowing.”

I don’t see much distance between this and the words of Pope Francis. However, let’s at least recognize that this is not a specifically Christian thing. The pope seems to be wanting to return the notion of curiosity to the old sense in which the Greeks used it: a distraction, an idle and aimless seeking after novelty. That is what Aristotle meant by periergia, which is generally taken to be the cognate of the Latin curiositas. Plutarch considers curiositas the vice of those given to snooping and prying into the affairs of others – the kind of busybody known in Greek as a polypragmon. In Aristotle’s view, this kind of impulse had no useful role in philosophy. That’s why the medieval Aristotelians tried to make a distinction between curiosity and genuine enquiry. It’s fine to seek knowledge, said Thomas Aquinas and Albertus Magnus – but as the latter wrote,
"Curiosity is the investigation of matters which have nothing to do with the thing being investigated or which have no significance for us; prudence, on the other hand, relates only to those investigations that pertain to the thing or to us."

This is how these folks tried to carve out a space for natural philosophy, within which science eventually grew until theology no longer seemed like a good alternative for understanding the physical world.

Should we, then, give Pope Francis the benefit of the doubt and conclude that he wasn’t talking about the curiosity that drives so much (if not all) of science, but rather, the curiosity that Augustine felt led to a fascination with the strange and perverse, with “mangled corpses, magical effects and marvellous spectacles”? After all, the pope seems to echo that sentiment: “The Kingdom of God is among us: do not seek strange things, do not seek novelties with this worldly curiosity.”

Oh yes, we could allow him that excuse. But I don’t think we should. How many people, on hearing “curiosity” today, think of Augustine’s “mangled corpses” or Aristotle’s witless periergia? The meaning of words is what people understand by them. For a schoolchild commended for their curiosity, the words of the pope will carry the opposite message: be quiet, don’t ask, seek not knowledge but only God. This seems to me to be verging on a wicked thing to say.

But worse: the idea of a hierarchy of questions in which some are too trivial, too aimless or ill-motivated, is precisely what needed to be overcome before science could flourish. Early modern science was distinguished from the admirable natural philosophy of Aquinas and Albertus Magnus, of Roger Bacon and Robert Grosseteste, by the fact that no question was any longer irrelevant or irreverent. One could investigate the gnat’s leg, or a smudge on Jupiter, or optical phenomena in pieces of mica. That wasn’t yet in itself science, but the liberation of curiosity was the necessary precondition for it. In which case, Pope Francis’s message is profoundly anti-progressive and anti-intellectual.

Saturday, June 07, 2014

Still on the trail of cursive's benefits

My interest in the merits (or not) of cursive writing prompted me to follow up Ed Yong’s recent tweet about an article on handwriting in the New York Times. It is by Maria Konnikova, and it is interesting. But I am particularly struck by this paragraph:
“Dr. Berninger goes so far as to suggest that cursive writing may train self-control ability in a way that other modes of writing do not, and some researchers argue that it may even be a path to treating dyslexia. A 2012 review suggests that cursive may be particularly effective for individuals with developmental dysgraphia — motor-control difficulties in forming letters — and that it may aid in preventing the reversal and inversion of letters.”

Here at least seems to be a concrete claim of the supposed cognitive benefits of cursive, in comparison to print handwriting. OK, so Dr Berninger’s claim is totally unsubstantiated here, and I’ll have to live with that. And this claim that cursive might be particularly useful for working with dyslexia and dysgraphia is one I’ve heard previously and seems plausible – it could perhaps offer a valid reason to teach cursive handwriting in preference to manuscript from the outset (not sequentially). But I’d like to see what that 2012 review says about this. So I follow the link.

It leads me to what appears to be a book chapter: “The contribution of handwriting and spelling remediation to overcoming dyslexia”, by Diane Montgomery. She is reporting on a study that used an approach called the Cognitive Process Strategies for Spelling (CPSS) to try to help pupils with identified spelling difficulties, who were in general diagnosed as dyslexics. This method involves, among many other things, teaching these children cursive alone. So Montgomery’s work in itself doesn’t offer any evidence for the superiority of cursive over other handwriting styles in this context – cursive is just accepted here as a ‘given’.

But she does finally explain why cursive is a part of CPSS, in a section titled “Why cursive in remedial work is important”. Here the author claims that “Experiments in teaching cursive from the outset have taken place in a number of LEAs [local education authorities] and have proved highly successful in achieving writing targets earlier and for a larger number of children.” Aha. And the evidence? Two studies are cited, one from 1990, one from 1991. One is apparently a local study in Kingston-upon-Thames. Both are in journals that are extremely hard to access – even the British Library doesn’t seem to keep them. So now I’m losing the trail... But let’s remind ourselves where we are on it. The CPSS method for helping dyslexic children uses cursive because advantages for it have been claimed in some studies almost 25 years ago on non-dyslexic cohorts.

Onward. Montgomery says that other dyslexia programmes base their remediation on cursive. She lists the reasons why that is so, but none of the claims (e.g. “spaces between letters and between words is orderly and automatic”) is backed up with citations showing that these actually confer advantages.

There is, however, one such documented claim in her piece. “Ziviani and Watson-Will (1998) found that cursive script appeared to facilitate writing speed.” Now that’s interesting – this is of course the claim made by many people when they defend cursive, so I was delighted to find an assertion that there’s real evidence for it. Well, I could at least get hold of this paper, and so I checked it out. And you know what? It doesn’t show that at all. This statement is totally, shockingly false. Ziviani and Watson-Will were interested in the effects of the introduction of a new cursive style into Australian schools, replacing “the previous print and cursive styles”. How well do the children taught this way fare in terms of speed and legibility? The authors don’t actually conduct tests that compare a cohort trained the old way with one trained the new way. They are just concerned with how, for those trained the new way, speed affects legibility. So it’s a slightly odd study that doesn’t really address the question it poses at the outset. What it does do is to show that there is a weak inverse correlation between speed and legibility for the kids who learnt the new cursive style. Not at all surprising, of course – but there is not the slightest indication in this paper that cursive (of any kind) improves speed relative to manuscript/print style (of any kind).

There’s another relevant reference in Montgomery’s paper that I can get. She says “The research of Early (1976) advocated the exclusive use of cursive from the beginning.” Hmm, I wonder why? So I look it up. It compares two groups of children from two different American schools, one with 21 pupils, the other with 27. One of them was taught cursive from the outset, the other was taught the traditional way of manuscript first and then cursive. The results suggested, weakly, that exclusive teaching of cursive produced fewer letter reversals (say, b/d) and fewer transpositions (say, “first/frist”). But the authors acknowledged that the sample size was tiny (and no doubt they were mindful also that the experimental and control groups were not “identically prepared”). As a result, they said, “We in no way wish to offer the present data as documenting proof of the superiority of cursive over manuscript writing.” Would you have got that impression from Montgomery?

So now I’m really wondering what I’d find in those elusive 1990/1991 studies. At this point it doesn’t look good.

What, then, is going on here? Montgomery says that “custom and practice or ‘teaching wisdom’ is very hard to change and extremely rigid attitudes are frequently found against cursive.” I agree with the first point entirely – but in my experience so far, the rigid attitudes are in favour of cursive. And on the evidence here, advocacy for cursive seems to be made more on the basis of an existing conviction than out of respect for the evidence.

Ironically perhaps, I suspect that Early is nevertheless right. For most children, it won’t make an awful lot of difference whether they are taught cursive or manuscript – but they will find writing a fair bit easier at first if they are taught only one or the other. There does seem to be some slight indication that cursive might help with some particular spelling/writing problems, such as letter reversals and transpositions, though I’d like to see far better evidence for that. In that case, one could argue that the balance tips slightly in favour of cursive, simply for the sake of children with dyslexia and other dysgraphic problems. And I have the impression that in this case, a cursive-like italic style might be the best, rather than anything too loopy.

But if that were to be done, it would be good to be clear about the reasons. We are not saying that there’s anything in it for normal learners. And we really must drop the pathetic, patronising and pernicious habit of telling children that cursive is “grown-up” writing, infantilizing those who find it hard. If they learnt it from the outset, they would understand that it is just a way of writing – nothing more or less.

There is clearly still a lot of mythology, and propagating of misinformation, in this area. Given its importance to educational development, that’s troubling.

Wednesday, June 04, 2014

Programmable matter kicks off

Here's how my recent article for IEEE Spectrum started off, with some more references, info and links.

________________________________________________________

If science is to reach beyond a myopic fixation on incremental advances, it may need bold and visionary dreams that border on myth-making. There are plenty of those in the field called programmable matter, which aims to blend micro- and nanotechnology, robotics and computing to produce substances that change shape, appearance and function at our whim.

The dream is arrestingly illustrated in a video produced by a team at Carnegie Mellon University in Pittsburgh. Executives sit around a table watching a sharp-suited sales rep make his pitch. From a vat of grey gloop he pulls a perfectly rendered model of a sports car, and proceeds to reshape it with his fingers. With gestures derived from touchscreen technology, he raises or flattens the car’s profile and adjusts the width of the headlamps. Then he changes the car from silver-grey to red, the “atoms” twinkling in close-up with Disney-movie magic as their color shifts.

This kind of total mastery over matter is not so different from the alchemist’s dream of transmuting metals, or in contemporary terms, the biologist’s dream of making life infinitely malleable through synthetic biology. But does the fantasy – it’s little more at present – bear any relation to what can be done?

Because of its affiliation with robotic engineering and computer science, the idea of programmable matter is often attributed to a paper published in 1991 by computer scientists Tommaso Toffoli and Norman Margolus of the Massachusetts Institute of Technology, who speculated about a collection of tiny computing objects that could sense their neighbors and rearrange themselves rather like cellular automata [1]. But related ideas were developed independently in the early 1990s by the chemistry Nobel laureate Jean-Marie Lehn, who argued that chemistry would become an information science by using the principles of spontaneous self-assembly and self-organization to design molecules that would assemble themselves from the bottom up into complex structures [2]. Lehn’s notion of “informed matter” was really nothing less than programmable matter at the atomic and molecular scale.
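As a toy illustration of that cellular-automaton idea – this is a generic one-dimensional automaton, not the Toffoli–Margolus model itself – each cell below updates its state using nothing but its own state and those of its two immediate neighbours, yet global patterns emerge:

```python
# Toy one-dimensional cellular automaton (Wolfram's rule 110): every cell
# updates purely from its local neighbourhood, the essence of the
# "programmable matter" picture of many tiny units sensing their neighbours.
def step(cells, rule=110):
    n = len(cells)
    new = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (centre << 1) | right   # neighbourhood as a 3-bit number
        new.append((rule >> index) & 1)               # look up the new state in the rule
    return new

cells = [0] * 30 + [1] + [0] * 30
for _ in range(20):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```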

Lehn’s own work since the 1960s helped to show that the shapes and chemical structures of molecules could predispose them to unite into large-scale organized arrays that could adapt to their circumstances, for example responding to external signals or having self-healing abilities. Such supramolecular (“beyond the molecule”) self-assembly enables biomolecules to become living cells, which need no external instructions to reconfigure themselves because their components already encode algorithms for doing so. In some ways, then, living organisms already embody the aspirations of programmable matter.

Yet in the information age, it is we who do the programming. While living cells are often said (a little simplistically) to be dancing to the evolutionarily shaped program coded in their genomes, technologies demand that we bring matter under our own direct control. It’s one thing to design molecules that assemble themselves, quite another to design systems made from components that will reconfigure or disassemble at the push of a button. The increasingly haptic character of information technology’s interfaces encourages a vision of programmable matter that is responsive, tactile, even sensual.

Many areas of science and technology have come together to enable this vision. Lehn’s supramolecular chemistry is one, and nanotechnology – its extreme miniaturization and interest in “bottom-up” self-organizing processes – is another. Macroscopic robotic engineering has largely abandoned the old idea of humanoid robots, and is exploring machines that can change shape and composition according to the task in hand [3]. To coordinate large numbers of robotic or information-processing devices, centralized control can be cumbersome and fragile; instead, distributed computing and swarm robotics rely on the ability of many interacting systems to find their own modes of coordination and organization [4]. Interacting organisms such as bacteria and ants provide an “existence proof” that such coordination is sometimes best achieved through this kind of collective self-organization. Understanding such emergent behaviour is one of the central themes in the science of complex systems, which hopes to harness it to achieve robustness, adaptability and a capacity for learning.

Meanwhile, thanks to the shrinking of power sources and the development of cheap, wireless radio-frequency communications for labelling everything from consumer goods to animals for ecological studies, robotic devices can talk to one another even at very small scales. And making devices that can be moved and controlled without delicate and error-prone moving parts has benefitted immensely from the development of smart materials that can respond to their environment and to external stimuli by, for example, changing their shape, color or electrical conductivity.

In short, the ideas and technologies needed for programmable matter are already here. So what can we do with them?

Seth Goldstein and his team at Carnegie Mellon, in collaboration with Intel Research Pittsburgh, were among the first to explore the idea seriously. “I’ve always had an interest in parallel and distributed systems”, says Goldstein. “I had been working in the area of molecular electronics, and one of the things that drew me into the field was a molecule called a rotaxane that, when subjected to an electric field, would change shape and as a result change its conductivity. In other words, changing the shape of matter was a way of programming a system. I got to thinking about what we could do if we reversed the process: to use programming to change the shape of matter.”

The Carnegie Mellon group envisions a kind of three-dimensional, material equivalent of sound and visual reproduction technologies, in which millions of co-operating robot modules, each perhaps the size of a dust grain, will mimic any other object in terms of shape, movement, visual appearance, and tactile qualities. Ultimately these smart particles – a technology they call Claytronics [5] – will produce a “synthetic reality” that you can touch and experience without any fancy goggles or gloves. From a Claytronics gloop you might summon a coffee cup, a spanner, a scalpel.

“Any form of programmable matter which can pass the ‘Turing test’ for appearance [looking indistinguishable from the real thing] will enable an entire new way of thinking about the world”, says Goldstein. “Applications like injectable surgical instruments, morphable cellphones, 3D interactive life-size TV and so on are just the tip of the iceberg.”

Goldstein and colleagues call the components of this stuff “catoms” – Claytronic atoms, which are in effect tiny spherical robots that move, stick together, communicate and compute their own location in relation to others. Each catom would be equipped with sensors, color-change capability, computation and locomotive agency. That sounds like a tall order, especially if you’re making millions of them, but Goldstein and colleagues think it should be achievable by stripping the requirements down to the bare basics.

The prototype catoms made by the Pittsburgh researchers since the early 2000s were a modest approximation to this ambitious goal: squat cylinders about 44 mm across, their edges lined with rows of electromagnets that allow them to adhere in two-dimensional patterns. By turning the magnets on and off, one catom could ‘crawl’ across another. Using high-resolution photolithography, the Carnegie Mellon team has now managed to shrink the cylindrical catoms to the sub-millimetre scale, while retaining the functions of power transfer, communication and adhesion. These tiny catoms can’t yet move, but they will soon, Goldstein promises.


Prototype catoms

Electromagnetic coupling might ultimately not be the best way to stick them, however, because it drains power even when the devices are static. Goldstein and colleagues have explored the idea of making sticky patches from carpets of nanofibers, like those on a gecko’s foot, that adhere due to intermolecular forces. But at present Goldstein favors electrostatics as the best force for adhesion and movement. Ideally the catoms will be powered by harvesting energy from the environment – drawing it from an ambient electric field, say – rather than carrying on-board power supplies.

One of the big challenges is figuring out where each catom has to go in order to make the target object. “The key challenge is not in manufacturing the circuits, but in being able to program the massively distributed system that will result from putting all the units together into an ensemble”, says Goldstein. Rather than drawing up a global blueprint, the researchers hope that purely by using local rules, where each catom simply senses the positions of its near neighbors, the ensemble can find the right shape. Living organisms seem to work this way: the single-celled slime mold Dictyostelium discoideum, for example, aggregates under duress into a mushroom-shaped multicellular body without any ‘brain’ to plan it. This strategy means the catoms must communicate with one another. The Carnegie Mellon researchers plan to explore both wireless technologies for remote control, and electrostatic interactions for nearest-neighbour sensing.

To be practical, this repositioning needs to be fast. Goldstein and colleagues think that an efficient way to produce shape changes might be to fill the initial catom “blob” with little voids, and then shift them around to achieve the right contours. Small local movements of adjacent catoms are all that’s needed to move holes through the medium, and if they reach the surface and are expelled like bubbles, the overall volume shrinks. Similarly, the material can be expanded by opening up new bubbles at the surface and engulfing them.
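
For readers who like to see such things spelled out, here is a minimal Python sketch of that hole-shifting idea – a toy lattice model of my own devising, not the Claytronics software: catoms sit on grid sites, a hole swaps places with a neighbouring catom at each step, and a hole that reaches the outside is expelled, shrinking the ensemble without creating or destroying a single catom.

```python
# Toy sketch of the "moving bubbles" idea (my own illustration, not the
# Claytronics code): catoms occupy lattice sites, holes are empty interior
# sites. A hole moves by swapping with an adjacent catom; once it touches
# the outside it is expelled, so the footprint shrinks while the number of
# catoms stays fixed.
import random

catoms = {(r, c) for r in range(9) for c in range(9)}   # a 9x9 square blob
holes = {(4, 4)}                                        # one internal void
catoms -= holes

def neighbours(cell):
    r, c = cell
    return [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]

def step():
    for hole in list(holes):
        movable = [n for n in neighbours(hole) if n in catoms]
        if not movable:
            continue
        target = random.choice(movable)
        catoms.remove(target)
        catoms.add(hole)        # the neighbouring catom slides into the hole
        holes.remove(hole)
        # if the hole now borders the outside, it pops like a bubble
        if any(n not in catoms and n not in holes for n in neighbours(target)):
            continue            # hole gone; the footprint has shrunk by one site
        holes.add(target)

for _ in range(50):
    step()
print(len(catoms), "catoms and", len(holes), "holes left")
```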

At MIT, computer scientist Daniela Rus and her collaborators have a different view of how smart, sticky ‘grains’ could be formed into an object. Their “smart sand” would be a heap of such grains that, by means of remote messages and magnetic linkages, will stick selectively together so that the target object emerges like a sculpture from a block of stone. The unused grains just fall away. Like Goldstein, Rus and her colleague Kyle Gilpin have so far explored prototypes on a larger scale and in two dimensions, making little units the size of sugar cubes with built-in microprocessors and electromagnets on four faces. These units can communicate with each other to duplicate a shape inserted into the 2D array. The smart grains that border the master shape recognize that they are at the edge, and send signals to others to replicate this pixellated mould and the object that lies within it [6].
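
Here is a rough sketch of that subtractive logic – again just a toy, in which the grid layout, function names and offsets are my own assumptions rather than anything taken from the MIT firmware: grains touching the buried master shape identify themselves as the ‘mould’, a copy of the shape is marked out elsewhere in the array, and everything not in the copy would simply release its latches and fall away.

```python
# Rough sketch (assumed data layout, not the MIT smart-sand firmware) of the
# subtractive idea: find the grains that border the master shape (the mould),
# then mark the cells a second cluster should keep latched to copy the shape.
master = {(1, 1), (1, 2), (2, 1), (2, 2), (3, 2)}        # cells of the buried shape

def boundary(shape):
    """Grains that touch the shape on at least one side: the 'mould'."""
    edge = set()
    for r, c in shape:
        for n in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if n not in shape:
                edge.add(n)
    return edge

def duplicate(shape, offset):
    """Cells a second grain cluster should keep latched to copy the shape."""
    dr, dc = offset
    return {(r + dr, c + dc) for r, c in shape}

mould = boundary(master)
copy_cells = duplicate(master, offset=(0, 10))   # replica built ten columns away
# In hardware, grains outside `copy_cells` (and the original) would release
# their magnetic latches and fall away, leaving the duplicated object behind.
```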

Rus and her collaborators have hit on an ingenious way to make these ‘grains’ move. They have made larger cubes called M-blocks, several centimeters on each side, which use the momentum of flywheels spinning at up to 20,000 r.p.m. to roll, climb over each other and even leap through the air [7]. When they come into contact, the blocks can be magnetically attached to assemble into arbitrary shapes – at present determined by the experimenters, although their plan is to develop algorithms that let the cubes themselves decide where they need to go.
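
A back-of-the-envelope calculation shows why braking a fast flywheel can throw a cube around. Only the 20,000 r.p.m. figure comes from the MIT work; the mass, radius and braking time below are illustrative guesses.

```python
# Back-of-envelope estimate (all numbers assumed except the spin rate quoted
# in the article) of the angular momentum an M-block-style flywheel stores
# and the torque a sudden brake would deliver to the cube.
import math

mass = 0.05            # kg, assumed flywheel mass
radius = 0.02          # m, assumed flywheel radius
rpm = 20_000           # spin rate quoted in the article
brake_time = 0.01      # s, assumed time for the brake to stop the wheel

inertia = 0.5 * mass * radius**2                 # solid disc: I = (1/2) m r^2
omega = rpm * 2 * math.pi / 60                   # rad/s
angular_momentum = inertia * omega               # kg m^2 / s
torque = angular_momentum / brake_time           # N m transferred to the cube

print(f"L = {angular_momentum:.4f} kg m^2/s, braking torque ~ {torque:.2f} N m")
```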


M-blocks in action

Programmable matter doesn’t have to be made from an army of hard little units. Hod Lipson at Cornell University and his colleagues think that it should be possible to create “soft robots” that can be moulded into arbitrary shapes from flexible smart materials that change their form in response to external signals.

“Soft robotics” is already well established. Shape-memory alloys, which bend or flex when heated or cooled, can provide the ‘muscle’ within the soft padding of a silicone body [8], for example, and polymeric objects can be made to change shape by inflating pneumatic compartments [9]. What made the soft robot designed by Lipson and his colleague Jonathan Hiller particularly neat was that the actuation didn’t require a specific signal, but was built into the structure itself. They used evolutionary computer algorithms to figure out how to arrange tiny blocks of silicone foam rubber so that raising and reducing the air pressure caused the rubber to contract and expand in a way that made the weirdly-shaped assembly crawl across a surface [10].

Lipson and his coworkers have also devised algorithms that can mutate and optimize standardized components such as rods and actuators to perform particular tasks, and have coupled this design process to a 3D printer that fabricates the actual physical components, resulting in “machines that make machines.” They have been able to print not just parts but also power sources such as batteries, and Lipson says that his ultimate goal is to make robots that can “walk out of the printer”.

These are top-down approaches to programmable matter, emerging from existing developments in robotic technology. But there are alternatives that start from the bottom up: from nanoscale particles, or even molecules. For example, currently there is intense research on the behavior of so-called self-propelled or “living” colloids: particles perhaps a hundred or so nanometers across that have their own means of propulsion, such as gas released by chemical reactions at their surface. These particles can show complex self-organized behavior, such as crystalline patterns that form, break and explode [11]. Controlling the resulting arrangements is another matter, but researchers have shown they can at least move and control individual nanoparticles using radiofrequency waves and magnetic fields. This has permitted wireless “remote control” of processes in living cells, such as the pairing of DNA strands [12], the triggering of nerve signals [13], and the control of insulin release in mice [14].

Nature programs its cellular matter partly by the instructions inherited in the DNA of the genome. But by exploiting the same chemical language of the genes – two DNA strands will pair up efficiently into a double helix only if their base-pair sequences are complementary – researchers have been able to make DNA itself a kind of programmable material, designed to assemble into specific shapes and patterns. In this way they have woven complex nanoscale DNA shapes such as boxes with switchable lids [15], nanoscale alphabetic letters [16] and even world maps [17]. By supplying and removing ‘fuel strands’ that drive strand pairing and unpairing, it is possible to make molecular-scale machines that move, such as DNA ‘walkers’ that stride along guide strands [18]. Eventually such DNA systems might be given the ability to replicate and evolve.
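
The ‘programming language’ here is nothing more than Watson–Crick complementarity, which is simple enough to spell out in a few lines of toy code. The strand sequences below are invented, and real design tools also take account of binding thermodynamics.

```python
# Toy illustration of the Watson-Crick "programming language": two strands
# hybridize only if each base pairs with its complement (A-T, G-C) when one
# strand is read in reverse. This is the basic design rule behind DNA
# self-assembly; real design software adds thermodynamics on top.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(strand: str) -> str:
    return "".join(COMPLEMENT[base] for base in reversed(strand))

def will_pair(strand_a: str, strand_b: str) -> bool:
    """True if the two strands are perfect Watson-Crick partners."""
    return strand_b == reverse_complement(strand_a)

staple = "ATGCGT"
print(will_pair(staple, "ACGCAT"))   # True: designed to bind
print(will_pair(staple, "ACGGAT"))   # False: one mismatched base
```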


DNA origami

In ways like this, programmable matter seems likely to grow from the very small, as well as shrinking from robots the size of dimes. Goldstein says the basic idea can be applied to the building blocks of matter over all scales, from atoms and cells to house bricks. It’s almost a philosophy: a determination to make matter more intelligent, more obedient, more sensitive – in some respects, more alive.

________________________________________________________

Box: What might go wrong?

Isn’t there something a little sinister about this idea of matter that morphs and even mutates? What will the sculptors make? Can they be sure they can control this stuff? Here our fears of “animated matter” are surely shaped by old myths like that of the Jewish golem, a being fashioned from clay that threatened to overwhelm its creator.

The malevolence of matter that is infinitely protean is evident in imagery from popular culture, such as the “liquid robot” T-1000 of Terminator II. The prospect of creating programmable matter this sophisticated remains so remote, though, that such dangers can’t be meaningfully assessed. But in any event, Goldstein insists that “there’s no grey goo scenario here”, referring to a term nanotechnology pioneer Eric Drexler coined in his 1986 book Engines of Creation.

Drexler speculated about the possibility of self-replicating nanobots that would increase exponentially in number as they consumed the raw materials around them. This sparked some early fears that out-of-control nanotechnology could turn the world into a giant mass of self-replicating gray sludge—a theme that appeared repeatedly in later works of science fiction, including Will McCarthy’s 1998 novel Bloom, Michael Crichton’s 2002 thriller Prey, and even in tongue-in-cheek fashion in a 2011 episode of Futurama.

But the real dangers may be ones associated more generically with pervasive computing, especially when it works over Wi-Fi. What if such a system were hacked? It is one thing to have online data manipulated in this way; when the computing substrate is tangible stuff that reconfigures itself, hackers will gain enormous leverage for creating havoc.

Goldstein thinks, however, that some of the more serious problems might ultimately be more sociological in nature. Programmable matter is sure to be rather expensive, at least initially, and so the capabilities it offers might only widen the gap between those with access to new technology and those without. What’s more, innovations like this, as with today’s pervasive factory automation, threaten to render jobs in manufacturing and transport obsolete. So they will make more people unemployable, not because they lack the skills but because there will be nothing for them to do.

Of course, powerful new capabilities always carry the potential for abuse. You can see hints of that already in, say, the use of swarm robotics for surveillance, or in the reconfigurable robots that are being designed for warfare. Expect the dangers of programmable matter to be much like those of the Internet: when just about everything is possible, not all of what goes on will be good.

References
1. T. Toffoli & N. Margolus, Physica D 47, 263–272 (1991).
2. J.-M. Lehn, Supramolecular Chemistry (Wiley-VCH, Weinheim, 1994).
3. K. Gilpin & D. Rus, IEEE Robotics & Automation Magazine 17(3), 38-55 (2010).
4. J. C. Barca & Y. A. Sekercioglu, Robotica 31, 345-359 (2012).
5. S. C. Goldstein, J. D. Campbell & T. C. Mowry, Computer 38, 99-101 (May 2005).
6. K. Gilpin, A. Knaian & D. Rus, IEEE Int. Conf. on Robotics and Automation, 2485-2492 (2010).
7. J. Romanishin, K. Gilpin & D. Rus, abstract, IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (November 2013).
8. B. A. Trimmer, A. E. Takesian, B. M. Sweet, C. B. Rogers, D. C. Hake & D. J. Rogers, Proc. 7th Int. Symp. Technol. Mine Problem, Monterey, CA (2006).
9. F. Ilievski, A. D. Mazzeo, R. F. Shepherd, X. Chen & G. M. Whitesides, Angew. Chem. Int. Ed. 50, 1890–1895 (2011).
10. J. Hiller & H. Lipson, IEEE Trans. on Robotics 28(2), 457-466 (2012).
11. J. Palacci, S. Sacanna, A. P. Steinberg, D. J. Pine & P. M. Chaikin, Science 339, 936-940 (2013).
12. K. Hamad-Schifferli, J. J. Schwartz, A. T. Santos, S. Zhang & J. M. Jacobson, Nature 415, 152 (2002).
13. H. Huang et al., Nat. Nanotechnol. 5, 602 (2010).
14. S. A. Stanley et al., Science 336, 604 (2012).
15. E. S. Andersen et al., Nature 459, 73-76 (2009).
16. B. Wei, M. Dai & P. Yin, Nature 485, 623-626 (2012).
17. P. Rothemund, Nature 440, 297-302 (2006).
18. T. Omabegho, R. Sha & N. C. Seeman, Science 324, 67-71 (2009).

Why it's great to be deceived

Here's my third column for the Italian science magazine Sapere on music cognition. It mentions one of my favourite spine-tingling moments in music.

__________________________________________________

In my last column I explained that much of the emotional power of music seems to come from violations of expectation. We think the music will do one thing, but it does another. Or perhaps it just delays what we were expecting. We experience the surprise as an inner tension. You might think this would make music hard to listen to, rather than pleasurable. And indeed it’s a delicate game: if our expectations are foiled too often, we will just get confused and frustrated. Contemporary classical music has that effect on many people, although I’ll explain another time why this needn’t mean such music is bad or unlistenable. But if music always does what we expect, it becomes boring, like a nursery rhyme. Children need nursery rhymes to develop their expectations, but eventually they are ready for something more challenging.

A lot of music does meet our expectations for much of the time: it stays in key and in rhythm, and there is lots of repetition (verse-chorus-verse…). But the violations add spice. One way they can do this is to deliver notes or chords other than the ones we anticipate. Western audiences become very accustomed to hearing sequences of chords called cadences, which round off a musical phrase. If Bach or Mozart wrote a piece in, say, C major, you could be sure it would end with a C major chord (the so-called tonic chord): that just seems the “right” place to finish. Usually this final chord is preceded by the so-called “dominant” chord rooted on the fifth note of the scale (here G major). The dominant chord sets us up to expect the closing tonic chord. This pairing is called an authentic or perfect cadence.

Imagine the surprise, then, when you think you’re being given an authentic cadence but you get something else. That’s what happens about two-thirds of the way through Bach’s Prelude in E flat Minor from the first book of The Well Tempered Clavier, one of the most exquisite pieces Bach ever wrote. Here the prelude seems about to end [at 2:56 in the Richter recording here]: there’s an E flat minor chord (the tonic) followed by a dominant chord (B flat), and we think the next chord will be the closing tonic. But it isn’t. Never mind the fancy name of the chord Bach uses – it very definitely doesn’t close the phrase, but leaves it hanging. The effect is gorgeously poignant. This is sometimes known as a deceptive cadence: the musical term already reflects the idea that our expectations are being deceived. The tonic E flat minor does arrive moments later, and then we sigh as we finally get our delayed resolution.
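
For anyone who wants the nuts and bolts, here is the textbook version of the trick, spelled out in C major rather than in Bach’s actual (and fancier) harmony: the dominant chord promises the tonic, and a deceptive cadence delivers the submediant instead.

```python
# Toy illustration (standard textbook chord spellings, not Bach's voicing)
# of why a deceptive cadence surprises us: the dominant (V) sets up the
# tonic (I), but resolves instead to the submediant (vi).
CHORDS_IN_C = {
    "I (tonic, C major)":       ["C", "E", "G"],
    "V (dominant, G major)":    ["G", "B", "D"],
    "vi (submediant, A minor)": ["A", "C", "E"],
}

perfect_cadence   = ["V (dominant, G major)", "I (tonic, C major)"]
deceptive_cadence = ["V (dominant, G major)", "vi (submediant, A minor)"]

for name, progression in [("perfect", perfect_cadence), ("deceptive", deceptive_cadence)]:
    print(name, "->", " | ".join(f"{c}: {CHORDS_IN_C[c]}" for c in progression))
```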

Electrical brain-scanning studies show that we experience this kind of musical deception the same way as we experience a violation of grammatical syntax – such as when a sentence ends this like. There’s an electrical signal in the brain that signifies our “Huh?” response. This is just one of the ways in which the brain seems to process music and language using the same neural circuits – one way music literally ‘speaks’ to us.

Friday, May 23, 2014

A chance of snow


Here’s my latest piece for BBC Future.

____________________________________________________________________

So what happened to the severe winter that we in the UK were warned about last November? Instead there was less snow than in recent years, but it rained incessantly. Sure, the winter was unusually bitter – if you live in Buffalo, not Bristol. Why can’t the forecasters get it right?

Well, here’s the thing: the UK’s Met Office didn’t forecast anything like this at all. They said simply that “temperatures are likely to remain near or slightly below average for the time of year, but otherwise fairly normal conditions for early winter are most likely”.

But more importantly, the Met Office has tried repeatedly to explain that “it’s not currently scientifically possible to provide a detailed forecast over these long timescales”. As far as snow is concerned, they said last November, “because there are so many factors involved, generally that can only be discussed in any detail in our five day forecasts.” To which commendable honesty you might be inclined to respond “a fat lot of good.”

Yet hope is at hand. In a new paper published in Geophysical Research Letters, a team of Met Office scientists say that, thanks to advances in modelling the North Atlantic weather system, “key aspects of European and North American winter climate… are highly predictable months ahead”.

So what’s changed? Well, in part it all depends on what you mean by a weather forecast. You, I or Farmer Giles might want to know if it is going to rain or snow next Thursday, so that we can anticipate how to get to work or whether to gather in the herds. But the laws of physics have set a fundamental limit on how far in advance that kind of detail can be predicted. The weather, crucially dependent on the turbulent flows of atmospheric circulation, is a chaotic system, which means that the tiniest differences in the state of the system right now can lead to completely different outcomes several days down the line.
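
You don’t need a weather model to see this sensitivity at work; any toy chaotic system will do. In the sketch below (the logistic map, a standard textbook example with nothing to do with the Met Office’s models), two starting states that differ by one part in a billion bear no resemblance to each other after a few dozen steps.

```python
# A toy chaotic system (the logistic map, not a weather model) showing why
# tiny uncertainties wreck long-range prediction: two states that start a
# billionth apart are completely different after a few dozen iterations.
def logistic(x, r=4.0):
    return r * x * (1 - x)

a, b = 0.400000000, 0.400000001   # "today's weather", measured two slightly different ways
for step in range(1, 51):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step:2d}: {a:.6f} vs {b:.6f} (difference {abs(a - b):.2e})")
```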

Beyond about ten days in the future, it is therefore mathematically impossible to forecast details such as local rainfall, no matter how much you know about the state of the weather system today. All the same, advances in satellite observations and computer modelling mean that forecasting has got better over the past few decades. No matter how cynically you like to see it, the five-day forecast for Western Europe is now demonstrably more accurate than the three-day forecast was in the late 1980s.

Despite this limited window of predictability, some aspects of weather – such as general average temperature – can be forecast much further in advance, because they may depend on features of the climate system that change more slowly and predictably, such as ocean circulation. That’s what lies behind the Met Office’s current optimism about winter forecasting in the North Atlantic region.

This area has always been difficult, because the state of the atmosphere doesn’t seem to be as closely linked to ocean circulation as it is in the tropics, where medium-range forecasting is already more reliable. But it’s precisely because the consequences of a colder-than-usual winter may be more severe at these higher latitudes – from disruption to transport to risk of hypothermia – that the shortcomings of winter weather forecasts are more keenly felt there.

The most important factor governing the North Atlantic climate on the seasonal timescale is an atmospheric phenomenon called the North Atlantic Oscillation (NAO). This is a difference in air pressure between the low-pressure region over Iceland and the high-pressure region near the Azores, which controls the strength of the westerly jet stream and the tracks of Atlantic storms. The precise size and location of this pressure difference see-saw back and forth every few years, but with no regular periodicity. It’s a little like the better-known El Niño climate oscillation in the tropical Pacific Ocean, but is confined largely to the atmosphere whereas El Niño – which is more regular and now fairly predictable – involves changes in sea surface temperatures.

If we could predict the fluctuations of the NAO more accurately, this would give a sound basis for forecasting likely winter conditions. For example, when the difference between the high- and low-pressure “poles” of the NAO is large, we can expect high winds and storminess. The Met Office team reports that a computer model of the entire global climate called Global Seasonal Forecast System 5 (GloSea5) can now provide good predictions of the strength of the NAO one to four months in advance. They carried out “hindcasts” for 1993-2012, using the data available several months before each winter to make a forecast of the winter state of the NAO and then comparing that with the actual outcome. The results lead the team to claim that “useful levels of seasonal forecast skill for the surface NAO can be achieved in operational dynamical forecast systems”.
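
The logic of such a hindcast test is easy to sketch, though the numbers below are invented for illustration and bear no relation to the Met Office’s data: compare the NAO index the model predicted months ahead for each winter with the index actually observed, and summarize the skill as a correlation coefficient.

```python
# Sketch of the hindcast logic described above (toy numbers, not GloSea5
# output): for each winter 1993-2012, compare the NAO index predicted months
# ahead with the index observed, and report the correlation as a skill score.
import random
import statistics

random.seed(1)
observed = [random.gauss(0, 1) for _ in range(20)]                 # winters 1993-2012
predicted = [o * 0.6 + random.gauss(0, 0.8) for o in observed]     # imperfect forecasts

def correlation(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys) * len(xs))

print(f"hindcast skill (correlation): {correlation(observed, predicted):.2f}")
```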

Why does this model predict the notoriously erratic NAO so well? The researchers can’t fully answer this question yet, but they say that the fluctuations seem to be at least partly governed by the El Niño cycle. The NAO is also linked to the state of the Arctic sea ice and to a roughly two-year cycle in the winds of the tropical lower stratosphere. If the model gets those things right, it is likely to forecast the NAO pretty well too.

If we can make good predictions of general winter climate, says the Met Office team, then this should help with assessing – months in advance – the risks of dangerously high winter winds and of transport chaos, as well as predicting variations in wind energy and setting fuel-pricing policies. All of which, you must admit, is probably in the end more useful than knowing whether to expect a white Christmas.

Paper: A. A. Scaife et al., Geophysical Research Letters 41, 2514-2519 (2014)

Thursday, May 22, 2014

Forgotten prophet of the Internet

Here is my review of Alex Wright’s book on Paul Otlet, published in Nature this week.

_____________________________________________________________________

Cataloguing the World:
Paul Otlet and the Birth of the Information Age
Alex Wright
Oxford University Press, New York, 2014
ISBN 978-0-19-993141-5
384 pages, $27.95

The internet is often considered to be one of the key products of the computer age. But as Alex Wright, a former staffer at the New York Times, shows in this meticulously researched book, it has a history that predates digital technology. The organization of information has challenged us for as long as we have had libraries, but in the late nineteenth century the Belgian librarian Paul Otlet conceived schemes for the collection, storage, automated retrieval and remote distribution of the sum total of human knowledge that have clear analogies with the way information today is archived and networked on the web. Wright makes a persuasive case that Otlet – a largely forgotten figure today – deserves to be ranked among the inventors of the internet.

It is possible to push the analogies too far, however, and to his credit Wright attempts to locate Otlet’s work within a broader narrative about the encyclopaedic collation and cataloguing of information. Compendia of knowledge date back at least to Pliny’s Natural History and the cut-and-paste collections of Renaissance scholars such as Conrad Gesner, although these were convenient (and highly popular) digests of typically uncited sources. Otlet, in contrast, sought to collect everything – newspapers, books, pamphlets – and to devise a system for categorizing the contents akin to (indeed, a rival of) the Dewey decimal system. Wright tells a rather poignant story of the elderly, perhaps somewhat senile Otlet stacking up jellyfish on a beach and then placing on top an index card bearing the number 59.33: the code for Coelenterata in his Universal Decimal Classification.

But the real focus of this story is not the antecedents of the internet at all. It concerns the dreams that many shared around the fin de siècle, and again after the First World War, of a utopian world order that united all nations. This was Otlet’s grander vision, to which his collecting and cataloguing schemes were merely instrumental. His efforts to create a repository of all knowledge, called the Palais Mondial (World Palace), were conducted with his friend Henri La Fontaine, the Belgian politician and committed internationalist who was awarded the Nobel peace prize in 1913. The two men imagined setting up an “intellectual parliament” for all humanity. In part, their vision paved the way for the League of Nations and subsequently the United Nations – although Otlet was devastated when the Paris Peace Conference in 1919 elected to establish the former in Geneva in neutral Switzerland rather than in Brussels, where his Palais Mondial was situated. But in part, their objective amounted to something far more grandiose, utopian and strange.

While world government was desired by many progressive, left-leaning thinkers, such as H. G. Wells (whom Otlet read), during the inter-war period, Otlet’s own plans often seemed detached from mundane realities, which left leaders and politicians unconvinced and doomed Otlet to constant frustration and ultimate failure. When Henry James dismisses the scheme Otlet concocted with a Norwegian-American architect to construct an immense “World City”, you can’t help feeling he has put his finger on the problem: “The World is a prodigious & portentous & immeasurable affair… so far vaster in complexity than you or me”.

Wright overlooks the real heritage of these ideas of Otlet’s. They veered into mystical notions of transcendence of the human spirit, influenced by Theosophy, and Otlet seems to have imagined that learning could be transmitted not by careful study of documents but by a kind of symbolic visual language condensed into posters and displays. The complex of buildings called the Mundaneum that he planned with the architect Le Corbusier was full of sacred symbolism, as much a temple as a library/university/museum. Here Otlet’s predecessor is not Gesner but the Italian philosopher Tommaso Campanella, who in 1602 described a utopian “City of the Sun” in which knowledge was imbibed by the citizens from great, complex paintings on the city walls. This aspect of Otlet’s dreams makes them as much backward-looking to Neoplatonism and Gnosticism as they are forward-looking to the information age and the internet.

But the future was there too, for example in Otlet’s advocacy of the miniaturization of documents (on microfilm) and his plans for automatic systems that could locate information like steampunk search engines. He considered that his vast collection of information at the proposed Mundaneum (the real structure never actually amounted to more than a corner of the Palais Mondial, from which he was rudely ejected in 1924 by the Belgian government) might be broadcast to users worldwide by radio, and stored in a kind of personal workstation called a Mondotheque, equipped with microfilm reader, telephone, television and record player.

All this can be correlated with the software and hardware of today. But Wright recognizes that the comparison only goes so far. In particular, Otlet’s vision was consistent with the social climate of his day: centralized, highly managed and hierarchical, quite unlike the distributed, self-organized peer-to-peer networks concocted by anti-establishment computer wizards in the 1960s and 70s. And while our ability now to access an online scan of Newton’s Principia would have delighted Otlet, the fact that so much more of our network traffic involves cute cats and pornography would have devastated him.

The poor man was devastated enough. After losing government support in 1934, Otlet managed to cling to a corner of the Palais Mondial until much of his collection was destroyed by the Nazis in 1940. He salvaged a little, and it mouldered for two decades in various buildings in Brussels. What remains now sits securely but modestly in the Mundaneum in Mons – not a grand monument but a former garage. But there is another Mundaneum in Brussels: a conference room given that name in Google’s European bureau. It is a fitting tribute, and Wright has offered another.

Quantum or not?


Here’s the original text of my article on D-Wave’s “quantum computers” (discuss) in the March issue of La Recherche.

_________________________________________________________________

Google has a quantum computer, and you could have one too. If you have $10m to spare, you can buy one from the Canadian company D-Wave, based in Burnaby, British Columbia. The aerospace and advanced technology company Lockheed Martin has also done so.

But what exactly is it that you’d be buying? Physically, the D-Wave 2 is almost a caricature: a black box the size of a large wardrobe. But once you’ve found the on-switch, what will the machine do?

After all, it has only 512 bits – or rather, qubits (quantum bits) – which sounds feeble compared with the billions of bits in your iPad. But these are bits that work according to quantum rules, and so they are capable of much, much more. At least, that is what D-Wave, run by engineer and entrepreneur Geordie Rose, claims. But since the company began to launch its commercial products in 2011, it has been at the centre of a sometimes rancorous dispute about whether they are truly quantum computers at all, and whether they can really do things that today’s classical computers cannot.

One of the problems is that D-Wave seemed to come out of nowhere. Top academic and commercial labs around the world are struggling to juggle more than a handful of qubits at a time, and they seem to be very far from having any kind of computer that can solve useful problems. Yet the Canadian company just appeared to pop up with black boxes for sale. Some scepticism was understandable. What’s more, the D-Wave machines use a completely different approach to quantum computing from most other efforts. But some researchers wonder if they are actually exploiting quantum principles at all – or even if they are, whether this has any advantages over conventional (classical) computing, let alone over more orthodox routes to quantum computers.

While there is now wider acceptance that something ‘quantum’ really is going on inside D-Wave’s black boxes, the issue of what they can achieve remains in debate. But that debate has opened up questions that are broader and more interesting than simply whether there has been a bit of over-zealous salesmanship. At root the issues are about how best to harness the power of quantum physics to revolutionize computing, and indeed about what it even means to do so, and how we can know for sure if we have done so. What truly makes a computer quantum, and what might we gain from it?

Defying convention

Quantum computing was first seriously mooted in the 1980s. Like classical computing, it manipulates data in binary form, encoded in 1’s and 0’s. But whereas a classical bit is either in a 1 or 0 state independently of the state of other bits, quantum bits can be placed in mixtures of these states that are correlated (entangled) with one another. Quantum rules then enable the qubits to be manipulated using shortcuts, while classical computers have to slog through many more logical steps to get the answer.

Much of the difficulty lies in keeping groups of qubits in their delicate quantum states for long enough to carry out the computation. They rapidly lose their mutual coordination (become “decoherent”) if they are too disturbed by the thermal noise of their environment. So qubits must be well isolated from their surroundings and kept very cold. Like some other research groups, D-Wave makes qubits from tiny rings of superconducting material, where roughly speaking the 1’s and 0’s correspond to electrical currents circulating in opposite directions. They’re kept at a temperature of a few hundredths of a degree above absolute zero, and most of the volume in the black box is needed to house the cooling equipment. But while the other labs have prototypes with perhaps half a dozen qubits that are nowhere near the marketplace, D-Wave is out there selling their machines.

According to the Canadian company’s researchers, they’ve got further because D-Wave has chosen an unconventional strategy. One of the key problems with quantum computing is that quantum rules are not entirely predictable. Whereas a classical logic gate will always give a particular output for a particular input of 1’s and 0’s, quantum physics is probabilistic. Even if the unpredictability is very small, it rapidly multiplies for many qubits. As a result, most quantum computer architectures will rely on a lot of redundancy: encoding the information many times so that errors can be put right.

D-Wave’s approach, called quantum annealing, allegedly circumvents the need for all that error correction (see Box 1). It means that the circuits aren’t built, like classical computers and most other designs for quantum computers, from ‘logic gates’ that take particular binary inputs and convert them to particular outputs. Instead, the circuits are large groups of simultaneously interacting qubits, rather like the atoms of a magnet influencing one another’s magnetic orientation.

___________________________________________________________________________

Box 1: Quantum annealing

Computing by quantum annealing means looking for the best solution to a problem by searching simultaneously across the whole ‘landscape’ of possible solutions. It’s therefore a kind of optimization process, exemplified by the Travelling Salesman problem, in which the aim is to find the most efficient route that visits every node in a network. The best solution can be considered the ‘lowest-energy’ state – the so-called ground state – of the collection of qubits. In classical computing, that can be found using the technique of simulated annealing, which means jumping around the landscape at random looking for lower-lying ground. By allowing for some jumps to slightly increase the energy, it’s possible to avoid getting trapped in local, non-optimal dips. Quantum annealing performs an analogous process, except that rather than hopping classically over small hills and cols, the collective state of the qubits can tunnel quantum-mechanically through these barriers. What’s more, it works by starting with a flat landscape and gradually raising up the peaks, all the while keeping the collective state of the qubits pooled in the ground state.
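
For comparison, here is what that classical baseline looks like in practice: a few lines of simulated annealing on a toy Travelling Salesman instance (my own illustrative code, nothing to do with D-Wave’s hardware). Downhill moves are always accepted, uphill moves occasionally, and the ‘temperature’ is slowly lowered; quantum annealing replaces those thermal hops with tunnelling through the barriers.

```python
# Classical simulated annealing on a toy Travelling Salesman instance: the
# baseline described above. Uphill moves are accepted with a temperature-
# dependent probability so the search can escape local dips; quantum
# annealing instead lets the collective state tunnel through them.
import math
import random

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(12)]

def tour_length(order):
    return sum(math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

order = list(range(len(cities)))
best = tour_length(order)
temperature = 1.0
while temperature > 1e-3:
    i, j = sorted(random.sample(range(len(cities)), 2))
    candidate = order[:i] + order[i:j + 1][::-1] + order[j + 1:]   # reverse a segment
    delta = tour_length(candidate) - tour_length(order)
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        order = candidate                      # accept downhill, sometimes uphill
    best = min(best, tour_length(order))
    temperature *= 0.999                       # slowly cool

print(f"shortest tour found: {best:.3f}")
```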

Optimization problems like the Travelling Salesman belong to a class called NP-hard problems. Working out how the amino acid chain of a protein folds up into its most stable shape is another example, of immense importance in molecular biology. These are very computationally intensive challenges for classical computers, which generally have to simply try out each possible solution in turn.

Quantum annealers, explains physicist Alexandre Zagoskin of Loughborough University in England, a cofounder of D-Wave who left the company in 2005, are analog devices: less like digital logic-gate computers and more like slide rules, which calculate with continuous quantities. Logic-gate computers are ‘universal’ in the sense that they can simulate each other: you can perform the same computation using electrical pulses or ping-pong balls in tubes. “The obsession with universal quantum computing created unrealistic expectations, overhype, disillusionment and fatigue, and it keeps many theorists developing software for non-existing quantum ‘Pentiums’”, says Zagoskin. “In the foreseeable future we can make, at best, quantum slide rules like quantum annealers.”

_____________________________________________________________________

That makes a quantum annealer well suited to solving some problems but not others. “Instead of trying to build a universal computer”, says computer scientist Catherine McGeoch of Amherst College in Massachusetts, “D-Wave is going for a sort of ‘quantum accelerator chip’ that aims to solve one particular class of optimization problem. But this is exactly the class of problem where theoreticians think the important quantum speedups would be found, if they exist. And they are important computational problems in practice.” As Zagoskin puts it, D-Wave might be a few-trick device, but “if the trick is useful enough, and the performance-to-price ratio is good, who cares?”

Aside from avoiding error correction, quantum annealing (QA) allegedly has other advantages. Daniel Lidar, scientific director of the University of Southern California–Lockheed Martin Quantum Computing Center in Los Angeles, which uses D-Wave’s latest commercial machine D-Wave 2, explains that it relaxes the need for qubits to be switched fast, compared to the logic-gate approach. That in turn means that less heat is generated, and so it’s not such a struggle to keep the circuits cool when they contain many qubits. Mark Johnson, D-Wave’s chief scientific officer, sees various other practical benefits of the approach too. “We believe a quantum annealing processor can be built at a useful scale using existing technology, whereas one based on the gate model of quantum computing cannot”, he says. There are many reasons for this, ranging from greater resistance against decoherence to the less taxing demands on the materials, on control of the environment, and on interfacing with users.

However, there are arguments about whether QA actually solves optimization tasks faster than classical algorithms. “No good reason has ever been given”, says John Smolin of IBM’s T. J. Watson Research Center in Yorktown Heights, New York, where much of the pioneering theory of quantum computation was developed in the 1980s.

Put it to the test

Perhaps the best way to resolve such questions is to put D-Wave’s machines to the test. McGeoch, with Cong Wang at Simon Fraser University in Burnaby, has pitted them against various classical algorithms for solving NP-hard optimization problems. They primarily used a D-Wave circuit called Vesuvius-5, with 439 working qubits, and found that it could find answers in times that were at least as good as, and in some cases up to 3,600 times faster than, the classical approaches. The speed-up got better as the number of elements in the problem (the number of destinations for the salesman, say) increased, up to the maximum of 439.

But not everyone is persuaded. Not only is D-Wave playing to its strengths here, but the speed-up is sometimes modest at best, and there’s no guarantee that faster classical algorithms don’t exist. Smolin still doubts that D-Wave’s devices “have solved any problem faster than a fair comparison with conventional computers.” True, he says, it’s harsh to compare D-Wave’s brand-new machine with those coming from a 50-year, trillion-dollar industry. But that after all is the whole point. “History has shown that silicon-based computers always catch up in the end”, he says. “Currently, D-Wave is not actually faster at its own native problem than classical simulated annealing – even a relatively naive program running on standard hardware, written by me, more or less keeps up. If I spent $10 million on custom hardware, I expect I could beat the running time achieved by D-Wave by a very large amount.” And the real question is whether D-Wave can maintain an advantage as the problems it tackles are scaled up. “There is no evidence their larger machines are scaling well”, Smolin says.

Besides, Lidar notes, it remains a big challenge to express computational problems in a way that D-Wave can handle. “Most optimization problems such as protein folding involve a large amount of preprocessing before they can be mapped to the current D-Wave hardware”, he says. “The pre-processing problem may itself be computationally hard.”

Quite aside from the putative speed advantages, how can we tell if D-Wave’s machines are using quantum rules? There’s some suggestive evidence of that. In 2011 Johnson and colleagues at D-Wave reported that an 8-qubit system on the 128-qubit chip of D-Wave 1 showed signs that it was conducting true quantum annealing, because the experimental results didn’t fit with what the qubits were predicted to do if they were behaving just like classical bits. Lidar and his colleagues subsequently conducted more exacting tests of D-Wave 1, finding that even 108 coupled qubits find their ground state in a way that doesn’t fit with the predictions of classical simulated annealing. Perhaps surprisingly, a ‘quantum’ signature of the behaviour remains even though the timescale needed to find the optimal solution was considerably longer than that over which thermal noise can scramble some of the quantum organization. “In this sense QA is more robust against noise than the gate model”, says Lidar.

But he stresses that “these types of experiments can only rule out certain classical models and provide a consistency check with other quantum models. There’s still the possibility that someone will invent another classical model that can explain the data, and such models will then have to be ruled out one at a time.” So none of these tests, he explains, is yet a “smoking gun” for quantum annealing.

Besides, says Zagoskin, the more qubits you have, the less feasible it becomes to simulate (using classical computers) the behaviour of so many coherent qubits to see how they should act. To anticipate how such a quantum computer should run, you need a quantum computer to do the calculations. “The theory lags badly behind the experimental progress, so that one can neither predict how a given device will perform, nor even quantify the extent of its ‘quantumness’”, he says.

Joseph Fitzsimons of the Center for Quantum Technologies of the National University of Singapore finds Lidar’s tests fairly persuasive, but adds that “this evidence is largely indirect and not yet conclusive.” Smolin is less sanguine. “My opinion is that the evidence is extremely weak”, he says. The whole question of what it means to “be quantum” is a deep and subtle one, he adds – it’s not just a matter of showing you have devices that work by quantum rules, but of showing that they give some real advantage over classical devices. “No one is denying that the individual superconducting loops in the D-Wave machine are superconducting, and it is well accepted that superconductivity is a manifestation of a quantum effect.” But quantum rules also govern the way transistors in conventional computers function, “and one doesn’t call those quantum computers.”

How would you know?

To assess the performance of a quantum computer, one needs to verify that the solutions it finds are correct. Sometimes that’s straightforward, as for example with factorization of large numbers – the basis of most current encryption protocols, and one of the prime targets for quantum computation. But for other problems, such as computer simulation of protein folding, it’s not so easy to see if the answer is correct. “This raises the troubling prospect that in order to accept results of certain quantum computations we may need to implicitly trust that the device is operating correctly”, says Fitzsimons.

One alternative, he says, is to use so-called interactive proofs, where the verifier forms a judgement about correctness on the basis of a small number of randomly chosen questions about how good the solution is. Fitzsimons and his collaborators recently demonstrated such an interactive proof of quantum effects in a real physical “quantum computer” comprised of just four light-based qubits. But he says that these methods aren’t applicable to D-Wave: “Unfortunately, certain technological limitations imposed by the design of D-Wave’s devices prevent direct implementation of any of the known techniques for interactive verification.”

For NP-hard problems, this question of verification goes to the heart of one of the most difficult unresolved problems in mathematics: there is as yet no rigorous proof that such problems can’t be solved faster by classical computers – perhaps we just haven’t found the right algorithm. In that case, showing that quantum computers crack these problems faster doesn’t prove that they use (or depend on) quantum rules to do it.

Part of this problem also comes down to the lack of any agreement on how quantum computers might achieve speed-up in the first place. While early proposals leant on the notion that quantum computers would be carrying out many computations in parallel, thanks to the ability of qubits to be in more than one state at once, this idea is now seen as too simplistic. In fact, there seems to be no unique explanation for what might make quantum computation faster. “Having a physical quantum computer of interesting size to experiment on has not produced answers to any of these open theoretical questions”, says McGeoch. “Instead experiments have only served to focus and specialize our questions – now they are more numerous and harder to answer. I suppose that’s progress, but sometimes it feels like we’re moving backwards in our understanding of what it all means.”

None of this seems about to derail the commercial success of D-Wave. “We plan to develop processors with more qubits”, says Johnson. “Of course there are more dimensions to processor performance, such as the choice of connecting qubits, or the time required to set up a problem or to read out a solution. We’re working to improve processor performance along these lines as well.”

There’s certainly no problem of sheer size. The superconducting integrated-circuit chip at the core of D-Wave’s devices is “somewhat smaller than my thumbnail”, says Johnson. Besides, the processor itself takes up a small fraction of this chip, and has a feature size “equivalent to what the semiconductor industry had achieved in the very late 1990s”. “Our expectation is that we will not run out of room on our chip for the foreseeable future”, he says. And D-Wave’s bulky cooling system is capable of keeping a lot more than 512 qubits cold. “D-Wave's technology seems highly scalable in terms of the number of qubits they place and connect on a chip”, says Lidar. “Their current fridge can support up to 10,000 qubits.” So those black boxes look set to go on getting more powerful – even if it is going to get even harder to figure out exactly what is going on inside them.