Friday, February 28, 2014

Strength in numbers

I have a feature in Nature on developments in crowdsourcing science, looking in particular at the maths project Polymath on its fifth anniversary. Here’s the long version pre-editing. I also wrote an editorial to accompany the piece.

____________________________________________________________________________

Researchers are finding that online, crowd-sourced collaboration can speed up their work — if they choose the right problem.

When, last April, the hitherto little-known mathematician Yitang Zhang of the University of New Hampshire announced a proof that there are infinitely many prime numbers differing by no more than 70 million, it was hailed as a significant advance in a famous outstanding problem in number theory. In its simplest form, the twin primes conjecture states that there are infinitely many pairs of prime numbers differing by 2, such as (41, 43). Zhang’s gap of 70 million was much bigger than 2, but until then there was no proof of any persistent limiting gap at all.

But perhaps as dramatic as the reclusive Zhang’s unanticipated proof, published in May, was what happened next. “One could easily envisage that there would be a flood of mini-papers in which Zhang's bound of 70 million was whittled down by small amounts by different authors racing to compete with each other”, says Terence Tao, a mathematician at the University of California at Los Angeles. But instead of such an atomized race, this challenge to reduce the bound became the eighth goal for a ‘crowdsourcing’ maths project called Polymath, which Tao helped to set up and run. Mathematicians all around the world pitched in together, and the bound dropped from the millions to the thousands in a matter of months. By the end of November it stood at 576.

There is nothing new about the notion of crowdsourcing to crack difficult problems in science. Six years ago, the Galaxy Zoo project recruited volunteers to classify the hundreds of thousands of galaxies imaged by the Sloan Digital Sky Survey into distinct morphological types: information that would help understand how galaxies form and evolve. Galaxy Zoo has now gone through several incarnations and incorporates data on the earliest epochs of the visible universe from the Hubble Space Telescope. It provided a template for other projects needing human judgement to sort data, and has itself evolved into Zooniverse, which hosts several online data-classifying projects in space science and other areas. Participants can, for example, classify craters and other surface features on the Moon, tropical cyclone data from 30-year records, animals photographed by automated cameras on the Serengeti, cancer data, and even humanities projects such as tagging the diaries of soldiers from the First World War. Almost a million people have registered with Zooniverse to lend their help.

Expert opinion

But Polymath, which had its fifth anniversary in January this year, is rather different. Although anyone can join in to help solve its problems, you’re unlikely to make much of a contribution without highly specialized knowledge. This is no bean-counting exercise, but demands the most advanced mathematics. The project began when Cambridge mathematician Timothy Gowers asked on his own blog “Is massively collaborative mathematics possible?”

“The idea”, Gowers explained, “would be that anybody who had anything whatsoever to say about the problem could chip in. And the ethos of the forum would be that comments would mostly be kept short… you would contribute ideas even if they were undeveloped and/or likely to be wrong.” Gowers suspected there could be a benefit to having many different minds with different approaches and styles working on a problem. What’s more, sometimes a solution requires sheer luck – and the more contributions there are, the more likely you’ll get lucky.

His first challenge was a result called the density Hales-Jewett theorem, which says, roughly, that in a sufficiently high-dimensional grid of symbol sequences, any collection containing more than a tiny fraction of all the sequences must contain an ordered pattern known as a combinatorial line – it cannot be entirely structureless. Gowers’ blog sought an elementary, combinatorial proof of this density version of the theorem. Gowers had hoped for new insights into the problem, but even he was surprised that by March, after nearly 1,000 comments, he was able to declare the theorem proved. He called that period “one of the most exciting six weeks of my mathematical life”, and adds that “the quite unexpected result – an actual solution to the problem – added an extra layer of excitement to the whole thing”. The proof was described in a paper attributed to “D. H. J. Polymath”.

Tao was drawn into that challenge, and has since hosted other projects on Polymath. Mathematics is perhaps a surprising discipline in which to find this sort of collaboration, as traditionally it has been viewed as a solitary enterprise, exemplified by the lonely and often secretive work of the likes of Zhang or Andrew Wiles, who proved Fermat’s Last Theorem in seclusion in the 1990s. But that image is misleading – or perhaps projects like Polymath are playing an active role in changing the culture. “One strength of a Polymath collaboration is in gathering literature and connections with other fields that a traditional small collaboration might not be aware of without a fortuitous conversation with the right colleague”, says Tao. “Simply having a common place to discuss and answer focused technical questions about a paper is very useful.” He says that such online “reading seminars” helped researchers get to grips quickly with Zhang’s original proof.

Refining that proof – Polymath 8 – produced another paper for D. H. J. Polymath. One of the big leaps came from James Maynard, a postdoctoral researcher at the University of Montreal in Canada, who last November showed how to reduce Zhang’s bound of 70 million to just 600. Maynard, however, had already been working on the problem before Zhang’s results were announced, and he says his work was essentially independent of Polymath.

All the same, he sees this as an appropriate problem for such an approach. “Zhang's work was very suitable for many participants to work on”, Maynard says. “The proof can be split into separate sections, with each section more-or-less independent of the others. This allowed different participants to focus on just the sections which appealed to them.”

The success of Polymath has been mixed, however. “Polymath 4 and 7 led to interesting results”, says Gil Kalai of the Hebrew University of Jerusalem, who has administered some of the projects. “Polymath 3 and 5 led to interesting approaches but not to definite results, and Polymath 2, 6 and 9 did not get much off the ground.” And Gowers admitted that for at least some of the challenges the “crowd” was rather small – just a handful of real experts. Partly this might be just a matter of time: after Polymath 1, he remarked that “the number of comments grew so rapidly that merely keeping up with the discussion involved a substantial commitment that not many people were in a position to make.” And perhaps some of the experts who might have contributed were simply not a part of the active blogosphere.

Polymath “hasn't turned out to be a game-changer”, says Tao, “but it’s a valid alternative way of doing mathematical research that seems to be effective in some cases. One nice thing though is that we can react rather quickly to ‘hot’ events in mathematics such as Zhang's work.” He says that the crowdsourcing approach works better for some problems than others. “It helps if the problem is broadly accessible and of interest to a large number of mathematicians, and can be broken up into parts that can be worked on independently, and if many of these parts lie within reach of known techniques.”

“Projects which seem to require a genuinely new idea have so far not been terribly successful”, he adds. “The project tends to assemble all the known techniques, figure out why each one doesn't work for the problem at hand, throw out a few speculative further ideas, and then get stuck. We're still learning what works and what doesn't.”

It’s with such pitfalls in mind that Kalai says “it will be nice to have a Polymath devoted to theory-building rather than to specific problem solving.” He adds that he would also like to see Polymath projects “that are on longer time scale than existing ones but perhaps less intensive, and that people can get in or spin off at will.”

Gowers recognized from the outset that collaboration won’t always eclipse competition. He admits that “it seems highly unlikely that one could persuade lots of people to share good ideas” about a high-kudos goal like the Riemann hypothesis, which relates to the distribution of prime numbers. This, after all, is one of the seven Millennium Problems for the solution of which the privately funded Clay Mathematics Institute in Cambridge, Massachusetts, has offered prizes of $1m.

All the same, that didn’t deter Gowers from launching Polymath 9 last November, which set out to find proofs for three conjectures that would solve another of the remaining six Millennium Problems: the so-called NP versus P problem. This asks whether the class of problems for which solutions can be quickly verified by a computer (denoted NP) coincides with the class of problems that can be solved equally quickly (denoted P). Gowers did not expect all three of his conjectures to be solved by Polymath 9, but admitted he would be pleased if just one of them could be. However, the results were initially disappointing, and Gowers was about to declare Polymath 9 a failure when he was contacted by Pavel Pudlak of the Mathematical Institute of the Czech Academy of Sciences with a proof that one of the three statements he was hoping to be proved false was in fact true, apparently cutting off this avenue for attacking the problem. Gowers is philosophical. “It’s never a disaster to learn that a statement you wanted to go one way in fact goes the other way”, he wrote. “It may be disappointing, but it’s much better to know the truth than to waste time chasing a fantasy.” In that regard, then, Polymath 9 did something useful after all.

Polymath now functions as a kind of elite open-source facility. People can post suggestions for new projects on a dedicated website maintained by Gowers, Tao, Kalai and open-science advocate Michael Nielsen, and these are then discussed by peers and, if positively received, launched for contributions. “The organization is still somewhat informal”, Tao says. Setting up and sustaining a Polymath project is a big commitment. “It needs an active leader who is willing to spend a fair amount of effort to organise the discussion and keep it moving in productive directions”, says Tao. “Otherwise the initial burst of activity can dissipate fairly quickly. Not many people are willing or able to do this.” “It’s quite difficult to get people interested,” Gowers agrees; so far, he and Tao have initiated all but two of the projects.

Although surprised by Polymath’s success, Kalai says that the trend toward more collaborative efforts started earlier, as signaled by a rise in the average number of coauthors on maths papers. “Polymath projects do not have enough weight to make a substantial change. But they add to the wealth of mathematical activities, and, for better or for worse, their impact on the community is larger than their net scientific impact.” It’s not clear that this is a good way to do maths, he concludes – “but we can certainly explore it.”

Cash or glory

Some other “expert” crowdsourcing efforts are run as commercial ventures by companies that aim to link people who have a problem to solve with people who might have the skills and ideas needed to solve it. These generally charge fees and offer financial rewards for participants. Other initiatives are government-led, such as the NASA Tournament Lab, which seeks “the most innovative, most efficient, and most optimized solutions for specific, real-world challenges being faced by NASA researchers”, and the US-based Challenge.gov, which offers cash prizes for solutions to a whole range of engineering and technological problems.

One of the most prominent commercial enterprises is InnoCentive, which hosts a variety of scientific or technological challenges that are open to all of its hundreds of thousands of registered “solvers”. These range from the seemingly banal, if important (developing economical forms of latrine lighting in emergencies, or “keeping hair clean for longer without washing”), to the esoteric (“seeking 4-hydroxy-1H-pyridin-2-one analogues”, or ways of stabilizing foamed emulsions). InnoCentive’s founder Alph Bingham says that their approach “has produced solutions to problems that had been previously investigated for years and even decades.” Good challenges, he says, “are ones where the space of possible solutions is immense and therefore hard to search on a serial basis”.

In contrast to that broad portfolio, other crowdsourcing companies such as Kaggle and CrowdFlower specialize in data analysis. Kaggle has been used, for example, in bioinformatics to predict biological behaviours of molecules from their chemical structure, and in energy forecasting. It has been recently used by a team of astronomers seeking algorithms for mapping the distribution of dark matter in galaxies based on its gravitational-lensing effects on background objects. Through Kaggle, the researchers set up a competition called “Observing Dark Worlds”, which offered cash prizes (donated by the financial company Winton Capital) for the three best algorithms. The winning entries improved the performance, relative to standard algorithms, by about 30 percent.

While this was valuable, astronomer David Harvey of the University of Edinburgh, an author of that study, admits that it’s not always straightforward to apply potential solutions to the problem you’ve set. “Many of the ideas that came out of the competition were great, and provided really interesting insights into the problem”, he says. “But none of the algorithms are ready to be used on real data – they need to be fully tested and developed. And it’s very hard to take some algorithm from someone not in your field and develop it.”

Harvey says that indeed the winning algorithm for “Observing Dark Worlds” still hasn’t been fully developed. “However, the advantages of these competitions are not always obvious”, he adds. For example, the second-place entry was written by informatics specialist Iain Murray of the University of Edinburgh, who is continuing to collaborate with Harvey, and now with other astronomers too. “This wouldn’t have happened if it wasn't for Kaggle”, Harvey says. That experience shows how “it’s vital that the winners of the competition work in collaboration post-competition on the problem and develop the initial idea all the way through to a final package.” But Harvey admits that “often these are just side projects for participants, and while they may have a sincere interest in the problem, they do not have the time to commit.”

Harvey points out that the call for such projects might nevertheless be increasing, especially in astronomy. “With new telescopes such as the Square Kilometre Array, the Large Synoptic Survey Telescope and Euclid on the horizon, astronomers will be facing real problems of data processing, handling and analysing”, he says. However, Thomas Kitching of University College London, who was the lead scientist on the Dark Worlds project, admits to having mixed feelings about what ultimately such efforts might achieve. In part this is because real expertise might be hard to harness this way. “Most people are not experts, but might have a bit of time”, he says. “There may be some experts, but they have very little time.”

While Polymath relies on unpaid efforts of researchers whose sole reward is professional prestige, InnoCentive and Kaggle recognize that harnessing a broader community requires more tangible incentives, typically in the form of cash prizes. “In academia, people are willing to spend a lot of time for ‘kudos’ or for the sake of science – but only up to a point”, says Kitching. “Once the problem requires a lot of time, like coding in Kaggle, then monetary incentives or prizes seem to be required. No one is going to spend seven days a week trying to win unless it’s already their job, so money offsets time.”

InnoCentive’s 300,000 solvers stand to gain rewards of between $5,000 and $1m. Kaggle now hosts some of the efforts of Galaxy Zoo for a prize of $16,000 (also provided by Winton Capital). This sort of funding is not necessarily just philanthropic for the donors – Winton Capital, for example, were themselves able to recruit new analysts via the Observing Dark Worlds initiative for a fraction of their usual advertising and interviewing costs.

But it’s not all about lucre. “Winning solvers rarely list the cash among their top motivations”, says Bingham. “Their motivations are frequently more intrinsic, such as intellectual stimulation or curiosity to explore where an idea might lead." InnoCentive aims to encourage non-cash incentives, such as prospects for further collaboration or joint press releases. Yet Bingham adds that “dollar amounts also serve as a kind of score-keeping.” Some of Kaggle’s projects have no cash prizes, and Harvey says that “a lot of the time computer scientists will go there because they want to work on something new and exciting, and not for financial gain.” Indeed, the company invites participants to “compete as data scientists for fortune, fame and fun.”

“A competition can help to advertise a problem to people who have not thought about it before, a prize can attract them to spend time, and a metric can help to sort signal from noise”, says Kitching. “So in this sense competition, if well posed, can help in science. But a poorly posed problem may just increase noise.”

But as Kalai points out, there can be as much value in identifying important questions, and tools to tackle them, as in finding solutions. Kitching recalls a computer called Multivac that appeared in several of Isaac Asimov’s short stories, which was very good at answering questions but still required human scientists to pose them in the first place. Kitching suspects that the crowdsourcing pool will act more like Multivac than like its interrogators. “In the crowdsourcing approach the key to successful science is working out the correct questions to ask the crowd”, he says.

Thursday, February 27, 2014

Floods: more please?

Are the UK floods a sign of climate change? According to a recent poll, 46 percent of people think so, 27 percent think not. The invitation is to regard this as a proxy poll for a general belief in the reality of climate change, and perhaps in humankind’s key causative role in it.

But in fact, any information embedded in this poll is complicated and difficult to disentangle. If any climate or weather scientists were quizzed, it seems likely that they would have gravitated, like me, towards the “undecided” category. As they have been repeating insistently and now a little wearily, no single extreme-weather event (and this one certainly qualifies as that) can yet be unequivocally attributed to climate change. This of course is manna for the climate sceptics, who use it to argue that we still don’t know if climate change is really happening, and that this uncertainty reflects a serious limitation, perhaps a fundamental flaw, of the whole basis of climate modelling. It matters little that climatologists say such extreme weather is fully consistent with what the models predict – the misguided but widespread notion that science provides “yes/no” answers to questions, decided by the data, is here proving a burden.

That situation is changing, however. As Simon Lewis points out in Nature this week, it is now becoming possible to make some definite links between specific extreme weather events and anthropogenic climate change. Such analyses are complicated and the conclusions tentative, but they already give grounds for saying a little more than merely “it’s too early to tell”.

What the flood poll really probes, however, is public perceptions about what an altered climate would mean. The effect of the floods is likely to be not so much convincing undecided voters that climate change is already upon us, but showing them what is really at stake in this temperate zone: not balmy Mediterranean-style summers, not distant news of drowned Pacific island states, but Verdun-style mud and sandbags, and images of this green and pleasant land under glittering, muddy water from horizon to horizon. We have finally got a feeling for what it might be like to live in a world a degree or two warmer, and it seems uncomfortably close to home, and not at all pleasant. Shivering east coast Americans are having a somewhat different kind of awakening.

As wake-up calls go, it is pretty mild. But it is also likely to shift perceptions, not just of what to expect but of what the social and economic consequences will be. The more intelligent, or perhaps just cannier, sceptics have ceased questioning the science or the evidence but instead contest the economics: it will cost more, they say, to mitigate climate change, for example via taxes on fossil fuels or expensive green technologies, than to accept and adapt to it. This, for example, is the line taken by the science writer Matt Ridley, who laid out his case last October in an article in the Spectator.

The Viscount Ridley, immensely wealthy Eton-educated Conservative hereditary peer whose Darwinian attitude to economics was notoriously suspended when it came to the bailout of Northern Rock under his chairmanship, is an easy villain. But the Ridley I know (slightly – we sit on the same academic advisory committee), who happens to be an exceptionally good science writer and a clever thinker, is harder to caricature. His argument – in which a warmer world results in fewer net deaths, for example through less winter hypothermia – can’t be casually waved away. The dismantling needs more care.

The economic case is hugely complicated, and plagued by many more uncertainties than the science. It depends, for example, on making projections about nascent or even as yet undeveloped technologies. Even the research on which Ridley almost exclusively draws – by economist Richard Tol – mostly just points out these lacunae, and Tol advises nonetheless that “there is a strong case for near-term action on climate change”. (Ridley jettisons that bit.)

But of course economic figures paper over a multitude of woes. Imagine, for example, that an ice-free summer in the Arctic (which begins to look likely sooner than we expected) leads to the extinction of the economically insignificant polar bear (I’ll come back to that) but creates a fertile new breeding ground for fish stocks, with large economic benefits for fisheries. How would you feel about that? Or if the inundation of a few island states with trivial GDP, leaving the populations homeless, were massively offset by improved wheat yields in a warmer US Midwest? I’m not saying these things will happen, just that GDP is only a part of the story.

More importantly, perhaps, one can’t really put an economic figure on consequences of climate change such as the mass human migration that is predicted from south to north, which could very readily lead to social unrest and even war. Or the drastic changes in ecosystems likely to result, for example if ocean acidification from dissolved carbon dioxide wipes out corals. It isn’t hard to dream up such disasters, and Ridley is right that we need to think carefully, not just reactively, about what the real consequences would be – but economics, let alone highly uncertain economics, doesn’t give a full answer. All we can really agree on is that there seem unlikely to be any net benefits beyond 2070 or so, by which time things are getting really bad – especially if you have ploughed on merrily with business as usual, which Ridley seems to recommend. (He offers no alternative plan.) I won’t be around to see that; with good luck, my children will be. I don’t think theirs will be a problem that can be solved with wellies and sandbags.

All the same, I wish I could trust the arguments Ridley brings to the table. But, surprised by his passing suggestion that polar bears are fine and might even benefit from a bit of polar warming, I decided to check. The US Geological Survey in Alaska says “Our analysis of those data has shown that longer ice-free seasons have resulted in reduced survival of young and old polar bears and a population decline over the past 20 years. Recent observations of cannibalism and unexpected mortalities of prime age polar bears in Alaska are consistent with a population undergoing change.” The National Wildlife Federation says “The chief threat to the polar bear is the loss of its sea ice habitat due to global warming.” It’s impossible to generalize, however: studies suggest that many polar-bear populations will be wiped out within a few decades without human intervention, but some seem to be doing OK and may survive indefinitely (although climate change may introduce other threats, such as disease). If I were a polar bear, I’d feel decidedly less than sanguine about these forecasts. I’d also suspect that Ridley is less the “rational optimist” he styles himself, and more the wishful thinker.

Wednesday, February 19, 2014

The benefits of bendy wings


Here’s my latest news story for Nature.

________________________________________________________________

From insects to whales, flying and swimming animals use the same trick.

A new design principle that enables animals to fly has been discovered by a team of US researchers. They say that the same principle is used for propulsion by aquatic creatures, and suggest that it could supply guidance for designing artificial devices that propel themselves through air and water. The work is published today in Nature Communications [1].

The findings are welcomed by animal-flight expert Graham Taylor of Oxford University, who says that they “should certainly prove a fruitful area for future research”.

The earliest dreams of human flight, from Icarus to Leonardo da Vinci, drew on the notion of flapping wings, like those of birds or bats. But practical designs from the Wright brothers onward have largely abandoned this design in favour of the stationary aerofoil wing. Does that have to be the way, or might artificial flapping-wing devices be built?

In fact, a few have been built already, but only very recently. In 2011, the German automation technology company Festo announced a small remote-controlled aircraft called the SmartBird that used flapping wings, based on the motion of a seagull. Aerospace engineers at the University of Illinois at Urbana-Champaign are developing a “robotic bat” [2]. Some flying devices have also been based on insect flapping-wing flight, using rapid wingbeats to produce upwards thrust that allows hovering and high manoeuvrability [3], while others have mimicked the undulating movements of jellyfish [4-6].

Developing flying machines based on bird-like flapping-wing aerodynamics is hampered by the lack of information about how birds achieve stability and control. John Costello of Providence College in Rhode Island, USA, and his colleagues believed that these flight properties might depend crucially on the fact that, unlike the wings of many human craft, animal wings are not rigid but flexible.

Yet there have been conflicting views on how wing flexibility affects the thrust produced by wing flapping, even to the extent of whether it helps or hinders. Costello and colleagues decided to take an empirical approach – to look at just how much real animal wings deform during flight.

They suspected that the same effect of bending should be evident in the operation of fins and flukes used for propulsion in water. In fact, they were initially motivated by their participation in a project for the US Office of Naval Research to develop a biologically inspired “jellyfish vehicle” [5,6].

That work, says Costello, showed that “the addition of a simple passive flap to an otherwise fairly rigid bending surface resulted in orders of magnitude increases in propulsive performance”. But what exactly were the rules behind these bending effects? “We reasoned that animals solved this problem several hundred million years ago”, says Costello, “so we decided to start by looking at natural forms.”

To gather data on the amount of deformation of wings and fins during animal movement, the researchers combed YouTube and Vimeo for video footage of species ranging from fruitflies to humpback whales and from molluscs to bats. They had to be extremely selective in what they used. They needed footage of steady motion (no slowing or speeding up), they needed to compare many flapping cycles for the same species, and they needed to find motion in the plane perpendicular to the line of vision, to obtain accurate information on the amount of bending.

These data were painstakingly collected by team members Kelsey Lucas of Roger Williams University in Bristol, Rhode Island, and Nathan Johnson at Providence College. “I’m not sure how many hundreds or thousands of video sequences they viewed and discarded”, Costello admits. “It took many months of searching.”

They found that, for all the vast diversity of propulsor shapes and structures – gossamer-thin membranes, feathered wings, thick and heavy whale tails – there was rather little variation in the bending behaviour when measured (by eye) using the right variables. Specifically, when the data were plotted on a graph of the “flexion ratio” – the ratio of the length from “wing” base to the point where bending starts, to the total “wing” length – against maximum bending angle, all the points clustered within a small region.
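
To make that measurement concrete, here is a minimal sketch (my own illustration, assuming NumPy; it is not the authors' code, and the example coordinates are invented) of how a flexion ratio and bending angle might be estimated from three digitized points along a wing or fin – the base, the point where bending begins, and the tip:

```python
import numpy as np

def flexion_metrics(base, flex_point, tip):
    """Estimate flexion ratio and bending angle from three digitized 2D points.

    Flexion ratio = (length from base to flexion point) / (total length),
    with total length approximated by the path base -> flexion point -> tip.
    """
    base, flex_point, tip = (np.asarray(p, dtype=float) for p in (base, flex_point, tip))
    inner = flex_point - base                      # stiff inner section
    outer = tip - flex_point                       # bending outer section
    len_inner, len_outer = np.linalg.norm(inner), np.linalg.norm(outer)
    flexion_ratio = len_inner / (len_inner + len_outer)
    cos_bend = np.dot(inner, outer) / (len_inner * len_outer)
    bend_angle = np.degrees(np.arccos(np.clip(cos_bend, -1.0, 1.0)))   # 0 = perfectly straight
    return flexion_ratio, bend_angle

# Invented example coordinates (arbitrary units), not taken from the study:
print(flexion_metrics(base=(0, 0), flex_point=(6, 0), tip=(9, 2)))
```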

In other words, this seems to be an example of “convergent evolution” – animals with very different evolutionary backgrounds have all “found” the same solution to a common problem, in this case the most effective bending criterion for propulsion through fluids. “Whether an animal is a fish or mollusc swimming in water or an insect or bird flying through the air, they all evolved to move within fluid environments”, says Costello. “Their evolution has been governed by the physical laws that determine fluid interactions. It doesn’t matter whether they originated from crawling, walking or jumping ancestors – once they adapted to a fluid, they evolved within a system determined by a common set of limits.”

“Perhaps the simple fact that wings, fins, and flukes of all shapes and sizes deform in a similar manner is not so surprising”, says Taylor. “What is surprising is the coupled variation in materials, morphology, and movement that this similarity implies” – so that “the comparatively flimsy wing of an insect deforms to the same extent in flight as does the powerful fleshy tail fluke of a killer whale”.

Costello is cautiously optimistic about translating the findings into aeronautical engineering design principles. First, he says, more needs to be known about why this narrow range of bending motions is advantageous for propulsion. “We hope to uncover more about the hydrodynamic reasons that these patterns are so common”, he says. “Maybe then the advantages that these animals have found in these traits can be translated into human designs.”

References
1. Lucas, K. N. et al., Nature Commun. 5, 3293 (2014).
2. Kuang, P. D., Dorothy, M. & Chung, S.-J. Am. Inst. Aeronaut. Astronaut. (AIAA) paper 2011-1435 (2011). doi: 10.2514/6.2011-1435
3. Ma, K. Y., Chirarattananon, P., Fuller, S. B. & Wood, R. J. Science 340, 603–607 (2013).
4. Ristroph, L. & Childress, S. J. R. Soc. Interface http://dx.doi.org/10.1098/rsif.2013.0992 (2014).
5. Villanueva, A., Smith, C. & Priya, S. Bioinsp. Biomim. 6, 036004 (2011).
6. Colin, S. P. et al., PLoS ONE 7, e48909 (2012).

Friday, February 14, 2014

Making sense of music - in Italian


A popular-science magazine called Sapere has been published monthly in Italy since 1935. (There's a nice history of science popularization in Italy here.) Sapere is now produced by the Italian publisher Dedalo, who are aiming to revitalize it. They have asked me to contribute a regular column, which will be about the cognition of music. Each month I’ll focus on one or two particular pieces of music and explain how they do what they do. Here’s a slightly extended version of the introductory column, which takes as its subject “Over the Rainbow”.

___________________________________________________________

How do we make sense of music, and why does it move us? While much is still mysterious about it, some is not. Cognitive science and neuroscience are starting to reveal rules that our minds use to turn a series of notes and chords into a profound experience that speaks to us and reaches into the depths of our soul. In these columns I’ll aim to explain some of the rules, tricks and principles that turn sound into music.

One of the first things we notice about a song is the melody. Certain melodies capture our attention and interest more than others, and the songwriter’s goal is to find ones that stick. How do they do it?

Take “Over the Rainbow”, the ballad written by Harold Arlen and E. Y. Harburg for The Wizard of Oz (1939). We remember it partly for Judy Garland’s plangent voice – it became her signature tune – but it grabs us from the start with that soaring leap on “Some-where”.

Melodic leaps this big are very rare. Statistically speaking, most steps between successive melody notes are small – usually just between adjacent notes in the musical scale. For music all around the world, the bigger the step, the less often it is used. Partly this may be because it’s generally easier to sing or play notes that are closer together, but there is also a perceptual reason. Small steps in pitch help to “bind” the notes into a continuous phrase: we hear them as belonging to the same tune. The bigger the step, the more likely we’ll perceive it as a break in the tune. This is one of the rules deduced by the Gestalt psychologists around the start of the twentieth century, who were interested in how the mind groups stimuli together to form an organized picture of the world that gives rise to them. They were primarily interested in vision, but the ‘gestalt principles’ apply to auditory experience too.

But small pitch steps can sound boringly predictable after a while, like nursery rhyme tunes. To create memorable tunes, sometimes songwriters have to take a chance on bigger leaps. The one in the first two notes of “Over the Rainbow” is particularly big: a full octave. The same leap occurs at the beginning of “Singin’ in the Rain”. Typically only one percent or so of pitch steps in melodies are this big. That means they stand out as memorable – but how come we still hear it as a tune at all, when the gestalt principles seem to say that big jumps cause a perceptual break-up?

Well, Arlen (who wrote the music) has added safeguards against that, probably quite unconsciously. First, the two notes on “Some-where” are long ones – you could say that “-where” is held for long enough for the brain to catch up after the leap. Second, the leap comes right at the start of the song, before there’s even really been time for a sense of the tune to develop at all. Third, and perhaps most important, the leap is not alone. There are similar big jumps in pitch (although not quite as big) at the start of the second and third phrases too (“Way up…”, “There’s a…”). In this way, the composer is signalling that the big jumps are an intentional motif of the song – he’s telling us not to worry, this is just a song with big pitch jumps in it. This is a general principle: if you hear a big pitch jump in a melody, it’s very likely that others will follow. In this way, tunes can create their own ‘rules’ which can over-ride the gestalt principles and produce something both coherent and memorable.
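
As a toy illustration of the interval statistics described above (my own sketch; the note numbers are invented rather than a transcription of the song), one can measure the size of each melodic step from a list of MIDI pitches and pick out the rare octave-sized leaps:

```python
# Hypothetical melody as MIDI note numbers; its only relevant feature is an
# opening leap of 12 semitones (an octave) followed by mostly stepwise motion.
melody = [60, 72, 71, 67, 69, 71, 72]

steps = [abs(b - a) for a, b in zip(melody, melody[1:])]
print(steps)                                   # [12, 1, 4, 2, 2, 1]

octave_leaps = [s for s in steps if s >= 12]
print(f"{len(octave_leaps)} of {len(steps)} steps span an octave or more")
```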

Wednesday, February 12, 2014

Closer to ignition

Here’s the original draft of my latest piece for Nature news.

___________________________________________________________________

Another milestone is passed on the long road to fusion energy

The usual joke about controlled nuclear fusion, which could provide much ‘cleaner’ nuclear power than fission, is that it has been fifty years away for the past fifty years. But it just got a bit closer. In a report published in Nature today [1], a team of researchers at the US National Ignition Facility (NIF), based at Lawrence Livermore National Laboratory in California, say that their fusion experiments have managed to extract more energy from the nuclear process than was absorbed by the fuel to trigger it.

That’s certainly not the much-sought “break-even” point at which a fusion reactor can generate more energy than it consumes, because there are many other processes that consume energy before it even reaches the nuclear fuel. But it represents “a critical step on the path to ignition”, according to Mark Herrmann of Sandia National Laboratories in Albuquerque, New Mexico, who heads the project on high-energy X-ray pulses there.

While nuclear fission extracts nuclear energy released during breakup of very heavy nuclei such as uranium, nuclear fusion – the process that powers stars – produces energy by the coalescence of very light nuclei such as hydrogen. A tiny part of the masses of the separate hydrogen nuclei is converted into energy during the reaction.

Although the basic physics of fusion is well understood, conducting it in a controlled manner in a reactor – rather than releasing the energy explosively in a thermonuclear hydrogen bomb – has proved immensely difficult, largely because of the challenge of containing the incredibly hot plasma that fusion generates.

There is no agreed way of doing this, and fusion projects in different parts of the world are exploring a variety of solutions. In most of these projects the fuel consists of the heavy hydrogen isotopes deuterium and tritium, which react to produce the isotope helium-4.

A lot of energy must be pumped into the fuel to drive the nuclei close together and overcome their electrical repulsion. At the NIF this energy is provided by 192 high-power lasers, which send their beams into a bean-sized gold container called a hohlraum, in which the fuel sits inside a plastic capsule. The laser energy is converted into X-rays, some of which are absorbed by the fuel to trigger fusion. Most of the energy, however, is absorbed by the hohlraum itself. That’s why obtaining gain (more energy out than in) within the fuel itself is only a step along the way to “ignition”, the point at which the reactor as a whole produces energy.

The fuel is kept in a plastic shell called the ablator. This absorbs the energy in the hohlraum and explodes, creating the high pressure that makes the fuel implode to reach the high density needed to start fusion. But that pressure can burst through the ablator at weak points and destabilize the implosion, mixing the fuel with the ablator plastic and reducing the efficiency of the fusion process.

The NIF team’s success, achieved in experiments conducted between last September and this January, comes from ‘shaping’ the laser pulses to deliver more power early in the pulse. This creates a relatively high initial temperature in the hohlraum which “fluffs up” the plastic shell. “This fluffing up greatly slows down growth of the instability”, says team leader Omar Hurricane.

As a result, the researchers have been able to achieve a “fuel energy gain” – a ratio of energy released by the fuel to energy absorbed – of between 1.2 and 1.9. “This has never been done before in laboratory fusion research”, says Herrmann. “It’s a very promising advance.”

He adds that much of the energy released was produced by self-heating of the fuel through the energetic helium nuclei (alpha particles) released in the fusion reactions – an important requirement for sustaining the fusion process.

But fusion energy generation still remains a distant goal, for which Hurricane admits he can’t yet estimate a timescale. “Our total gain – fusion energy out divided by laser energy in – is only about 1%”, he points out.
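
To see why a fuel gain above one still leaves the total gain near one percent, a rough back-of-envelope sketch helps. The numbers below are my own order-of-magnitude assumptions for illustration, not figures quoted from the NIF paper:

```python
laser_energy_J = 1.8e6        # ~1.8 MJ delivered by the 192 laser beams (assumed)
fuel_absorbed_J = 1.0e4       # ~10 kJ actually absorbed by the DT fuel (assumed)
fusion_yield_J = 1.5e4        # ~15 kJ released by the fusion reactions (assumed)

fuel_gain = fusion_yield_J / fuel_absorbed_J    # ~1.5: within the reported 1.2-1.9 range
total_gain = fusion_yield_J / laser_energy_J    # ~0.008: roughly the 1% Hurricane mentions
print(f"fuel gain ~ {fuel_gain:.1f}, total gain ~ {total_gain:.1%}")
```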

“This is more than a little progress, but still modest in terms of energy generation”, Hurricane says. “Our goal right now is to more than double the final pressures in our implosion, by making it go faster and improving its shape.”

Meanwhile, other projects, such as the International Thermonuclear Experimental Reactor (ITER) under construction in southern France, will explore different approaches to fusion. “When trying to solve hard problems it is wise to have multiple approaches, as every potential solution has pros and cons”, says Hurricane.

References
1. Hurricane, O. A. et al., Nature advance online publication doi:10.1038/nature13008 (2014).

Tuesday, February 04, 2014

Colour coordinated

Here's a talk, nicely recorded, that I gave on colour and chemistry for the "Big Ideas" course at Bristol at the end of last year. There's a version of this floating on the web that I gave at Michigan something like ten years ago, which cruelly lays bare the ravages of time. For colour junkies, there is another little snippet here for the Atlantic magazine on Newton's spectrum.

Friday, January 31, 2014

What mathematicians do

Yes, it looks different. It sort of just happened. I was updating my links, and then they all vanished and I got this new look, complete with Twitter feed. The web works in mysterious ways.

Anyway, here is a piece that has been published in the February issue of Prospect. There will be more here on the Polymath project some time in the near future.

______________________________________________________________

Our cultural relationship with the world of mathematics is mythologized like no other academic discipline. While the natural sciences are seen to keep some roots planted in the soil of daily life, in inventions and cures and catastrophes, maths seems to float freely in an abstract realm of number, as much an art as a science. More than any white-coated boffin, its proponents are viewed as unworldly, with minds unfathomably different from ours. We revel in stories of lone geniuses who crack the most refractory problems yet reject status, prizes and even academic tenure. Maths is not just a foreign country but an alien planet.

Some of the stereotypes are true. When the wild-haired Russian Grigori Perelman solved the notorious Poincaré conjecture in 2003, he declined first the prestigious Fields Medal and then (more extraordinarily to some) the $1m Millennium Prize officially awarded to him in 2010. The prize was one of seven offered by the non-profit, US-based Clay Mathematics Institute for solutions to seven of the most significant outstanding problems in maths.

Those prizes speak to another facet of the maths myth. It is seen as a range of peaks to be scaled: a collection of ‘unsolved problems’, solutions of which are guaranteed to bring researchers (if they want it) fame, glory and perhaps riches. In this way maths takes on a gladiatorial aspect, encouraging individuals to lock themselves away for years to focus on the one great feat that will make their reputation. Again, this is not all myth; most famously, Andrew Wiles worked in total secrecy in the 1990s to conquer Fermat’s Last Theorem. Even if maths is in practice more comradely than adversarial – people have been known to cease working on a problem, or to avoid it in the first place, because they know someone else is already doing so – nonetheless its practitioners can look like hermits bent on Herculean Labours.

It is almost an essential part of this story that those labours are incomprehensible to outsiders. And that too is often the reality. I have reported for several years now on the Abel prize, widely seen as the ‘maths Nobel’ (not least because it is awarded by the Norwegian Academy of Science and Letters). Invariably, describing what the recipients are being rewarded for becomes an impressionistic exercise, a matter of sketching out a nested Russian doll of recondite concepts in a tone that implies “Don’t ask”.

Yet this public image of maths is only part of the story. For one thing, some of the hardest problems are actually the most simply stated. Fermat’s Last Theorem, named after the seventeenth-century mathematician Pierre Fermat who claimed to have a proof that he couldn’t fit in the page margin, is a classic example. It states that there are no positive whole-number solutions for a, b, and c in the equation a**n + b**n = c**n if n is a whole number larger than 2. Because it takes only high-school maths to understand the problem, countless amateurs were convinced that high-school maths would suffice to solve it. When I was an editor at Nature, ‘solutions’ would arrive regularly, usually handwritten in spidery script by authors who would never accept they had made trivial errors. (Apparently Wiles’ solution, which occupied 150 pages and used highly advanced maths, has not deterred these folks, who now seek acclaim for a ‘simpler’ solution.)

The transparency of Fermat’s Last Theorem is shared by some of the other Millennium Prize problems and further classic challenges in maths. Take Goldbach’s conjecture, which makes a claim about the most elusive of all mathematical entities – the prime numbers. These are integers that have no factors other than themselves and 1: for example, 2, 3, 5, 7, 11 and 13. The eighteenth-century German mathematician Christian Goldbach is credited with proposing that every even integer greater than 2 can be expressed as the sum of two primes: for example, 4=2+2, 6=3+3, and 20=7+13. One can of course simply work through all the even numbers in turn to see if they can be chopped up this way, and so far the conjecture has been found empirically to hold true up to about 4x10**18. But such number-crunching is no proof, and without it one can’t be sure that an exception won’t turn up around, say, 10**21. Those happy to accept that, given the absence of exceptions so far, they’re unlikely to appear later, are probably not destined to be mathematicians.
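
As a small illustration of what that empirical checking involves (my own sketch, nothing to do with the record-holding computations), here is how one might verify Goldbach’s conjecture for small even numbers:

```python
def is_prime(n):
    """Trial division: fine for small numbers, hopeless for serious searches."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_pair(n):
    """Return one pair of primes summing to the even number n, or None if none exists."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

for n in range(4, 1001, 2):
    assert goldbach_pair(n) is not None   # no counterexample among small even numbers

print(goldbach_pair(20))   # (3, 17); the 7 + 13 cited above is another valid decomposition
```

However far such a loop is pushed, it only ever confirms individual cases; it can never amount to the proof mathematicians want.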

Goldbach’s conjecture would be an attractive target for young mathematicians seeking to make their name, but it won’t make them money – it’s not a Millennium Problem. One of the most alluring of that select group is a problem that doesn’t really involve numbers at all, but concerns computation. It is called the P versus NP problem, and is perhaps best encapsulated in the idea that the answer to a problem is obvious once you know it. In other words, it is often easier to verify an answer than to find it in the first place. The NP vs P question is whether, for all problems that can be verified quickly (there’s a technical definition of ‘quickly’), there exists a way of actually finding the right answer comparably fast. Most mathematicians and computer scientists think that this isn’t so – in formal terms, that NP is not equal to P, meaning that some problems are truly harder to solve than to verify. But there’s no proof of that.
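
A concrete way to feel that asymmetry is the subset-sum problem, a standard example from the NP family (my own sketch, not connected to Polymath 9): checking a proposed answer takes a moment, while the obvious way of finding one tries every subset.

```python
from itertools import combinations

def verify(numbers, target, certificate):
    """Fast check: do the proposed numbers come from the list and sum to the target?"""
    remaining = list(numbers)
    for x in certificate:
        if x not in remaining:
            return False
        remaining.remove(x)
    return sum(certificate) == target

def solve(numbers, target):
    """Brute-force search: try every subset, which blows up exponentially with len(numbers)."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return subset
    return None

nums = [14, 7, 23, 5, 11, 19]
print(solve(nums, 37))                  # (14, 23) – found by exhaustive search
print(verify(nums, 37, (7, 11, 19)))    # True – a different certificate, checked instantly
```

Whether the exponential search can always, in principle, be replaced by something as fast as the check is exactly the P versus NP question.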

This is a maths challenge with unusually direct practical implications. If NP=P, we would know that, for some computing problems that are currently very slow to solve, such as finding the optimal solution to a complex routing problem, there is in fact a relatively efficient way to get the answer. The problem has philosophical ramifications too. If NP=P, this would imply that anyone who can understand Andrew Wiles’ solution to Fermat’s Last Theorem (which is more of us than you might think, given the right guidance) could also in principle have found it. The rare genius with privileged insight would vanish.

Perhaps the heir to the mystique of Fermat’s Last Theorem, meanwhile, is another of the Millennium Problems: the Riemann hypothesis. This is also about prime numbers. They keep popping up as one advances through the integers, and the question is: is there any pattern to the way they are distributed? The Riemann hypothesis implies something about that, although the link isn’t obvious. Its immediate concern is the Riemann zeta function, denoted ζ(s), which is equal to the sum of 1**-s + 2**-s +3**-s +…, where s is a complex number, meaning that it contains a real part (an ‘ordinary’ number) and an imaginary part incorporating the square root of -1. (Already I’m skimping on details.) If you plot a graph of the curve ζ as a function of s, you’ll find that for certain values of s it is equal to zero. Here’s Riemann’s hypothesis: that the values of s for which ζ(s)=0 are always (sorry, with the exception of the negative even integers) complex numbers for which the real part is precisely ½. It turns out that these zero values of ζ determine how far successive prime numbers deviate from the smooth distribution predicted by the so-called prime number theorem. Partly because it pronounces on the distribution of prime numbers, if the Riemann hypothesis can be shown to be true then several other important conjectures would also be proved.
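
For the numerically curious, the nontrivial zeros can be inspected directly with the mpmath library (a purely illustrative sketch of my own, assuming mpmath is installed): each zero it returns sits on the ‘critical line’ with real part ½, exactly as the hypothesis demands – though no amount of such checking constitutes a proof.

```python
from mpmath import zeta, zetazero

for n in range(1, 4):
    s = zetazero(n)            # nth nontrivial zero: 0.5 + 14.134...i, 0.5 + 21.022...i, ...
    print(s, abs(zeta(s)))     # |zeta(s)| is zero to numerical precision at each of them
```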

The distribution of the primes set the context for a recent instructive episode in the way maths is done. Although primes become ever rarer as the numbers get bigger, every so often two will be adjacent odd numbers: so-called twins, such as 26681 and 26683. But do these ‘twin primes’ keep cropping up forever? The (unproven) twin-primes hypothesis says that they do.
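
A quick way to see what is being claimed (my own sketch) is to list the twin pairs among the small numbers, where they are still plentiful:

```python
def is_prime(n):
    """Trial division, adequate for small n."""
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

twins = [(p, p + 2) for p in range(2, 100) if is_prime(p) and is_prime(p + 2)]
print(twins)
# [(3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43), (59, 61), (71, 73)]
```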

In April of last year, a relatively unknown mathematician at the University of New Hampshire named Yitang Zhang unveiled a proof of a "weaker" version of the twin-primes hypothesis which showed that there are infinitely many near-twins separated by less than 70 million. (That sounds like a much wider gap than 2, but it’s still relatively small when the primes themselves are gargantuan.) Zhang, a Chinese immigrant who had earlier been without an academic job for several years, fits the bill of the lone genius conquering a problem in seclusion. But after news of his breakthrough spread on maths blogs, something unusual happened. Others started chipping in to reduce Zhang’s bound of 70 million, and in June one of the world’s most celebrated mathematicians, Terence Tao at the University of California at Los Angeles, set up a project to pool resources under the banner of Polymath, an online ‘crowdsourcing’ collaboration. Before long, 70 million had dropped to 4680. Now, thanks to work by a young researcher named James Maynard at the University of Montreal, it is down to 600.

This extraordinarily rapid progress on a previously recalcitrant problem was thus a collective effort: maths isn’t just about secret labours by individuals. And while the shy, almost gnomically terse Zhang might fit the popular image, the gregarious and personable Tao does not.

What’s more, while projects like the Millennium Problems play to the image of maths as a set of peaks to scale, mathematicians themselves value other traits besides the ability to crack a tough problem. Abel laureates are commonly researchers who have forged new tools and revealed new connections between different branches of mathematics. Last year’s winner, the Belgian Pierre Deligne, who among other things solved a problem in algebraic geometry analogous to the Riemann hypothesis, was praised for being a “theory builder” as well as a “problem solver”, and the 2011 recipient John Milnor was lauded as a polymath who “opened up new fields”. The message for the young mathematician, then, might be not to lock yourself away but to broaden your horizons.

Wednesday, January 29, 2014

Follow this

I thought I could resist. I really did. I was convinced it would just waste my time, and perhaps it will. But here it is: I’m on Twitter. https://twitter.com/philipcball, in case you’re interested. Time will tell.

Friday, January 24, 2014

Great balls of fire


This ball lightning story has got everywhere (like here and here and here), often inaccurately – the paper in Physical Review Letters doesn’t report the first time ball lightning has been captured on video; you can see several such events on YouTube. But it’s the first time that a spectrum of ball lightning has been captured, which lets us see what it’s made of. In this case, that’s apparently dirt: the spectrum shows atomic emission lines characteristic of silicon, calcium and iron. That seems to support the idea that this mysterious atmospheric phenomenon is caused by a conventional lightning strike vaporizing the soil – but actually it’s too early to say whether it supports any particular theory.

In any event, I got a preview of all this because I wrote the story for Physical Review Focus. And I have to say that the notion of conducting field observations on the Qinghai Plateau in the dark during a thunderstorm strikes me as not one of the most desirable jobs in science – Ping Yuan, the group leader, said in what I suspect is considerable understatement that this place is “difficult to access.”

It’s funny stuff, ball lightning. We used to regularly get papers submitted to Nature offering theories of it – these were particularly popular with the Russians – and I had the pleasure of publishing the oft-cited Japanese work in 1991 in which two researchers made what looked like a ball-lightning-style plasma ball in the lab. But we’ve still got a way to go in understanding it. Anyway, I’ve made a little video about this business too.

Addendum: Whoa, now I discover that I wrote this piece 14 years ago about the original theory that ball lightning is a bundle of vaporized dirt.

Tuesday, January 21, 2014

The year of crystallography


Here is 2012 Chemistry Nobel Laureate Brian Kobilka speaking yesterday at the opening ceremony of the International Year of Crystallography at the UNESCO building in Paris. It's a fun but slightly strange gathering, at least in my experience - a curious mixture of science, politics, development programs, and celebration. But UNESCO has some very commendable plans for what this year will achieve, for example in terms of research initiatives in Africa. I had a comment on the IYCr in Chemistry World late last year, and Athene Donald has a nice perspective piece online at the Guardian.

Friday, January 17, 2014

Flight of the robot jellyfish

Here’s my other little piece for Nature news. The videos of this thing in flight, provided on the Nature site, are rather beautiful.

_____________________________________

Its transparent wings fixed to a delicate wire framework recall the diaphanous, veined wings of an insect. But when the flying machine devised by applied mathematicians Leif Ristroph and Stephen Childress of New York University rises gracefully into the air, the undulations of its conical form resemble nothing so much as a jellyfish swimming through water, the device’s electrical power lead trailing like a tentacle. It is, in short, like no other flying machine you have seen before.

This is not the first small artificial ornithopter – a flying machine capable of hovering like a dragonfly or hummingbird by the beating of its wings. But what distinguishes Ristroph and Childress’s craft from those like the flapping insectoid robots reported by researchers at Harvard last year [1], with a wingspan of barely 3 cm, is that it can remain stable in flight with the movement of its wings alone, without the need for additional stabilizers or complex feedback control loops to avoid flipping over. The new ornithopter has four droplet-shaped wings of Mylar plastic film about 5 cm wide, arranged around a spherical body, attached to an articulated carbon-fibre framework driven by a tiny motor and weighing no more than 2.1g in total. It can execute forward flight and stable hovering, and can right itself automatically from tilting. The motion of the wings generates a downward jet, as do the undulations of a jellyfish bell. The absence of this strategy among flying animals, the researchers say, remains a mystery. The work is reported in the Journal of the Royal Society Interface [2].

References
1. Ma, K. Y., Chirarattananon, P., Fuller, S. B & Wood, R. J. Science 340, 603-607 (2013).
2. Ristroph, L. & Childress, S. J. R. Soc. Interface 20130992 (2014).

Wednesday, January 15, 2014

"Irrational" behaviour can be rational

I have a couple of news stories on Nature’s site this week. Here’s the first. This is, I think, more of a cautionary tale than a surprising discovery. One researcher I spoke to put it like this:
“Imagine a person trying to climb to the top of the hill. Each step up takes this decision maker toward her goal. We see this person trudging along upward, but then we see this person not step uphill, but step downhill. Is this irrational? As it turns out there is a large boulder in her way and stepping down and around the boulder made sense considering the larger goal of getting to the top of the hill. But, if you only look at her behavior step by step, then moving downhill will look “irrational” or “intransitive” but really that’s a misunderstanding of the problem and landscape which is larger than just one step at a time. Moreover it is a fundamental misunderstanding of the idea of rationality to demand that every step of the decision maker be upward in order to be rational.”
The moral, I think, is that when we see choices like this that appear to be irrational, it pays to look for “boulders” before assuming that they are truly the result of error, limited cognitive resources, or sheer caprice.

__________________________________________________

Theory shows it may be best to rearrange your preferences if the options might change

You prefer apples to oranges, but cherries to apples. Yet if I offer you just cherries and oranges, you take the oranges. Are you stupid or crazy?

Not necessarily either, according to a new study. It shows that in some circumstances a decision like this, which sounds irrational, can actually be the best one. The work is published in Biology Letters [1].

Organisms, including humans, are often assumed to be evolutionarily hard-wired to make optimal decisions, to the best of their ability. Sticking with fixed preferences when weighing up choices – for example, in selecting food sources – would seem to be one aspect of such rationality. If A is preferred over B, and B over C, then surely A should be selected when the options are just A and C? This seemingly logical ordering of preferences is called transitivity.

What’s more, if A is preferred when both B and C are available, then A should ‘rationally’ remain the first choice from only A and B – a principle called the independence of irrelevant alternatives (IIA).

But sometimes animals don’t display such apparent logic. For example, honeybees and gray jays [2] and hummingbirds [3] have been seen to violate IIA. “On witnessing such behaviour in the past, people have simply assumed that it is not optimal, and then proposed various explanations for it”, says mathematical biologist Pete Trimmer of the University of Bristol in England. “They assume that the individual or species is not adapted to solve the given task, or that it is too costly to compute it accurately.”

The theoretical model of Trimmer and his colleagues shows that, in contrast, violations of transitivity can sometimes be adaptively optimal and therefore perfectly rational. “It should mean that researchers will be less prone to quickly claiming that a particular species or individual is behaving irrationally” in these cases, he says.

The key to the apparent “irrationality” in the Bristol group’s model is that the various choices might appear or disappear in the future. Then the decision becomes more complicated than a simple, fixed ranking of preferences. Is it better to expend time and energy eating a less nutritious food that’s available now, or to ignore it because a better alternative might become available in a moment?

The researchers find that, for some particular choices of the nutritional values of food sources A, B and C, and of their probabilities of appearing or vanishing in the future, an optimal choice for pairs of foods can prefer B to A, C to B and A to C, which violates transitivity.
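
The flavour of the calculation can be sketched in a few lines of code. The following is a toy dynamic-programming model of my own devising, not the model in the Biology Letters paper: foods A, B and C are given made-up values and made-up probabilities of appearing or vanishing, and an optimal forager offered a pair of them must decide whether to eat one now or wait. Playing with the numbers shows how the best pairwise choice can depend on what else might turn up later.

```python
# Toy sketch (not the authors' model): a finite-horizon foraging problem in which
# food items appear and vanish at random, solved by backward induction.
# All names and numbers are invented; vary them to see whether the optimal
# pairwise choices line up into a single transitive ranking.
from itertools import combinations

ITEMS = "ABC"
VALUE = {"A": 3.0, "B": 2.0, "C": 1.0}        # hypothetical nutritional values
P_APPEAR = {"A": 0.1, "B": 0.3, "C": 0.6}     # chance an absent item appears next step
P_VANISH = {"A": 0.6, "B": 0.3, "C": 0.1}     # chance an available item vanishes next step
WAIT_COST = 0.05                              # small cost of waiting one time step
HORIZON = 20                                  # number of decision steps remaining

# Every possible set of currently available items.
STATES = [frozenset(c) for r in range(len(ITEMS) + 1)
          for c in combinations(ITEMS, r)]

def transition_prob(s, s_next):
    """Probability that availability set s becomes s_next after one step."""
    p = 1.0
    for it in ITEMS:
        if it in s:
            p *= (1.0 - P_VANISH[it]) if it in s_next else P_VANISH[it]
        else:
            p *= P_APPEAR[it] if it in s_next else (1.0 - P_APPEAR[it])
    return p

# Backward induction: V[s] is the best expected payoff from state s.
V = {s: 0.0 for s in STATES}                  # at the horizon there is nothing left to gain
policy = {}
for _ in range(HORIZON):
    V_new = {}
    for s in STATES:
        # Option 1: wait, pay the cost, and face tomorrow's availability.
        best_act = "wait"
        best_val = -WAIT_COST + sum(transition_prob(s, s2) * V[s2] for s2 in STATES)
        # Option 2: eat one of the currently available items (ends the episode).
        for it in sorted(s):
            if VALUE[it] > best_val:
                best_act, best_val = it, VALUE[it]
        V_new[s], policy[s] = best_val, best_act
    V = V_new

# Read off the "choice" the optimal forager makes when offered each pair of items.
for pair in combinations(ITEMS, 2):
    print(f"offered {set(pair)}: optimal action is {policy[frozenset(pair)]!r}")
```

With the right combination of values and turnover rates, the action chosen from each pair need not collapse into one consistent ranking of A, B and C, which is the point of the paper.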

Trimmer and colleagues also find some situations where IIA is violated in the optimal solution. These choices look irrational, but aren’t.

Behavioural ecologist Tanya Latty of the University of Sydney, who has observed violations of IIA in the food choices of a slime mould [4], points out that some examples of apparent irrationality seen in the foraging decisions of non-human animals are already understood to come from the fact that animals rarely have all their options available at once. “The choice is not so much ‘which item should I consume’ as ‘should I spend time consuming this particular item, or should I keep looking’?” she explains. “Some of what we perceive as irrational behaviour would then simply be the result of presenting animals with the unusual case of a simultaneous choice, when they have evolved to make optimal sequential choices.”

Latty feels that the new work by Trimmer and colleagues “goes some way toward combining the sequential and simultaneous viewpoints”. It helps to show that “decision strategies that appear irrational in simplified experimental environments can be adaptive in the complex, dynamic worlds in which most organisms live.”

She thinks it might be possible to test these ideas. “I suspect it would be easy enough to train animals (or humans) to forage on items that had different probabilities of disappearing or reappearing. Then you could test whether or not playing with these probabilities influences preferences.” The difficulty is that organisms may already take into account natural tendencies for choices to disappear.

“I think it is absolutely worth investigating further”, Latty says. “It has certainly given me some ideas for future experiments.”

“The paper is very nicely done”, says economist and behavioural scientist Herb Gintis of the Santa Fe Institute in New Mexico, but he adds that “there is nothing anomalous or even surprising about these results.”

Gintis explains that the choices only seem to violate transitivity or IIA because there are in fact more than three options in play. “Usually when IIA fails, the modeler is using the wrong choice space”, he says. “An expansion of the choice space to include probabilities of appearance and disappearance would correct this.”

Trimmer sees no reason why the results shouldn’t apply to humans. “Of course, much of the time we make errors, which is a very simple explanation for any behaviour which appears irrational”, he says. “But an individual who displays intransitive choices is not necessarily behaving erroneously.”

He feels that such behaviour could surface in economic contexts, for example in cases where people are choosing investment strategies from savings schemes that may or may not be available in the future. In other words, while economic behaviour is clearly not always rational (whatever some economists have assumed), we shouldn’t be too hasty in assuming that what seems irrational necessarily is.

References
1. McNamara, J. M., Trimmer, P. C. & Houston, A. I. Biol. Lett. 20130935 (2014).
2. Shafir, S., Waite, T. A. & Smith, B. H. Behav. Ecol. Sociobiol. 51, 180-187 (2002).
3. Bateson, M., Healy, S. D. & Hurly, T. A. Anim. Behav. 63, 587-596 (2002).
4. Latty, T. & Beekman, M. Proc. R. Soc. B 278, 307-312 (2011).

Tuesday, January 14, 2014

The future of physics

Research Funding Insight recently asked me to write a piece on the “future of physics”, to accompany a critique of string theory and its offshoots by Jim Baggott (see below). I wanted to take the opportunity to explain that, whatever the shortcomings of string theory might be, they most certainly do not leave physics as a whole in crisis. It is doing very nicely, because it is much, much broader than both string theory in particular and what gets called “fundamental physics” in general. So here it is. The article first appeared in Funding Insight on 7 January 2014, and I’m reproducing it here with kind permission of Research Professional. For more articles like this (including Jim Baggott’s), visit www.researchprofessional.com.

________________________________________________________________

Is physics at risk of disappearing up its own foundations? To read some of the recent criticisms of work on string theory, which seeks a fundamental explanation of all known forces and particles, you might think so. After about three decades of work, the theory is no closer to a solution – or rather, the number of possible solutions has mushroomed astronomically, while none of them is testable and they all rest on a base of untried speculations.

But while scepticism about the prospects for this alleged Theory of Everything may be justified, it would be mistaken to imagine that the difficulties are gnawing away at the roots of physics. They are the concern of only a tiny fraction of physicists, while many others consider them esoteric at best and perhaps totally irrelevant.

Don’t imagine either that the entire physics community has been on tenterhooks to see what the Large Hadron Collider at CERN in Geneva will come up with, or whether, now that it seems to have found the Higgs boson, the particle accelerator will open up a new chapter in fundamental physics that takes in such mysterious or speculative concepts as dark matter and supersymmetry (a hitherto unseen connection between different classes of particles).

Strings and the LHC are the usual media face of physics: what most non-physicists think physicists do. This sometimes frustrates other physicists intensely. “High-energy physics experiments are over-rated, and are not as significant as they were decades ago”, says one, based in the US. “Now it is tiny increments in knowledge, at excessive costs – yet these things dominate the science news.”

Given the jamboree that has surrounded the work at the LHC, especially after the award of the 2013 Nobel prize in physics to Peter Higgs (with François Englert, who also proposed the particle now known by Higgs’ name), it is tempting to dismiss this as sour grapes. But there’s more to it than the resentment of one group of researchers at seeing the limelight grabbed by another. For the perception that the centre of gravity of physics lies with fundamental particles and string theory reflects a deep misunderstanding about the whole nature of the discipline. The danger is that this misunderstanding might move beyond the general public and media and start to infect funders, policy-makers and educationalists.

The fact is that physics is not a quest for isolated explanations of this or that phenomenon (and string theory, for all its vaunted status as a Theory of Everything, is equally parochial in what it might ‘explain’). Physics attempts to discover how common principles apply to many different aspects of the physical world. It would be foolish to suppose that we know what all these principles are, but we certainly know some of them. In a recent article in Physics World, Peter Main and Charles Tracy from the Institute of Physics’ education section made a decent stab at compiling a list of what constitutes “physics thinking”. It included the notions of Reductionism, Causality, Universality, Mathematical Modelling, Conservation, Equilibrium, the idea that differences cause change, Dissipation and Irreversibility, and Symmetry and Broken Symmetry. There’s no space to explain all of these, but one might sum up many of them in the idea that things change for identifiable reasons; often those reasons are the same in different kinds of system; we can develop simplified maths-based descriptions of them; and when change occurs, some things (like total energy) stay the same before and after.

Many of these notions are older than is sometimes supposed. Particle physicists, for example, have been known to imply that the concept of symmetry-breaking – whereby a state with less symmetry appears spontaneously from one with more – was devised in the 1950s and 60s to answer some problems in their field. The truth is that this principle was already inherent in the work of the Dutch scientist Johannes Diderik van der Waals in 1873. Van der Waals wasn’t thinking about particle physics, which didn’t even exist then; he was exploring the way that matter interconverts between liquid and gas states, in what is called a phase transition. Phase transitions and symmetry breaking have since proved to be fundamental to all areas of physics, ranging from the cosmological theory of the Big Bang to superconductivity. Looked at one way, the Higgs boson is the product of just another phase transition, and indeed some of the ideas found in Higgs’ theory were anticipated by earlier work on superconductivity – the low-temperature transition that gives some materials resistance-free electrical conduction.

Or take quantum theory, which began to acquire its modern form when in 1926 Erwin Schrödinger wrote down a ‘wavefunction’ to describe the behaviour of quantum particles. Schrödinger didn’t just pluck his equation from nowhere: he adapted it from the centuries-old discipline of wave mechanics, which describes what ordinary waves do.

This is not to say that physicists are always stealing old ideas without attribution. Quite the opposite: it is precisely because they were so thoroughly immersed in the traditions and ideas of classical physics, going back to Isaac Newton and Galileo, that the physicists of the early twentieth century such as Einstein, Max Planck and Niels Bohr were able to instigate the revolutionary new ideas of quantum theory and relativity. All the best physicists of the modern era, such as Richard Feynman and the Soviet theorist Lev Landau, have had a deep appreciation of the connections between old and new ideas. Feynman’s so-called path-integral formulation of quantum electrodynamics, which supplied a quantum theory of how light interacts with matter, drew on the eighteenth-century classical mechanics of Joseph Louis Lagrange. It is partly because they point out these connections that Feynman’s famous Lectures on Physics are so revered; the links are also to be found, in more forbidding Soviet style, in the equally influential textbooks by Landau and his colleague Evgeny Lifshitz.

The truly profound aspect of central concepts like those proposed by Main and Tracy is that they don’t recognize any distinctions of time and space. They apply at all scales: to collisions of atoms and bumper cars, to nuclear reactions and solar cells. It seems absurd to imagine that the burst of ultrafast cosmic expansion called inflation, thought to be responsible for the large-scale structure of the universe we see today, has any connection with the condensation of water on the windowpane – but it does. Equally, that condensation is likely to find analogies in the appearance of dense knots and jams in moving traffic. Looked at this way, what is traditionally called fundamental physics – theories of the subatomic nature of matter – is no more fundamental than is the physics of sand or sound. It merely applies the same concepts at smaller scales.

This, then, is one important message for physics education: don’t teach it as a series of subdisciplines with their own unique set of concepts. Or if you must parcel it up in this way, keep the connections at the forefront. It’s also a message for students: always consider how the subject you’re working on finds analogues elsewhere.

All this remains true even while – one might even say especially as – physics ventures into applied fields. It’s possible (honestly) to see something almost sublime in the way quantum theory describes the behaviour of electrons in solids such as the semiconductors of transistors. On one level it’s obvious that it should: quantum theory describes very small things, and electrons are very small. But the beauty is that, under the auspices of quantum rules, electrons can get marshalled into states that mirror those in quite different and more exotic systems. They can acquire ‘orbits’ like those in atoms, so that blobs of semiconductor can act as artificial atoms. They can get bunched into pairs or other groups that travel in unison, giving us superconductivity, itself analogous to the weird frictionless superfluid behaviour of liquid helium. One of the most interesting features of the atom-thick carbon sheets called graphene is not that they will provide new kinds of touch-screen (we have those already) but that their electrons, partly by virtue of being trapped in two dimensions, can collectively behave like particles called Dirac fermions, which have no mass and move at the speed of light. The electrons don’t actually do this – they just ‘look’ like particles that do. In such ways, graphene enables experiments that seem to come from the nether reaches of particle physics, all in a flake of pencil lead on a desktop.

As graphene promises to show, these exotic properties can feed back into real applications. Other electronic ‘quasiparticles’ called excitons (a pairing of an electron with a gap or ‘hole’ in a pervasive electron ‘sea’) are responsible for the light emission from polymers that is bringing flexible plastics to screens and display technology. In one recent example, an exotic form of quantum-mechanical behaviour called Bose-Einstein condensation, which has attracted Nobel prizes after being seen in clouds of electromagnetically trapped ultracold gas, has been achieved in the electronic quasiparticles of an easily handled plastic material at room temperature, making it possible that this once arcane phenomenon could be harnessed cheaply to make new kinds of laser and other light-based devices.

There is a clear corollary to all this for allocating research priorities in physics: you never know. However odd or recondite a phenomenon or the system required to produce it, you never know where else it might crop up and turn out to have uses. That of course is the cliché attached to the laser: the embodiment of a quirky idea of Einstein’s in 1917, it has come to be almost as central to information technology as the transistor.

Does this mean that physics, by virtue of its universality, can in fact have no priorities, but must let a thousand flowers bloom? Probably the truth is somewhere in between: it makes sense, in any field of science, to put some emphasis on areas that look particularly technologically promising or conceptually enriching, as well as curbing areas that seem to have run their course. But it would be a mistake to imagine that physics, any more than Darwinian evolution, has any direction – that somehow the objective is to work down from the largest scales towards the smaller and more ‘fundamental’.

Another reason to doubt the overly reductive approach is supplied by Michael Berry, a distinguished physicist at the University of Bristol whose influential work has ranged from classical optics and mechanics to quantum chaos. “There are different kinds of fundamentality”, says Berry. “As well as high-energy and cosmology, there are the asymptotic regimes of existing theories, where new phenomena emerge, or lurk as borderland phenomena between the theories.” Berry has pointed out that in an ‘asymptotic regime’ where some parameter in a theory is shrunk to precisely zero (as opposed to being merely made very small), the outcomes of the theory can change discontinuously: you might find some entirely new, emergent behaviour.
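
A textbook caricature of such a discontinuity – an illustrative toy case, not one of Berry’s own examples – is a quantity defined as a long-time limit that depends on a small decay rate:

```latex
% Toy singular limit: F(\epsilon) is a long-time limit depending on a decay rate \epsilon.
F(\epsilon) \;=\; \lim_{t \to \infty} e^{-\epsilon t} \;=\;
\begin{cases}
  0, & \epsilon > 0,\\
  1, & \epsilon = 0,
\end{cases}
\qquad \text{so} \qquad
\lim_{\epsilon \to 0^{+}} F(\epsilon) \;=\; 0 \;\neq\; F(0) \;=\; 1 .
```

However tiny the parameter, the outcome is qualitatively different from the outcome at exactly zero.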

As a result, these ‘singular limits’ can lead to new physics, making it not just unwise but impossible to try to derive the behaviour of a system at one level from that at a more ‘fundamental’ level. That’s a reason to be careful about Main and Tracy’s emphasis on reductionism. Some problems can be solved by breaking them down into simpler ones, but sometimes that will lose the very behaviour you’re interested in. “If you don’t think emergence is important too, you won't get far as a condensed matter physicist”, says physicist Richard Jones, Pro-Vice-Chancellor for Research and Innovation at the University of Sheffield.

It’s important to recognize too that the biggest mysteries, however alluring they seem, may not be the most pressing, nor indeed the most intellectually demanding or enriching. The search for dark matter is certainly exciting, well motivated, and worth pursuing. But at present it is only rather tenuously linked to the mainstream of ideas in physics – we have so few clues, either observationally or theoretically, about how to look or what we hope to find, that it is largely a matter of blind empiricism. It is usually wise not to spend too much of your time stumbling around in the dark.

With all this in mind, here are a few suggestions for where what we might call ‘small physics’ might usefully devote some of its energies in the coming years:

- quantum information and quantum optics: even if quantum computers aren’t going to be a universal game-changer any time soon, the implications of pursuing quantum theory as an information science are vast, ranging from new secure communications technologies to deeper insights into the principles that really underpin the quantum world.

- the physics of biology: this can mean many things, from understanding how the mechanics of cells determine their fate (stem cells sometimes select their eventual tissue type from how the individual cells are pulled and tugged) to the question of whether phase transitions underpin cancer, brain activity and even natural selection. This one needs handling with care: physicists are likely to go badly astray unless they talk to biologists.

- materials physics: from new strong materials to energy generation and conversion, it is essential to develop an understanding of how materials systems behave over a wide range of size scales (and that’s not necessarily a problem to tackle from the bottom up). Such knowhow is likely to be central to a scientific basis for sustainability.

- new optical technologies: you’ve probably heard about invisibility cloaks, and while some of those claims need to be taken with a pinch of salt, the general idea that light can be moulded, manipulated and directed by controlling the microstructure of materials (such as so-called photonic band-gap materials and metamaterials) is already leading to new possibilities in display technologies, telecommunications and computing.

- electronics: this one kind of goes without saying, perhaps, but the breadth and depth of the topic is phenomenal, going way beyond ways to make transistors ever smaller. There is a wealth of weird and wonderful behaviour in new and unusual materials, ranging from spintronics (electronics that uses the quantum spins of electrons) to molecular and polymer electronics and unusual electronic behaviour on the surfaces of insulators (check out “topological insulators”).

None of this is to deny the value of Big Physics: new accelerators, telescopes, satellites and particle detectors will surely continue to reveal profound insights into our universe. But they are only part of a bigger picture.

Most of all, it isn’t a matter of training physicists to be experts in any of these (or other) areas. Rather, they need to know how to adapt the powerful tools of physics to whatever problem is at hand. The common notion (or is it just in physics?) that a physicist can turn his or her hand to anything is a bit too complacent for comfort, but it is nonetheless true that a ‘physics way of thinking’ is a potential asset for any science.

Monday, January 13, 2014

A prize for Max von Laue

In my book Serving the Reich, I make some remarks about the potential pitfalls of naming institutions, prizes and so forth after “great” scientists, and I say that, while my three main subjects Max Planck, Werner Heisenberg and Peter Debye are commemorated in this way, Max von Laue is not (“to my knowledge”). This seemed ironic, given that during the Nazi era Laue much more obviously and courageously resisted the regime than did these others.

Crystallographer Udo Heinemann of the Max Delbrück Centre for Molecular Medicine in Berlin has pointed out to me that a Max von Laue prize does in fact exist. It is awarded by the German Crystallographic Society (Deutsche Gesellschaft für Kristallographie, DGK) annually to junior scientists for “outstanding work in the field of crystallography in the broadest sense”, and is worth 1500 euros. I have discussed elsewhere the perils of this “name game”, but given that everyone plays it, I am pleased to see that Laue has not been overlooked. It seems all the more fitting to have this pointed out during the International Year of Crystallography.

Thursday, January 09, 2014

The cult of the instrument

I have a piece in Aeon about instruments in science. Here’s how it looked at the outset.

_____________________________________________________________

Whenever I visit scientists to discuss their research, there always comes a moment when they say, with pride they can barely conceal, “Do you want a tour of the lab?” It is invariably slightly touching – like Willy Wonka dying to show off his factory. I’m always glad to accept, knowing what lies in store: shelves bright with bottles of coloured liquid and powders, webs of gleaming glass tubing, slabs of perforated steel holding lasers and lenses, cryogenic chambers like ornate bathyspheres whose quartz windows protect slivers of material about to be raked by electron beams.

It’s rarely less than impressive. Even if the kit is off-the-shelf, it will doubtless be wired into a makeshift salmagundi of wires, tubes, cladding, computer-controlled valves and rotors and components with more mysterious functions. Much of the gear, however, is likely to be home-made, custom-built for the research at hand. The typical lab set-up is, among other things, a masterpiece of impromptu engineering – you’d need degrees in electronics and mechanics just to put it all together, never mind how you make sense of the graphs and numbers it produces.

All this usually stays behind the scenes in science. Headlines announcing “Scientists have found…” rarely bother to tell you how those discoveries were made. And would you care? The instrumentation of science is so highly specialized that it must often be accepted as a kind of occult machinery for producing knowledge. We figure they must know how it all works.

It makes sense in a way that histories of science tend to focus on the ideas and not the methods – surely what matters most is what was discovered about the workings of the world? But most historians of science today recognize that the relationship of scientists to their instruments is an essential part of the story. It is not simply that the science is dependent on the devices; rather, the devices determine what is known. You explore the things that you have the means to explore, and you plan your questions accordingly. That’s why, when a new instrument comes along – the telescope and the microscope are the most thoroughly investigated examples, but this applies as much today as it did in the seventeenth century – entirely new fields of science can be opened up. Less obviously, such developments demand a fresh negotiation between the scientists and their machines, and it’s not fanciful to see there some of the same characteristics as are found in human relationships. Can you be trusted? What are you trying to tell me? You’ve changed my life! Look, isn’t she beautiful? I’m bored with you, you don’t tell me anything new any more. Sorry, I’m swapping you for a newer model.

That’s why it is possible to speak of interactions between scientists and their instruments that are healthy or dysfunctional. How do we tell one from the other?

The telescope and microscope were celebrated even by their first users as examples of the value of enhancing the powers of human perception. But the most effective, not to mention elegant, scientific instruments serve also as a kind of prosthesis for the mind: they emerge as an extension of the experimenter’s thinking. That is exemplified in the work of the New Zealand physicist Ernest Rutherford, perhaps the finest experimental scientist of the twentieth century. Rutherford famously preferred the sealing-wax-and-string approach to science: it was at a humble benchtop with cheap, improvised and homespun equipment that he discovered the structure of the atom and then split it. This meant that Rutherford would devise his apparatus to tell him precisely what he wanted to know, rather than being limited by someone else’s view of what one needed to know. His experiments thus emerged organically from his ideas: they could almost be seen as theories constructed out of glass and metal foil.


Ernest Rutherford’s working space in the Cavendish Laboratory, Cambridge, in the 1920s.

In one of the finest instances, at Manchester University in 1908 Rutherford and his coworkers figured out that the alpha particles of radioactive decay are the nuclei of helium atoms. If that’s so, then one needs to collect the particles and see if they behave like helium. Rutherford ordered from his glassblower Otto Baumbach a glass capillary tube with extraordinarily thin walls, so that alpha particles emitted from radium could pass right through. Once they had accumulated in an outer chamber, Rutherford connected it up to become a gas-discharge tube, revealing the helium from the fingerprint wavelength of its glow. It was an exceedingly rare example of a piece of apparatus that answers a well defined question – are alpha particles helium? – with a simple yes/no answer, almost literally by whether or not a light switches on.

A more recent example of an instrument embodying the thought behind it is the scanning tunnelling microscope, invented by the late Heinrich Rohrer and Gerd Binnig at IBM’s Zurich research lab in the 1980s. They knew that electrons within the surface of an electrically conducting sample should be able to cross a tiny gap to reach another electrode held just above the surface, thanks to a quantum-mechanical effect called tunnelling. Because tunnelling is acutely sensitive to the width of the gap, a needle-like metal tip moving across the sample, just out of contact, could trace out the sample’s topography. If the movement was fine enough, the map might even show individual atoms and molecules. And so it did.
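
To give a sense of just how sensitive that is – this is the standard textbook estimate for vacuum tunnelling, not a figure from Rohrer and Binnig themselves – the tunnelling current falls off exponentially with the gap:

```latex
% Vacuum-tunnelling estimate: current I versus tip-sample gap d,
% for an effective barrier height \phi of order the metal work function.
I \;\propto\; e^{-2\kappa d},
\qquad
\kappa = \frac{\sqrt{2 m_{e} \phi}}{\hbar} \approx 1\ \text{\AA}^{-1}
\quad \text{for } \phi \sim 4\ \text{eV}.
```

So the current changes by roughly an order of magnitude for every extra ångström of gap, which is what makes atom-by-atom resolution possible.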


A ring of iron atoms on the surface of copper, as shown by the scanning tunnelling microscope. The ripples on the surface are electron waves. Image: IBM Almaden Research Center.

Between the basic idea and a working device, however, lay an incredible amount of practical expertise – of sheer craft – allied to rigorous thought. Against all expectation (they were often told the instrument “should not work” on principle), Rohrer and Binnig got it going, invented perhaps the central tool of nanotechnology, and won a Nobel prize in 1986 for their efforts.

So that’s when it goes right. What about when it doesn’t?

Scientific instruments have always been devices of power: those who possess the best can find out more than the others. Galileo recognized this: he conducted a cordial correspondence with Johannes Kepler in Prague, but when Kepler requested the loan of one of Galileo’s telescopes the Italian found excuses, knowing that with one of these instruments Kepler would be an even more serious rival. Instruments, Galileo already knew, confer authority.

But now instruments – newer, bigger, better – have become symbols of prestige as never before. I have several times been invited to admire the most state-of-the-art device in a laboratory purely for its own sake, as though I am being shown a Lamborghini. Historian of medical technology Stuart Blume of the University of Amsterdam has argued that, as science has started to operate according to the rules of a quasi-market, the latest equipment serves as a token of institutional might that enhances one’s competitive position in the marketplace. When I spoke to several chemists recently about their use of second-hand equipment, often acquired from the scientific equivalent of eBay, they all asked to remain anonymous, as though this would mark them out as second-rate scientists.

One of the dysfunctional consequences of this sort of relationship with an instrument is that the machine becomes its own justification, its own measure of worth – a kind of totem rather than a means to an end. A result is then “important” not because of what it tells us but because of how it was obtained. The Hubble Space Telescope is (despite its initial myopia) one of the most glorious instruments ever made, a genuinely new window on the universe. But when it first began to send back images of the cosmos in the mid 1990s, Nature would regularly receive submissions reporting the first “Hubble image” of this or that astrophysical object. The authors would be bemused and affronted when told that what the journal wanted was not the latest pretty picture, but some insight into the process it was observing – a matter that required rather more thought and research.

This kind of instrument-worship is, however, at least relatively harmless in the long run. More problematic is the notion of instrument as “knowledge machine”, an instrument that will churn out new understanding as long as you keep cranking the handle. The European particle-physics centre CERN has flirted with this image for the Large Hadron Collider, which the former director-general Robert Aymar called a “discovery machine.” This idea harks back (usually without knowing it) to a tradition begun by Francis Bacon in his Novum Organum (1620). Here Bacon drew on Aristotle’s notion of an organon, a mechanism for logical deduction. Bacon’s “new organon” was a new method of analysing facts, a systematic procedure (what we would now call an algorithm) for distilling observations of the world into underlying causes and mechanisms. It was a gigantic logic machine, accepting facts at one end and ejecting theorems at the other.

In the event, Bacon’s “organon” was a system so complex and intricate that he never even finished describing it, let alone ever put it into practice. Even if he had, it would have been to no avail, because it is now generally agreed among philosophers and historians of science that this is not how knowledge comes about. The preference of the early experimental scientists, like those who formed the Royal Society, to pile up facts in a Baconian manner while postponing indefinitely the framing of hypotheses to explain them, will get you nowhere. (It’s precisely because they couldn’t in fact restrain their impulse to interpret that men like Isaac Newton and Robert Boyle made any progress.) Unless you begin with some hypothesis, you don’t know which facts you are looking for, and you’re liable to end up with a welter of data, mostly irrelevant and certainly incomprehensible.

This seems obvious, and most scientists would agree. But that doesn’t mean the Baconian “discovery machine” has vanished. As it happens, the LHC doesn’t have this defect after all: the reams of data it has collected are being funnelled towards a very few extremely well defined (even over-refined) hypotheses, in particular the existence of the Higgs particle. But the Baconian impulse is alive and well elsewhere, driven by the allure of “knowledge machines”. The ability to sequence genomes quickly and cheaply will undoubtedly prove valuable for medicine and fundamental genetics, but these experimental techniques have already far outstripped not only our understanding of how genomes operate but our ability to formulate questions about that. As a result, some gene-sequencing projects seem conspicuously to lack a suite of ideas to test. The hope seems to be that, if you have enough data, understanding will somehow fall out of the bottom of the pile. Hence biologist Robert Weinberg of the Massachusetts Institute of Technology has said, “the dominant position of hypothesis-driven research is under threat.”

And not just in genomics. The United States and Europe have recently announced two immense projects, costing hundreds of millions of dollars, to use the latest imaging technologies to map out the human brain, tracing out every last one of the billions of neural connections. Some neuroscientists are drooling at the thought of all that data. “Think about it,” said one. “The human brain produces in 30 seconds as much data as the Hubble Space Telescope has produced in its lifetime.”

If, however, one wanted to know how cities function, creating a map of every last brick and kerb would be an odd way to go about it. Quite how these brain projects will turn all their data into understanding remains a mystery. One researcher in the European project, simply called the Human Brain Project, inadvertently revealed the paucity of any theoretical framework for navigating this information glut: “It is a chicken and egg situation. Once we know how the brain works, we'll know how to look at the data.” The fact that the Human Brain Project is not quite that clueless hardly mitigates the enormity of this flippant statement. Science has never worked by shooting first and asking questions later, and it never will.

Biology, in which the profusion of evolutionary contingencies makes it particularly hard to formulate broad hypotheses, has long felt the danger of a Baconian retreat to pure data-gathering, substituting instruments for thinking. Austrian biochemist Erwin Chargaff, whose work helped elucidate how DNA stores genetic information, commented on this tendency as early as 1977:
“Now I go through a laboratory… and there they all sit before the same high speed centrifuges or scintillation counters, producing the same superposable graphs. There has been very little room left for the all important play of scientific imagination.”

Thanks to this, Chargaff said, “a pall of monotony has descended on what used to be the liveliest and most attractive of all scientific professions.” Like Chargaff, the pioneer of molecular biology Walter Gilbert saw in this reduction of biology to a set of standardized instrumental procedures repeated ad nauseam an encroachment of corporate strategies into the business of science. It was becoming an industrial process, manufacturing data on the production line: data produced, like consumer goods, because we have the instrumental means to do so, not because anyone knows what to do with it all. Nobel laureate biochemist Otto Loewi saw this happening in the life sciences even in 1954:
“Sometimes one has the impression that in contrast with former times, when one searched for methods in order to solve a problem, frequently nowadays workers look for problems with which they can exploit some special technique.”

High-energy physics now works on a similar industrial scale, with big machines at the centre. It doesn’t suffer the same lack of hypotheses as areas of biology, but arguably it can face the opposite problem: a consensus around a single idea, into which legions of workers burrow single-mindedly. Donald Glaser, the inventor of the bubble chamber, saw this happening in the immediate postwar period, once the Manhattan Project had provided the template:
“I knew that large accelerators were going to be built and they were going to make gobs of strange particles. But I didn’t want to join an army of people working at big machines.”
For Glaser the machines were taking over, and only by staying out of that army did he devise his Nobel-prizewinning technique.

The challenge for the scientist, then, particularly in the era of Big Science, is to keep the instrument in its place. The best scientific kit comes from thinking about how to solve a problem. But once instruments become part of the standard repertoire, or once they acquire a lumbering momentum of their own, they might not assist thinking but start to constrain it. As historians of science Albert van Helden and Thomas Hankins have said, “Because instruments determine what can be done, they also determine to some extent what can be thought.”