I have a feature in Nature on developments in crowdsourcing science, looking in particular at the maths project Polymath on its fifth anniversary. Here’s the long version pre-editing. I also wrote an editorial to accompany the piece.
____________________________________________________________________________
Researchers are finding that online, crowd-sourced collaboration can speed up their work — if they choose the right problem.
When, last April, the hitherto little-known mathematician Yitang Zhang of the University of New Hampshire announced a proof that there are infinitely many pairs of prime numbers differing by no more than 70 million, it was hailed as a significant advance on a famous outstanding problem in number theory. In its simplest form, the twin primes conjecture states that there are infinitely many pairs of prime numbers differing by 2, such as (41, 43). Zhang’s gap of 70 million was vastly bigger than 2, but until then there had been no proof that any finite gap at all recurs infinitely often.
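To make the statement concrete, here is a minimal, purely illustrative Python sketch — unrelated to Zhang’s methods — that lists consecutive-prime pairs lying within a chosen gap. A finite search like this can only illustrate the claim; it proves nothing about infinitude.

```python
def primes_up_to(n):
    """Simple sieve of Eratosthenes returning all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i, flag in enumerate(sieve) if flag]

def close_prime_pairs(limit, gap):
    """Consecutive primes below `limit` that differ by no more than `gap`."""
    p = primes_up_to(limit)
    return [(a, b) for a, b in zip(p, p[1:]) if b - a <= gap]

print(close_prime_pairs(100, 2))   # twin primes below 100, including (41, 43)
# Zhang's theorem says pairs within a gap of 70 million never run out;
# Polymath 8 set out to shrink that 70 million.
```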
But perhaps as dramatic as the reclusive Zhang’s unanticipated proof, published in May, was what happened next. “One could easily envisage that there would be a flood of mini-papers in which Zhang's bound of 70 million was whittled down by small amounts by different authors racing to compete with each other”, says Terence Tao, a mathematician at the University of California at Los Angeles. But instead of such an atomized race, this challenge to reduce the bound became the eighth goal for a ‘crowdsourcing’ maths project called Polymath, which Tao helped to set up and run. Mathematicians all around the world pitched in together, and the bound dropped from the millions to the thousands in a matter of months. By the end of November it stood at 576.
There is nothing new about the notion of crowdsourcing to crack difficult problems in science. Six years ago, the Galaxy Zoo project recruited volunteers to classify the hundreds of thousands of galaxies imaged by the Sloan Digital Sky Survey into distinct morphological types: information that would help researchers understand how galaxies form and evolve. Galaxy Zoo has now gone through several incarnations and incorporates data on the earliest epochs of the visible universe from the Hubble Space Telescope. It provided a template for other projects needing human judgement to sort data, and has itself evolved into Zooniverse, which hosts several online data-classifying projects in space science and other areas. Participants can, for example, classify craters and other surface features on the Moon, analyse 30-year records of tropical cyclones, identify animals photographed by automated cameras on the Serengeti, sift cancer data, and even contribute to humanities projects such as tagging the diaries of soldiers from the First World War. Almost a million people have registered with Zooniverse to lend their help.
Expert opinion
But Polymath, which had its fifth anniversary in January this year, is rather different. Although anyone can join in to help solve its problems, you’re unlikely to make much of a contribution without highly specialized knowledge. This is no bean-counting exercise, but demands the most advanced mathematics. The project began when Cambridge mathematician Timothy Gowers asked on his own blog “Is massively collaborative mathematics possible?”
“The idea”, Gowers explained, “would be that anybody who had anything whatsoever to say about the problem could chip in. And the ethos of the forum would be that comments would mostly be kept short… you would contribute ideas even if they were undeveloped and/or likely to be wrong.” Gowers suspected there could be a benefit to having many different minds with different approaches and styles working on a problem. What’s more, sometimes a solution requires sheer luck – and the more contributions there are, the more likely you’ll get lucky.
His first challenge was a problem called the Hales-Jewett theorem, which says, roughly, that any sufficiently high-dimensional grid of symbol sequences must contain some ordered structure – a so-called combinatorial line – rather than being entirely random. Gowers’ blog sought a solution for one particular form of the theorem, known as the density version. Gowers had hoped for new insights into the problem, but even he was surprised that by March, after nearly 1,000 comments, he was able to declare the theorem proved. He called that period “one of the most exciting six weeks of my mathematical life”, and adds that “the quite unexpected result – an actual solution to the problem – added an extra layer of excitement to the whole thing”. The proof was described in a paper attributed to “D. H. J. Polymath”.
Tao was drawn into that challenge, and has since hosted other projects on Polymath. Mathematics is perhaps a surprising discipline in which to find this sort of collaboration, as traditionally it has been viewed as a solitary enterprise, exemplified by the lonely and often secretive work of the likes of Zhang or Andrew Wiles, who proved Fermat’s Last Theorem in seclusion in the 1990s. But that image is misleading – or perhaps projects like Polymath are playing an active role in changing the culture. “One strength of a Polymath collaboration is in gathering literature and connections with other fields that a traditional small collaboration might not be aware of without a fortuitous conversation with the right colleague”, says Tao. “Simply having a common place to discuss and answer focused technical questions about a paper is very useful.” He says that such online “reading seminars” helped researchers get to grips quickly with Zhang’s original proof.
Refining that proof – Polymath 8 – produced another paper for D. H. J. Polymath. One of the big leaps came from James Maynard, a postdoctoral researcher at the University of Montreal in Canada, who last November showed how to reduce Zhang’s bound of 70 million to just 600. Maynard, however, had already been working on the problem before Zhang’s results were announced, and he says his work was essentially independent of Polymath.
All the same, he sees this as an appropriate problem for such an approach. “Zhang's work was very suitable for many participants to work on”, Maynard says. “The proof can be split into separate sections, with each section more-or-less independent of the others. This allowed different participants to focus on just the sections which appealed to them.”
The success of Polymath has been mixed, however. “Polymath 4 and 7 led to interesting results”, says Gil Kalai of the Hebrew University of Jerusalem, who has administered some of the projects. “Polymath 3 and 5 led to interesting approaches but not to definite results, and Polymath 2, 6 and 9 did not get much off the ground.” And Gowers admitted that for at least some of the challenges the “crowd” was rather small – just a handful of real experts. Partly this might be just a matter of time: after Polymath 1, he remarked that “the number of comments grew so rapidly that merely keeping up with the discussion involved a substantial commitment that not many people were in a position to make.” And perhaps some of the experts who might have contributed were simply not a part of the active blogosphere.
Polymath “hasn't turned out to be a game-changer”, says Tao, “but it’s a valid alternative way of doing mathematical research that seems to be effective in some cases. One nice thing though is that we can react rather quickly to ‘hot’ events in mathematics such as Zhang's work.” He says that the crowdsourcing approach works better for some problems than others. “It helps if the problem is broadly accessible and of interest to a large number of mathematicians, and can be broken up into parts that can be worked on independently, and if many of these parts lie within reach of known techniques.”
“Projects which seem to require a genuinely new idea have so far not been terribly successful”, he adds. “The project tends to assemble all the known techniques, figure out why each one doesn't work for the problem at hand, throw out a few speculative further ideas, and then get stuck. We're still learning what works and what doesn't.”
It’s with such pitfalls in mind that Kalai says “it will be nice to have a Polymath devoted to theory-building rather than to specific problem solving.” He adds that he would also like to see Polymath projects “that are on longer time scale than existing ones but perhaps less intensive, and that people can get in or spin off at will.”
Gowers recognized from the outset that collaboration won’t always eclipse competition. He admits that “it seems highly unlikely that one could persuade lots of people to share good ideas” about a high-kudos goal like the Riemann hypothesis, which relates to the distribution of prime numbers. This, after all, is one of the seven Millennium Prize Problems, for the solution of each of which the privately funded Clay Mathematics Institute in Providence, Rhode Island, has offered a prize of $1 million.
All the same, that didn’t deter Gowers from launching Polymath 9 last November, which set out to find proofs for three conjectures that would solve another of the remaining six Millennium Problems: the so-called NP versus P problem. This asks whether the class of problems whose solutions can be quickly verified by a computer (denoted NP) coincides with the class of problems that can be solved equally quickly (denoted P). Gowers did not expect all three of his conjectures to be proved by Polymath 9, but admitted he would be pleased if just one of them could be. However, the results were initially disappointing, and Gowers was about to declare Polymath 9 a failure when he was contacted by Pavel Pudlak of the Mathematical Institute of the Czech Academy of Sciences with a proof that one of the three statements he was hoping to be proved false was in fact true, apparently cutting off this avenue for attacking the problem. Gowers is philosophical. “It’s never a disaster to learn that a statement you wanted to go one way in fact goes the other way”, he wrote. “It may be disappointing, but it’s much better to know the truth than to waste time chasing a fantasy.” In that regard, then, Polymath 9 did something useful after all.
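The gap between checking an answer and finding one is easy to demonstrate with a toy example — unrelated to Gowers’s actual conjectures, and offered only as an illustration of why NP-style verification is cheap while solving may not be. In this Python sketch, verifying a proposed answer to a subset-sum puzzle takes a single pass over the numbers, whereas the naive search may have to consider every one of the 2**n subsets.

```python
from itertools import combinations

def verify_subset_sum(numbers, target, candidate_indices):
    """Verification is fast: just add up the proposed subset and compare."""
    return sum(numbers[i] for i in candidate_indices) == target

def find_subset_sum(numbers, target):
    """Brute-force search, by contrast, may examine all 2**n subsets."""
    for r in range(len(numbers) + 1):
        for combo in combinations(range(len(numbers)), r):
            if verify_subset_sum(numbers, target, combo):
                return combo
    return None

nums = [3, 34, 4, 12, 5, 2]
print(verify_subset_sum(nums, 9, (2, 4)))   # quick check: 4 + 5 == 9 -> True
print(find_subset_sum(nums, 9))             # exhaustive search -> (2, 4)
```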
Polymath now functions as a kind of elite open-source facility. People can post suggestions for new projects on a dedicated website maintained by Gowers, Tao, Kalai and open-science advocate Michael Nielsen, and these are then discussed by peers and, if positively received, launched for contributions. “The organization is still somewhat informal”, Tao says. Setting up and sustaining a Polymath project is a big commitment. “It needs an active leader who is willing to spend a fair amount of effort to organise the discussion and keep it moving in productive directions”, says Tao. “Otherwise the initial burst of activity can dissipate fairly quickly. Not many people are willing or able to do this.” “It’s quite difficult to get people interested,” Gowers agrees; so far, he and Tao have initiated all but two of the projects.
Although surprised by Polymath’s success, Kalai says that the trend toward more collaborative efforts started earlier, as signaled by a rise in the average number of coauthors on maths papers. “Polymath projects do not have enough weight to make a substantial change. But they add to the wealth of mathematical activities, and, for better or for worse, their impact on the community is larger than their net scientific impact.” It’s not clear that this is a good way to do maths, he concludes – “but we can certainly explore it.”
Cash or glory
Some other “expert” crowdsourcing efforts are run as commercial ventures by companies that aim to connect people who have a problem to solve with people who might have the skills and ideas needed to solve it. These generally charge fees and offer financial rewards for participants. Other initiatives are government-led, such as the NASA Tournament Lab, which seeks “the most innovative, most efficient, and most optimized solutions for specific, real-world challenges being faced by NASA researchers”, and the US-based Challenge.gov, which offers cash prizes for solutions to a whole range of engineering and technological problems.
One of the most prominent commercial enterprises is InnoCentive, which hosts a variety of scientific or technological challenges that are open to all of its registered “solvers”. These range from the seemingly banal, if important (developing economical forms of latrine lighting in emergencies, or “keeping hair clean for longer without washing”), to the esoteric (“seeking 4-hydroxy-1H-pyridin-2-one analogues”, or ways of stabilizing foamed emulsions). InnoCentive’s founder Alph Bingham says that their approach “has produced solutions to problems that had been previously investigated for years and even decades.” Good challenges, he says, “are ones where the space of possible solutions is immense and therefore hard to search on a serial basis”.
In contrast to that broad portfolio, other crowdsourcing companies such as Kaggle and CrowdFlower specialize in data analysis. Kaggle has been used, for example, in bioinformatics to predict the biological behaviour of molecules from their chemical structure, and in energy forecasting. It has recently been used by a team of astronomers seeking algorithms for mapping the distribution of dark matter in galaxies based on its gravitational-lensing effects on background objects. Through Kaggle, the researchers set up a competition called “Observing Dark Worlds”, which offered cash prizes (donated by the financial company Winton Capital) for the three best algorithms. The winning entries improved the performance, relative to standard algorithms, by about 30 percent.
While this was valuable, astronomer David Harvey of the University of Edinburgh, an author of that study, admits that it’s not always straightforward to apply potential solutions to the problem you’ve set. “Many of the ideas that came out of the competition were great, and provided really interesting insights into the problem”, he says. “But none of the algorithms are ready to be used on real data – they need to be fully tested and developed. And it’s very hard to take some algorithm from someone not in your field and develop it.”
Harvey says that indeed the winning algorithm for “Observing Dark Worlds” still hasn’t been fully developed. “However, the advantages of these competitions are not always obvious”, he adds. For example, the second-place entry was written by informatics specialist Iain Murray of the University of Edinburgh, who is continuing to collaborate with Harvey, and now with other astronomers too. “This wouldn’t have happened if it wasn't for Kaggle”, Harvey says. That experience shows how “it’s vital that the winners of the competition work in collaboration post-competition on the problem and develop the initial idea all the way through to a final package.” But Harvey admits that “often these are just side projects for participants, and while they may have a sincere interest in the problem, they do not have the time to commit.”
Harvey points out that the call for such projects might nevertheless be increasing, especially in astronomy. “With new telescopes such as the Square Kilometre Array, the Large Synoptic Survey Telescope and Euclid on the horizon, astronomers will be facing real problems of data processing, handling and analysing”, he says. However, Thomas Kitching of University College London, who was the lead scientist on the Dark Worlds project, admits to having mixed feelings about what ultimately such efforts might achieve. In part this is because real expertise might be hard to harness this way. “Most people are not experts, but might have a bit of time”, he says. “There may be some experts, but they have very little time.”
While Polymath relies on unpaid efforts of researchers whose sole reward is professional prestige, Innocentive and Kaggle recognize that harnessing a broader community requires more tangible incentives, typically in the form of cash prizes. “In academia, people are willing to spend a lot of time for ‘kudos’ or for the sake of science – but only up to a point”, says Kitching. “Once the problem requires a lot of time, like coding in Kaggle, then monetary incentives or prizes seem to be required. No one is going to spend seven days a week trying to win unless it’s already their job, so money offsets time.”
InnoCentive’s 300,000 solvers stand to gain rewards of between $5,000 and $1m. Kaggle now hosts some of the efforts of Galaxy Zoo for a prize of $16,000 (also provided by Winton Capital). This sort of funding is not necessarily just philanthropic for the donors – Winton Capital, for example, were themselves able to recruit new analysts via the Observing Dark Worlds initiative for a fraction of their usual advertising and interviewing costs.
But it’s not all about lucre. “Winning solvers rarely list the cash among their top motivations”, says Bingham. “Their motivations are frequently more intrinsic, such as intellectual stimulation or curiosity to explore where an idea might lead.” InnoCentive aims to encourage non-cash incentives, such as prospects for further collaboration or joint press releases. Yet Bingham adds that “dollar amounts also serve as a kind of score-keeping.” Some of Kaggle’s projects have no cash prizes, and Harvey says that “a lot of the time computer scientists will go there because they want to work on something new and exciting, and not for financial gain.” Indeed, the company invites participants to “compete as a data scientist for fortune, fame and fun”.
“A competition can help to advertise a problem to people who have not thought about it before, a prize can attract them to spend time, and a metric can help to sort signal from noise”, says Kitching. “So in this sense competition, if well posed, can help in science. But a poorly posed problem may just increase noise.”
But as Kalai points out, there can be as much value in identifying important questions, and tools to tackle them, as in finding solutions. Kitching recalls a computer called Multivac that appeared in several of Isaac Asimov’s short stories, which was very good at answering questions but still required human scientists to pose them in the first place. Kitching suspects that the crowdsourcing pool will act more like Multivac than like its interrogators. “In the crowdsourcing approach the key to successful science is working out the correct questions to ask the crowd”, he says.
Thursday, February 27, 2014
Floods: more please?
Are the UK floods a sign of climate change? According to a recent poll, 46 percent of people think so, 27 percent think not. The invitation is to regard this as a proxy poll for a general belief in the reality of climate change, and perhaps in humankind’s key causative role in it.
But in fact, any information embedded in this poll is complicated and difficult to disentangle. If any climate or weather scientists were quizzed, it seems likely that they would have gravitated, like me, towards the “undecided” category. As they have been repeating insistently and now a little wearily, no single extreme-weather event (and this one certainly qualifies as that) can yet be unequivocally attributed to climate change. This of course is manna for the climate sceptics, who use it to argue that we still don’t know if climate change is really happening, and that this uncertainty reflects a serious limitation, perhaps a fundamental flaw, of the whole basis of climate modelling. It matters little that climatologists say such extreme weather is fully consistent with what the models predict – the misguided but widespread notion that science provides “yes/no” answers to questions, decided by the data, is here proving a burden.
That situation is changing, however. As Simon Lewis points out in Nature this week, it is now becoming possible to make some definite links between specific extreme weather events and anthropogenic climate change. Such analyses are complicated and the conclusions tentative, but they already give grounds for saying a little more than merely “it’s too early to tell”.
What the flood poll really probes, however, is public perceptions about what an altered climate would mean. The effect of the floods is likely to be not so much convincing undecided voters that climate change is already upon us, but showing them what is really at stake in this temperate zone: not balmy Mediterranean-style summers, not distant news of drowned Pacific island states, but Verdun-style mud and sandbags, and images of this green and pleasant land under glittering, muddy water from horizon to horizon. We have finally got a feeling for what it might be like to live in a world a degree or two warmer, and it seems uncomfortably close to home, and not at all pleasant. Shivering east coast Americans are having a somewhat different kind of awakening.
As wake-up calls go, it is pretty mild. But it is also likely to shift perceptions, not just of what to expect but of what the social and economic consequences will be. The more intelligent, or perhaps just cannier, sceptics have ceased questioning the science or the evidence but instead contest the economics: it will cost more, they say, to mitigate climate change, for example via taxes on fossil fuels or expensive green technologies, than to accept and adapt to it. This, for example, is the line taken by the science writer Matt Ridley, who laid out his case last October in an article in the Spectator.
The Viscount Ridley, an immensely wealthy Eton-educated Conservative hereditary peer whose Darwinian attitude to economics was notoriously suspended when it came to the bailout of Northern Rock under his chairmanship, is an easy villain. But the Ridley I know (slightly – we sit on the same academic advisory committee), who happens to be an exceptionally good science writer and a clever thinker, is harder to caricature. His argument – in which a warmer world results in fewer net deaths, for example through reduced winter hypothermia – can’t be casually waved away. The dismantling needs more care.
The economic case is hugely complicated, and plagued by many more uncertainties than the science. It depends, for example, on making projections about nascent or even as yet undeveloped technologies. Even the research on which Ridley almost exclusively draws – by economist Richard Tol – mostly just points out these lacunae, and Tol advises nonetheless that “there is a strong case for near-term action on climate change”. (Ridley jettisons that bit.)
But of course economic figures paper over a multitude of woes. Imagine, for example, that an ice-free summer in the Arctic (which begins to look likely sooner than we expected) leads to the extinction of the economically insignificant polar bear (I’ll come back to that) but creates a fertile new breeding ground for fish stocks, with large economic benefits for fisheries. How would you feel about that? Or if the inundation of a few island states with trivial GDP, leaving the populations homeless, were massively offset by improved wheat yields in a warmer US Midwest? I’m not saying these things will happen, just that GDP is only a part of the story.
More importantly, perhaps, one can’t really put an economic figure on consequences of climate change such as the mass human migration that is predicted from north to south, which could very readily lead to social unrest and even war. Or the drastic changes in ecosystems likely to result, for example if ocean acidification from dissolved carbon dioxide wipes out corals. It isn’t hard to dream up such disasters, and Ridley is right that we need to think carefully, not just reactively, about what the real consequences would be – but economics, let alone highly uncertain economics, doesn’t give a full answer. All we can really agree on is that it seems unlikely there will be any net benefits beyond 2070 or so, by which time things are getting really bad – especially if you have ploughed on merrily with business as usual, which Ridley seems to recommend. (He offers no alternative plan.) I won’t be around to see that; with good luck, my children will be. I don’t think theirs will be a problem that can be solved with wellies and sandbags.
All the same, I wish I could trust the arguments Ridley brings to the table. But, surprised by his passing suggestion that polar bears are fine and might even benefit from a bit of polar warming, I decided to check. The US Geological Survey in Alaska says “Our analysis of those data has shown that longer ice-free seasons have resulted in reduced survival of young and old polar bears and a population decline over the past 20 years. Recent observations of cannibalism and unexpected mortalities of prime age polar bears in Alaska are consistent with a population undergoing change.” The National Wildlife Federation says “The chief threat to the polar bear is the loss of its sea ice habitat due to global warming.” It’s impossible to generalize, however: studies suggest that many polar-bear populations will be wiped out within a few decades without human intervention, but some seem to be doing OK and may survive indefinitely (although climate change may introduce other threats, such as disease). If I were a polar bear, I’d feel decidedly less than sanguine about these forecasts. I’d also suspect that Ridley is less the “rational optimist” he styles himself, and more the wishful thinker.
Wednesday, February 19, 2014
The benefits of bendy wings
Here’s my latest news story for Nature.
________________________________________________________________
From insects to whales, flying and swimming animals use the same trick.
A new design principle that enables animals to fly has been discovered by a team of US researchers. They say that the same principle is used for propulsion by aquatic creatures, and suggest that it could supply guidance for designing artificial devices that propel themselves through air and water. The work is published today in Nature Communications [1].
The findings are welcomed by animal-flight expert Graham Taylor of Oxford University, who says that they “should certainly prove a fruitful area for future research”.
The earliest dreams of human flight, from Icarus to Leonardo da Vinci, drew on the notion of flapping wings, like those of birds or bats. But practical designs from the Wright brothers onward have largely abandoned this design in favour of the stationary aerofoil wing. Does that have to be the way, or might artificial flapping-wing devices be built?
In fact, a few already have been, but only very recently. In 2011, the German automation technology company Festo announced a small remote-controlled aircraft called the SmartBird that used flapping wings, based on the motion of a seagull. Aerospace engineers at the University of Illinois at Urbana-Champaign are developing a “robotic bat” [2]. Some flying devices have also been based on insect flapping-wing flight, which uses rapid wingbeats to produce upward thrust that allows hovering and high manoeuvrability [3], while others have mimicked the undulating movements of jellyfish [4-6].
Developing flying machines based on bird-like flapping-wing aerodynamics is hampered by the lack of information about how birds achieve stability and control. John Costello of Providence College in Rhode Island, USA, and his colleagues believed that these flight properties might depend crucially on the fact that, unlike the wings of most human-made craft, animal wings are not rigid but flexible.
Yet there have been conflicting views on how wing flexibility affects the thrust produced by wing flapping, even to the extent of whether it helps or hinders. Costello and colleagues decided to take an empirical approach – to look at just how much real animal wings deform during flight.
They suspected that the same effect of bending should be evident in the operation of fins and flukes used for propulsion in water. In fact, they were initially motivated by their participation in a project for the US Office of Naval Research to develop a biologically inspired “jellyfish vehicle” [5,6].
That work, says Costello, showed that “the addition of a simple passive flap to an otherwise fairly rigid bending surface resulted in orders of magnitude increases in propulsive performance”. But what exactly were the rules behind these bending effects? “We reasoned that animals solved this problem several hundred million years ago”, says Costello, “so we decided to start by looking at natural forms.”
To gather data on the amount of deformation of wings and fins during animal movement, the researchers combed YouTube and Vimeo for video footage of species ranging from fruitflies to humpback whales and from molluscs to bats. They had to be extremely selective in what they used. They needed footage of steady motion (no slowing or speeding up), they needed to compare many flapping cycles for the same species, and they needed to find motion in the plane perpendicular to the line of sight, to obtain accurate information on the amount of bending.
This data was painstakingly collected by team members Kelsey Lucas of Roger Williams University in Bristol, Rhode Island, and Nathan Johnson at Providence College. “I’m not sure how many hundreds or thousands of video sequences they viewed and discarded”, Costello admits. “It took many months of searching.”
They found that, for all the vast diversity of propulsor shapes and structures – gossamer-thin membranes, feathered wings, thick and heavy whale tails – there was rather little variation in the bending behaviour when measured (by eye) using the right variables. Specifically, when the data were plotted on a graph of the “flexion ratio” – the ratio of the length from “wing” base to the point where bending starts, to the total “wing” length – against maximum bending angle, all the points clustered within a small region.
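A minimal sketch of what that measurement amounts to, assuming digitized (x, y) coordinates for the wing base, the point where bending begins, and the tip in its straight and maximally bent positions — the coordinates below are invented for illustration, not taken from the study.

```python
import math

def flexion_metrics(base, flexion_point, tip_straight, tip_bent):
    """Compute the flexion ratio and the bending angle from digitized landmarks.
    Flexion ratio = (base-to-flexion-point length) / (total propulsor length);
    bending angle = angle between the straight and maximally bent outer section."""
    def dist(a, b):
        return math.hypot(b[0] - a[0], b[1] - a[1])

    def heading(a, b):  # direction of the vector a -> b, in degrees
        return math.degrees(math.atan2(b[1] - a[1], b[0] - a[0]))

    inner = dist(base, flexion_point)
    outer = dist(flexion_point, tip_bent)
    flexion_ratio = inner / (inner + outer)
    bending_angle = abs(heading(flexion_point, tip_bent) - heading(flexion_point, tip_straight))
    return flexion_ratio, bending_angle

# Invented coordinates (arbitrary units) for one video frame:
ratio, angle = flexion_metrics(base=(0, 0), flexion_point=(6, 0),
                               tip_straight=(10, 0), tip_bent=(9.4, 2.0))
print(round(ratio, 2), round(angle, 1))   # e.g. 0.6 and about 30 degrees
```

Plotting many such (ratio, angle) pairs for different species is what revealed the tight cluster the team describes.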
In other words, this seems to be an example of “convergent evolution” – animals with very different evolutionary backgrounds have all “found” the same solution to a common problem, in this case the most effective bending criterion for propulsion through fluids. “Whether an animal is a fish or mollusc swimming in water or an insect or bird flying through the air, they all evolved to move within fluid environments”, says Costello. “Their evolution has been governed by the physical laws that determine fluid interactions. It doesn’t matter whether they originated from crawling, walking or jumping ancestors – once they adapted to a fluid, they evolved within a system determined by a common set of limits.”
“Perhaps the simple fact that wings, fins, and flukes of all shapes and sizes deform in a similar manner is not so surprising”, says Taylor. “What is surprising is the coupled variation in materials, morphology, and movement that this similarity implies” – so that “the comparatively flimsy wing of an insect deforms to the same extent in flight as does the powerful fleshy tail fluke of a killer whale”.
Costello is cautiously optimistic about translating the findings into aeronautical engineering design principles. First, he says, more needs to be known about why this narrow range of bending motions is advantageous for propulsion. “We hope to uncover more about the hydrodynamic reasons that these patterns are so common”, he says. “Maybe then the advantages that these animals have found in these traits can be translated into human designs.”
References
1. Lucas, K. N. et al. Nature Commun. 5, 3293 (2014).
2. Kuang, P. D., Dorothy, M. & Chung, S.-J. AIAA Paper 2011-1435 (2011). doi:10.2514/6.2011-1435
3. Ma, K. Y., Chirarattananon, P., Fuller, S. B. & Wood, R. J. Science 340, 603–607 (2013).
4. Ristroph, L. & Childress, S. J. R. Soc. Interface http://dx.doi.org/10.1098/rsif.2013.0992 (2014).
5. Villanueva, A., Smith, C. & Priya, S. Bioinsp. Biomim. 6, 036004 (2011).
6. Colin, S. P. et al. PLoS ONE 7, e48909 (2012).
Friday, February 14, 2014
Making sense of music - in Italian
A popular-science magazine called Sapere has been published monthly in Italy since 1935. (There's a nice history of science popularization in Italy here.) Sapere is now produced by the Italian publisher Dedalo, who are aiming to revitalize it. They have asked me to contribute a regular column, which will be about the cognition of music. Each month I’ll focus on one or two particular pieces of music and explain how they do what they do. Here’s a slightly extended version of the introductory column, which takes as its subject “Over the Rainbow”.
___________________________________________________________
How do we make sense of music, and why does it move us? While much is still mysterious about it, some is not. Cognitive science and neuroscience are starting to reveal rules that our minds use to turn a series of notes and chords into a profound experience that speaks to us and reaches into the depths of our soul. In these columns I’ll aim to explain some of the rules, tricks and principles that turn sound into music.
One of the first things we notice about a song is the melody. Certain melodies capture our attention and interest more than others, and the songwriter’s goal is to find ones that stick. How do they do it?
Take “Over the Rainbow”, the ballad written by Harold Arlen and E. Y. Harburg for The Wizard of Oz (1939). We remember it partly for Judy Garland’s plangent voice – it became her signature tune – but it grabs us from the start with that soaring leap on “Some-where”.
Melodic leaps this big are very rare. Statistically speaking, most steps between successive melody notes are small – usually just between adjacent notes in the musical scale. For music all around the world, the bigger the step, the less often it is used. Partly this may be because it’s generally easier to sing or play notes that are closer together, but there is also a perceptual reason. Small steps in pitch help to “bind” the notes into a continuous phrase: we hear them as belonging to the same tune. The bigger the step, the more likely we’ll perceive it as a break in the tune. This is one of the rules deduced by the Gestalt psychologists around the start of the twentieth century, who were interested in how the mind groups stimuli together into an organized picture of the world that produces them. They were primarily interested in vision, but the ‘gestalt principles’ apply to auditory experience too.
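That statistical claim is easy to check for any tune you can write down as note numbers. Here is a small Python sketch using MIDI pitch numbers (one unit per semitone); the opening of “Over the Rainbow” is transcribed from memory in C major, so treat the exact numbers as illustrative.

```python
from collections import Counter

def interval_sizes(midi_pitches):
    """Sizes, in semitones, of the steps between successive melody notes."""
    return [abs(b - a) for a, b in zip(midi_pitches, midi_pitches[1:])]

# "Some-where o-ver the rain-bow", roughly, with the tonic on middle C (MIDI 60):
somewhere = [60, 72, 71, 67, 69, 71, 72]

print(interval_sizes(somewhere))           # [12, 1, 4, 2, 2, 1]: one octave leap, then small steps
print(Counter(interval_sizes(somewhere)))  # tally of step sizes across the phrase
```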
But small pitch steps can sound boringly predictable after a while, like nursery rhyme tunes. To create memorable tunes, sometimes songwriters have to take a chance on bigger leaps. The one in the first two notes of “Over the Rainbow” is particularly big: a full octave. The same leap occurs at the beginning of “Singing in the Rain”. Typically only one percent or so of pitch steps in melodies are this big. That means they stand out as memorable – but how come we still hear it as a tune at all, when the gestalt principles seem to say that big jumps cause a perceptual break-up?
Well, Arlen (who wrote the music) has added safeguards against that, probably quite unconsciously. First, the two notes on “Some-where” are long ones – you could say that “-where” is held for long enough for the brain to catch up after the leap. Second, the leap comes right at the start of the song, before there’s even really been time for a sense of the tune to develop at all. Third, and perhaps most important, the leap is not alone. There are similar big jumps in pitch (although not quite as big) at the start of the second and third phrases too (“Way up…”, “There’s a…”). In this way, the composer is signalling that the big jumps are an intentional motif of the song – he’s telling us not to worry, this is just a song with big pitch jumps in it. This is a general principle: if you hear a big pitch jump in a melody, it’s very likely that others will follow. In this way, tunes can create their own ‘rules’ which can over-ride the gestalt principles and produce something both coherent and memorable.
Wednesday, February 12, 2014
Closer to ignition
Here’s the original draft of my latest piece for Nature news.
___________________________________________________________________
Another milestone is passed on the long road to fusion energy
The usual joke about controlled nuclear fusion, which could provide much ‘cleaner’ nuclear power than fission, is that it has been fifty years away for the past fifty years. But it just got a bit closer. In a report published in Nature today [1], a team of researchers at the US National Ignition Facility (NIF), based at Lawrence Livermore National Laboratory in California, say that their fusion experiments have managed to extract more energy from the nuclear process than was absorbed by the fuel to trigger it.
That’s certainly not the much-sought “break-even” point at which a fusion reactor can generate more energy than it consumes, because there are many other processes that consume energy before it even reaches the nuclear fuel. But it represents “a critical step on the path to ignition”, according to Mark Herrmann of Sandia National Laboratory in Albuquerque, New Mexico, who heads the project on high-energy X-ray pulses there.
While nuclear fission extracts energy released in the break-up of very heavy nuclei such as uranium, nuclear fusion – the process that powers stars – produces energy through the coalescence of very light nuclei such as hydrogen. A tiny fraction of the combined mass of the separate hydrogen nuclei is converted into energy during the reaction.
Although the basic physics of fusion is well understood, conducting it in a controlled manner in a reactor – rather than releasing the energy explosively in a thermonuclear hydrogen bomb – has proved immensely difficult, largely because of the challenge of containing the incredibly hot plasma that fusion generates.
There is no agreed way of doing this, and fusion projects in different parts of the world are exploring a variety of solutions. In most of these projects the fuel consists of the heavy hydrogen isotopes deuterium and tritium, which react to produce the isotope helium-4.
A lot of energy must be pumped into the fuel to drive the nuclei close together and overcome their electrical repulsion. At the NIF this energy is provided by 192 high-power lasers, which send their beams into a bean-sized gold container called a hohlraum, in which the fuel sits inside a plastic capsule. The laser energy is converted into X-rays, some of which are absorbed by the fuel to trigger fusion. Most of the energy, however, is absorbed by the hohlraum itself. That’s why obtaining gain (more energy out than in) within the fuel itself is only a step along the way to “ignition”, the point at which the reactor as a whole produces energy.
The fuel is kept in a plastic shell called the ablator. This absorbs the energy in the hohlraum and explodes, creating the high pressure that makes the fuel implode to reach the high density needed to start fusion. But that pressure can burst through the ablator at weak points and destabilize the implosion, mixing the fuel with the ablator plastic and reducing the efficiency of the fusion process.
The NIF team’s success, achieved in experiments conducted between last September and this January, comes from ‘shaping’ the laser pulses to deliver more power early in the pulse. This creates a relatively high initial temperature in the hohlraum which “fluffs up” the plastic shell. “This fluffing up greatly slows down growth of the instability”, says team leader Omar Hurricane.
As a result, the researchers have been able to achieve a “fuel energy gain” – a ratio of energy released by the fuel to energy absorbed – of between 1.2 and 1.9. “This has never been done before in laboratory fusion research”, says Herrmann. “It’s a very promising advance.”
He adds that much of the energy released was produced by self-heating of the fuel by the energetic helium nuclei (alpha particles) created in the fusion reactions – an important requirement for sustaining the fusion process.
But fusion energy generation remains a distant goal, for which Hurricane admits he can’t yet estimate a timescale. “Our total gain – fusion energy out divided by laser energy in – is only about 1%”, he points out.
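The two gain figures quoted above measure different things, and the arithmetic is simple. The sketch below uses made-up energy values chosen only to reproduce the rough ratios in the text, not numbers from the NIF paper.

```python
# Illustrative values only -- invented to match the approximate ratios quoted above.
laser_energy_kj = 1900.0     # energy delivered by the 192 laser beams
fuel_absorbed_kj = 10.0      # the small share of that energy absorbed by the fuel itself
fusion_yield_kj = 15.0       # energy released by the fusion reactions

fuel_energy_gain = fusion_yield_kj / fuel_absorbed_kj   # ~1.5, within the quoted 1.2-1.9 range
total_gain = fusion_yield_kj / laser_energy_kj          # well under 1, i.e. around 1 percent

print(f"fuel energy gain: {fuel_energy_gain:.1f}")      # 1.5
print(f"total gain: {total_gain:.1%}")                  # 0.8%
```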
“This is more than a little progress, but still modest in terms of energy generation”, Hurricane says. “Our goal right now is to more than double the final pressures in our implosion, by making it go faster and improving its shape.”
Meanwhile, other projects, such as the International Thermonuclear Experimental Reactor (ITER) under construction in southern France, will explore different approaches to fusion. “When trying to solve hard problems it is wise to have multiple approaches, as every potential solution has pros and cons”, says Hurricane.
References
1. Hurricane, O. A. et al. Nature advance online publication, doi:10.1038/nature13008 (2014).
Tuesday, February 04, 2014
Colour coordinated
Here's a talk, nicely recorded, that I gave on colour and chemistry for the "Big Ideas" course at Bristol at the end of last year. There's a version of this floating on the web that I gave at Michigan something like ten years ago, which cruelly lays bare the ravages of time. For colour junkies, there is another little snippet here for the Atlantic magazine on Newton's spectrum.