Anyway, here is a piece that has been published in the February issue of *Prospect*. There will be more here on the Polymath project some time in the near future.

______________________________________________________________

No other academic discipline is mythologized in our culture quite like mathematics. While the natural sciences are seen to keep some roots planted in the soil of daily life, in inventions and cures and catastrophes, maths seems to float freely in an abstract realm of number, as much an art as a science. More than any white-coated boffin, its practitioners are viewed as unworldly, with minds unfathomably different from ours. We revel in stories of lone geniuses who crack the most refractory problems yet reject status, prizes and even academic tenure. Maths is not just a foreign country but an alien planet.

Some of the stereotypes are true. When the wild-haired Russian Grigori Perelman solved the notorious Poincaré conjecture in 2003, he declined first the prestigious Fields Medal and then (more extraordinarily to some) the $1m Millennium Prize officially awarded to him in 2010. The prize was one of seven offered by the non-profit, US-based Clay Mathematics Institute for solutions to the most significant outstanding problems in maths.

Those prizes speak to another facet of the maths myth. It is seen as a range of peaks to be scaled: a collection of ‘unsolved problems’, solutions of which are guaranteed to bring researchers (if they want it) fame, glory and perhaps riches. In this way maths takes on a gladiatorial aspect, encouraging individuals to lock themselves away for years to focus on the one great feat that will make their reputation. Again, this is not all myth; most famously, Andrew Wiles worked in total secrecy in the 1990s to conquer Fermat’s Last Theorem. Even if maths is in practice more comradely than adversarial – people have been known to cease working on a problem, or to avoid it in the first place, because they know someone else is already doing so – nonetheless its practitioners can look like hermits bent on Herculean labours.

It is almost an essential part of this story that those labours are incomprehensible to outsiders. And that too is often the reality. I have reported for several years now on the Abel prize, widely seen as the ‘maths Nobel’ (not least because it is awarded by the Norwegian Academy of Science and Letters). Invariably, describing what the recipients are being rewarded for becomes an impressionistic exercise, a matter of sketching out a nested Russian doll of recondite concepts in a tone that implies “Don’t ask”.

Yet this public image of maths is only part of the story. For one thing, some of the hardest problems are actually the most simply stated. Fermat’s Last Theorem, named after the seventeenth-century mathematician Pierre Fermat, who claimed to have a proof that he couldn’t fit in the page margin, is a classic example. It states that there are no positive whole-number solutions for a, b and c in the equation a^n + b^n = c^n if n is a whole number larger than 2. Because it takes only high-school maths to understand the problem, countless amateurs were convinced that high-school maths would suffice to solve it. When I was an editor at Nature, ‘solutions’ would arrive regularly, usually handwritten in spidery script by authors who would never accept they had made trivial errors. (Apparently Wiles’ proof, which occupied 150 pages and used highly advanced maths, has not deterred these folks, who now seek acclaim for a ‘simpler’ solution.)
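The equation is concrete enough that anyone can hunt for counterexamples by brute force. Here is a minimal sketch in Python (the function name is my own, and of course no finite search proves anything):

```python
def fermat_counterexamples(limit, n):
    """Return all (a, b, c) with 1 <= a <= b < c <= limit and a^n + b^n == c^n."""
    powers = {c ** n: c for c in range(1, limit + 1)}  # nth powers, for fast lookup
    hits = []
    for a in range(1, limit + 1):
        for b in range(a, limit + 1):
            if a ** n + b ** n in powers:
                hits.append((a, b, powers[a ** n + b ** n]))
    return hits

# For n = 2 there are plenty of solutions (Pythagorean triples like 3, 4, 5)...
print(len(fermat_counterexamples(100, 2)))
# ...but for any n > 2, Fermat's Last Theorem says the search always comes up empty.
print(fermat_counterexamples(100, 3))  # []
```

The ease of running such searches, and their unbroken failure to find anything, is part of what made the problem so seductive to amateurs.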

The transparency of Fermat’s Last Theorem is shared by some of the other Millennium Prize problems and further classic challenges in maths. Take Goldbach’s conjecture, which makes a claim about the most elusive of all mathematical entities – the prime numbers. These are integers greater than 1 that have no factors other than themselves and 1: for example, 2, 3, 5, 7, 11 and 13. The eighteenth-century German mathematician Christian Goldbach is credited with proposing that every even integer greater than 2 can be expressed as the sum of two primes: for example, 4=2+2, 6=3+3, and 20=7+13. One can of course simply work through all the even numbers in turn to see if they can be chopped up this way, and so far the conjecture has been found empirically to hold true up to about 4×10^18. But such number-crunching is no proof, and without it one can’t be sure that an exception won’t turn up around, say, 10^21. Those happy to accept that, given the absence of exceptions so far, they’re unlikely to appear later, are probably not destined to be mathematicians.
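That empirical number-crunching is easy to reproduce on a small scale. A sketch in Python (function names are mine) that checks the conjecture for every even number up to a modest limit:

```python
def primes_up_to(n):
    """Sieve of Eratosthenes: sieve[k] is True exactly when k is prime."""
    sieve = [True] * (n + 1)
    sieve[:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return sieve

def goldbach_pair(even_n, sieve):
    """Return one way to write even_n > 2 as a sum of two primes, or None."""
    for p in range(2, even_n // 2 + 1):
        if sieve[p] and sieve[even_n - p]:
            return (p, even_n - p)
    return None

LIMIT = 10_000
sieve = primes_up_to(LIMIT)
# No exceptions below 10,000 -- consistent with the conjecture, but no proof.
assert all(goldbach_pair(n, sieve) for n in range(4, LIMIT + 1, 2))
print(goldbach_pair(20, sieve))  # (3, 17): the search returns the pair with the smallest prime
```

The serious verifications up to 4×10^18 use far cleverer machinery, but the idea is the same: check, and keep failing to find a counterexample.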

Goldbach’s conjecture would be an attractive target for young mathematicians seeking to make their name, but it won’t make them money – it’s not a Millennium Problem. One of the most alluring of that select group is a problem that doesn’t really involve numbers at all, but concerns computation. It is called the P versus NP problem, and is perhaps best encapsulated in the idea that the answer to a problem is obvious once you know it. In other words, it is often easier to verify an answer than to find it in the first place. The P versus NP question is whether, for all problems that can be verified quickly (there’s a technical definition of ‘quickly’), there exists a way of actually finding the right answer comparably fast. Most mathematicians and computer scientists think that this isn’t so – in formal terms, that NP is not equal to P, meaning that some problems are truly harder to solve than to verify. But there’s no proof of that.
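The verify-versus-solve gap can be illustrated with the subset-sum problem, a standard member of the NP family: given some numbers, is there a subset adding up to a target? Checking a proposed answer takes one quick pass, while the naive search tries every combination. A sketch (not how the formal definitions are stated, and assuming the numbers are distinct):

```python
from itertools import combinations

def verify(numbers, target, candidate):
    """Quick: check that the candidate uses only the given (distinct) numbers
    and sums to the target -- one pass over the candidate."""
    return set(candidate) <= set(numbers) and sum(candidate) == target

def solve(numbers, target):
    """Slow: brute force over all subsets -- up to 2^n of them."""
    for r in range(len(numbers) + 1):
        for combo in combinations(numbers, r):
            if sum(combo) == target:
                return combo
    return None

nums = [3, 34, 4, 12, 5, 2]
answer = solve(nums, 9)
print(answer, verify(nums, 9, answer))  # (4, 5) True
```

If P were equal to NP, a method as fast (in the technical sense) as `verify` would exist for every problem like this; nobody has found one, and nobody has proved it impossible.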

This is a maths challenge with unusually direct practical implications. If NP=P, we would know that, for some computing problems that are currently very slow to solve, such as finding the optimal solution to a complex routing problem, there is in fact a relatively efficient way to get the answer. The problem has philosophical ramifications too. If NP=P, this would imply that anyone who can understand Andrew Wiles’ solution to Fermat’s Last Theorem (which is more of us than you might think, given the right guidance) could also in principle have found it. The rare genius with privileged insight would vanish.

Perhaps the heir to the mystique of Fermat’s Last Theorem, meanwhile, is another of the Millennium Problems: the Riemann hypothesis. This is also about prime numbers. They keep popping up as one advances through the integers, and the question is: is there any pattern to the way they are distributed? The Riemann hypothesis implies something about that, although the link isn’t obvious. Its immediate concern is the Riemann zeta function, denoted ζ(s), which is equal to the infinite sum 1/1^s + 1/2^s + 1/3^s + …, where s is a complex number, meaning that it contains a real part (an ‘ordinary’ number) and an imaginary part incorporating the square root of -1. (Already I’m skimping on details.) If you plot a graph of the curve ζ as a function of s, you’ll find that for certain values of s it is equal to zero. Here’s Riemann’s hypothesis: that the values of s for which ζ(s)=0 are always (sorry, with the exception of the negative even integers) complex numbers for which the real part is precisely ½. It turns out that these zero values of ζ determine how far successive prime numbers deviate from the smooth distribution predicted by the so-called prime number theorem. Partly because it pronounces on the distribution of prime numbers, if the Riemann hypothesis can be shown to be true then several other important conjectures would also be proved.
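You can get a feel for the zeta function by summing the series numerically, with one big caveat: the series as written only converges when the real part of s is greater than 1, and the zeros of the hypothesis live in a region where the function must be extended by a technique called analytic continuation, which is well beyond this sketch. Still, for real s > 1 a partial sum is straightforward (the function name is mine):

```python
import math

def zeta_partial(s, terms=100_000):
    """Partial sum of 1/1^s + 1/2^s + 1/3^s + ...; converges only for s > 1."""
    return sum(1 / n ** s for n in range(1, terms + 1))

# A classic sanity check: Euler showed zeta(2) = pi^2 / 6.
print(zeta_partial(2))    # ~1.64492
print(math.pi ** 2 / 6)   # ~1.64493
```

That it is this function, continued into the complex plane, which secretly governs the spacing of the primes is one of the strangest facts in mathematics.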

The distribution of the primes sets the context for a recent instructive episode in the way maths is done. Although primes become ever rarer as the numbers get bigger, every so often two primes are consecutive odd numbers: so-called twins, such as 26681 and 26683. But do these ‘twin primes’ keep cropping up forever? The (unproven) twin-primes hypothesis says that they do.
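Twin primes are easy to enumerate directly, which is part of what makes the conjecture so tantalizing: the pairs keep appearing as far as anyone has looked. A quick sieve-based sketch (the function name is mine):

```python
def twin_primes_up_to(n):
    """List all twin-prime pairs (p, p+2) with p + 2 <= n."""
    sieve = [True] * (n + 1)  # sieve of Eratosthenes
    sieve[:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [(p, p + 2) for p in range(2, n - 1) if sieve[p] and sieve[p + 2]]

print(twin_primes_up_to(100))
# [(3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43), (59, 61), (71, 73)]
```

No such enumeration can settle whether the pairs go on forever, any more than number-crunching can settle Goldbach’s conjecture.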

In April of last year, a relatively unknown mathematician at the University of New Hampshire named Yitang Zhang unveiled a proof of a ‘weaker’ version of the twin-primes hypothesis, which showed that there are infinitely many near-twins separated by less than 70 million. (That sounds like a much wider gap than 2, but it’s still relatively small when the primes themselves are gargantuan.) Zhang, a Chinese immigrant who had earlier been without an academic job for several years, fits the bill of the lone genius conquering a problem in seclusion. But after news of his breakthrough spread on maths blogs, something unusual happened. Others started chipping in to reduce Zhang’s bound of 70 million, and in June one of the world’s most celebrated mathematicians, Terence Tao at the University of California at Los Angeles, set up an online ‘crowdsourcing’ Polymath project to pool resources. Before long, 70 million had dropped to 4680. Now, thanks to work by a young researcher named James Maynard at the University of Montreal, it is down to 600.

This extraordinarily rapid progress on a previously recalcitrant problem was thus a collective effort: maths isn’t just about secret labours by individuals. And while the shy, almost gnomically terse Zhang might fit the popular image, the gregarious and personable Tao does not.

What’s more, while projects like the Millennium Problems play to the image of maths as a set of peaks to scale, mathematicians themselves value other traits besides the ability to crack a tough problem. Abel laureates are commonly researchers who have forged new tools and revealed new connections between different branches of mathematics. Last year’s winner, the Belgian Pierre Deligne, who among other things solved a problem in algebraic geometry analogous to the Riemann hypothesis, was praised for being a “theory builder” as well as a “problem solver”, and the 2011 recipient John Milnor was lauded as a polymath who “opened up new fields”. The message for the young mathematician, then, might be not to lock yourself away but to broaden your horizons.