Friday, April 13, 2018

The thousand-year song

In February I had the pleasure of meeting Jem Finer, the founder of the Longplayer project, to discuss the “music of the future” at this event in London. It seemed a perfect subject for my latest column for Sapere magazine on music cognition, where it will appear in Italian. Here it is in English.
______________________________________________________________

Most people will have experienced music that seemed to go on forever, and usually that’s not a good thing. But Longplayer, a composition by British musician Jem Finer, a founder member of the band The Pogues, really does. It’s a piece conceived on a geological timescale, lasting for a thousand years. So far, only 18 of them have been performed – but the performance is ongoing even as you read this. It began at the turn of the new millennium and will end on 31 December 2999. Longplayer can be heard online and at various listening posts around the world, the most evocative being a Victorian lighthouse in London’s docklands.

Longplayer is scored for a set of Tibetan singing bowls, each of which sounds in a repeating pattern determined by a mathematical algorithm that will not repeat any combination exactly until one thousand years have passed. The parts interweave in complex, constantly shifting ways, not unlike compositions such as Steve Reich’s Piano Phase in which repeating patterns move in and out of step. Right now Longplayer sounds rather serene and meditative, but Finer says that there are going to be pretty chaotic, discordant passages ahead, lasting for decades at a time – albeit not in his or my lifetime.
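For anyone who likes to see the arithmetic behind this kind of phasing, here is a minimal sketch of the general principle – emphatically not Finer’s actual algorithm – namely that loops of different lengths only come back into alignment after the least common multiple of all their lengths, which grows very quickly. The loop lengths below are invented purely for illustration.

```python
# Hypothetical illustration of the phasing principle behind pieces like
# Longplayer and Piano Phase: loops of different lengths drift against one
# another, and the whole texture only repeats after the least common
# multiple (LCM) of the loop lengths. These loop lengths are invented for
# illustration; this is not Finer's actual algorithm.
from functools import reduce
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

# Six imaginary bowl parts, loop lengths in seconds (chosen to share no common factors)
loop_lengths = [233, 1009, 3607, 7919, 104729, 1299709]

combined_period = reduce(lcm, loop_lengths)      # seconds before the texture repeats
years = combined_period / (365.25 * 24 * 3600)

print(f"Combined cycle: about {years:,.0f} years")  # vastly more than a millennium
```

With even a handful of loops whose lengths share no common factors, the combined cycle shoots far past a thousand years – which is why two near-identical patterns drifting against each other, as in Piano Phase, can feel inexhaustible.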


The visual score of Longplayer. (Image: Jem Finer/Longplayer Foundation)


An installation of Tibetan singing bowls used for Longplayer at Trinity Buoy Wharf, in London’s Docklands. (Photo: James Whitaker)

One way to regard Longplayer is as a kind of conceptual artwork, taking with a pinch of salt the idea that it will be playing in a century’s time, let alone a millennium. Finer, though, has careful plans for how to sustain the piece into the indefinite future in the face of technological and social change. There’s no doubt that performance is a strong feature of the project: live events playing part of the piece have been rather beautiful, the instruments arrayed in concentric circles that reflect both the score itself and the sense of planetary orbits unfurling in slow, dignified synchrony.

But if this all seems ritualistic, so is a great deal of music. I do think Longplayer is a serious musical adventure, not least in how it both emphasizes and challenges the central cognitive process involved in listening: our perception of pattern and regularity. Those are the building blocks of this piece, and yet they take place mostly beyond the scope of an individual’s perception, forcing us – as perhaps the pointillistic dissonance of Pierre Boulez’s total serialism does – to find new ways of listening.

More than this, though, Longplayer connects to the persistence of music through the “deep time” of humanity, offering a message of determination and hope. Tectonic plates may shift, the climate may change, we might even reinvent ourselves – but we will do our best to ensure that this expression of ourselves will endure.


A live performance of part of Longplayer at the Yerba Buena Center, San Francisco, in 2010. (Photo: Stephen Hill)

Thursday, March 01, 2018

On the pros and cons of showing copy to sources - redux

Dana Smith has written a nice article for Undark about whether science journalists should or should not show drafts or quotes to their scientist sources before publication.

I’ve been thinking about this some more after writing the blog entry from which Dana quotes. One issue that I think comes out of Dana’s piece is that there is perhaps something of a generational divide here: I sense that younger writers are more likely to consider it ethically questionable ever to show drafts to sources, while old’uns like me, Gary Stix and John Rennie have less of a problem with it. And I wonder if this has something to do with the fact that the old’uns probably didn’t get much in the way of formal journalistic training (apologies to Gary and John if I’m wrong!), because science writers rarely did back then. I have the impression that “never show anything to sources” is a notion that has entered into science writing from other journalistic practice, and I do wonder if it has acquired something of the status of dogma in the process.

Erin Biba suggests that the onus is on the reporter to get the facts right. I fully agree that we have that responsibility. But frankly, we will often not get the facts right. Science is not uniquely hard, but it absolutely is hard. Even when we think we know a topic well and have done our best to tell it correctly, chances are that there are small, and sometimes big, ways in which we’ll miss what real experts will see. To suggest that asking the experts is “the easy way out” sounds massively hubristic to me.

(Incidentally, I’m not too fussed about the matter of checking quotes. If I show drafts, it’s to check whether I have got any of the scientific details wrong. I often tend to leave in quotes just because there doesn’t seem much point in removing them – they are very rarely queried – but I might omit critical quotes from others to avoid arguments that might otherwise end up needing third-party peer review.)

Dana doesn’t so much go into the arguments for why it is so terrible (in the view of some) to show your copy to sources. She mentions that some say it’s a matter of “journalistic integrity”, or just that it’s a “hard rule” – which makes the practice sound terribly transgressive. But why? The argument often seems to be, “Well, the scientists will get you to change your story to suit them.” To which I say, “Why on earth would I let them do that?” In the face of such attempts (which I’ve hardly ever encountered), why do I not just say, “Sorry, no”? Oh, but you’ll not be able to resist, will you? You have no will and judgement. You’re just a journalist.

Some folks, it’s true, say instead “Oh, I know you’ll feel confident and assertive enough to resist undue pressure to change the message, but some younger reporters will be more vulnerable, so it’s safer to have a blanket policy.” I can see that point, and am not unsympathetic to it (although I do wonder whether journalistic training might focus less on conveying the evils of showing copy to sources and more on developing skills and resources for resisting such pressures). But so long as I’m able to work as a freelancer on my own terms, I’ll continue to do it this way: to use what is useful and discard what is not. I don’t believe it is so hard to tell the difference, and I don’t think it is very helpful to teach science journalists that the only way you can insulate yourself from bad advice is to cut yourself off from good advice too.

Here’s an example of why we science writers would be unwise to trust we can assess the correctness of our writing ourselves, and why experts can be helpful if used judiciously. I have just written a book on quantum mechanics. I have immersed myself in the field, talked to many experts, read masses of books and papers, and generally informed myself about the topic in far, far greater detail than any reporter could be expected to do in the course of writing a news story on the subject. That’s why, when a Chinese team reported last year that they had achieved quantum teleportation between a ground base and a satellite, I felt able to write a piece for Nature explaining what this really means, and pointing out some common misconceptions in the reporting of it.

And I feel – and hope – I managed to do that. But I got something wrong.

It was not a major thing, and didn’t alter the main point of the article, but it was a statement that was wrong.

I discovered this only when, in correspondence with a quantum physicist, he happened to mention in passing that one of his colleagues had criticized my article for this error in a blog. So I contacted the chap in question and had a fruitful exchange. He asserted that there were some other dubious statements in my piece too, but on that matter I replied that he had either misunderstood what I was saying or was presenting an unbalanced view of the diversity of opinion. The point was, it was very much a give-and-take interaction. But it was clear that on this one point he was right and I was wrong – so I got the correction made.

Now, had I sent my draft to a physicist working on quantum teleportation, I strongly suspect that my error would have been spotted right away. (And I do think it would have had to be a specialist in that particular field, not just a random quantum physicist, for the mistake to have been noticed.) I didn’t do so partly because I had no real sources in this case to bounce off, but also partly because I had a false sense of my own “mastery” of the topic. And this will happen all the time – it will happen not because we writers don’t feel confident in our knowledge of the topic, but precisely because we do feel (falsely) confident in it. I cannot for the life of me see why some imported norm from elsewhere in journalism makes it “unethical” to seek expert advice in a case like this – not advice before we write, but advice on what we have actually written.

Erin is right to say that most mistakes, like mine here, really aren’t a big deal. They’re not going to damage a scientist’s career or seriously mislead the public. And of course we should admit to and correct them when they happen. But why let them happen more often than they need to?

As it happens, having said earlier that I very rarely get responses from scientists to whom I’ve shown drafts beyond some technical clarifications, I recently wrote two pieces that were less straightforward. Both were on topics that I knew to be controversial. And in both cases I received some comments that made me suspect their authors were wanting to somewhat dictate the message, taking issue with some of the things the “other side” said.

But this was not a problem. I thought carefully about what they said, took on board some clearly factual remarks, considered whether the language I’d used captured the right nuance in some other places, and simply decided I would respectfully decline to make any modifications to my text in others. Everything was on a case-by-case basis. These scientists were in return very respectful of my position. They seemed to feel that I’d heard and considered their point of view, and that I had priorities and obligations different from theirs. I felt that my pieces were better as a result, without my independence being at all compromised, and they were happy with the outcome. Everyone, including the readers, was better served as a result of the exchange. I’m quite baffled by how there could be deemed to be anything unethical in that.

And that’s one of the things that makes me particularly uneasy about how showing any copy to sources is sometimes presented not as an informed choice but as tantamount to breaking a professional code. I’ve got little time for the notion that it conflicts with the journalist’s mission to critique science and not merely act as its cheerleader. Getting your facts right and sticking to your guns are separate matters. Indeed, I have witnessed plenty of times the way in which a scientist who is being (or merely feels) criticized will happily seize on any small errors (or just misunderstandings of what you’ve written) as a way of undermining the validity of the whole piece. Why give them that opportunity after the fact? The more airtight a piece is factually, the more authoritative the critique will be seen to be.

I should add that I absolutely agree with Erin that the headlines our articles are sometimes given are bad, misleading and occasionally sensationalist. I’ve discussed this too with some of my colleagues recently, and I agree that we writers have to take some responsibility for this, challenging our editors when it happens. It’s not always a clear-cut issue: I’ve received occasional moans from scientists and others about a headline that didn’t quite get the right nuance, but which I thought wasn’t so bad, and so I’m not inclined to start badgering folks about that. (I wouldn’t have used the headline that Nature gave my quantum teleportation piece, but hey.) But I think magazines and other outlets have to be open to this sort of feedback – I was disheartened to find that one that I challenged recently was not. (I should say that others are – Prospect has always been particularly good at making changes if I feel the headlines for my online pieces convey the wrong message.) As Chris Chambers has rightly tweeted, we’re all responsible for this stuff: writers, editors, scientists. So we need to work together – which also means standing up against one another when necessary, rather than simply not talking.

Sunday, February 04, 2018

Should you send the scientist your draft article?

The Twitter discussion sparked by this poll was very illuminating. There’s a clear sense that scientists largely think they should be entitled to review quotes they make to a journalist (and perhaps to see the whole piece), while journalists say absolutely not, that’s not the way journalism works.

Of course (well, I say that but I’m not sure it’s obvious to everyone), the choices are not: (1) Journalist speaks to scientist, writes the piece, publishes; or (2) Journalist speaks to scientist, sends the scientist the piece so that the scientist can change it to their whim, publishes.

What more generally happens is that, after the draft is submitted to the editor, articles get fact-checked by the publication before they appear. Typically this involves a fact-checker calling up the scientist and saying “Did you basically say X?” (usually with a light paraphrase). The fact-checker also typically asks the writer to send transcripts of interviews, to forward email exchanges etc, as well as to provide links or references to back up factual statements in the piece. This is, of course, time-consuming, and the extent to which, and the rigour with which, it is done depends on the resources of the publication. Some science publications, like Quanta, have a great fact-checking machinery. Some smaller or more specialized journals don’t really have much of it at all, and might rely on an alert subeditor to spot things that look questionable.

This means that scientists have no way of knowing, when they give an interview, how accurately they are going to be quoted – though in some cases the writer can reassure them that a fact-checker will get in touch to check quotes. But – and this is the point many of the comments on the poll don’t quite acknowledge – it is not all about quotes! Many scientists are equally concerned about whether their work will be described accurately. If they don’t get to see any of the draft and are just asked about quotes, there is no way to ensure this.

One might say that it’s the responsibility of the writer to get that right. Of course it is. And they’ll do their best, for sure. But I don’t think I’ll be underestimating the awesomeness of my colleagues when I say that we will get it wrong. We will get it wrong often. Usually this will be in little ways. We slightly misunderstood the explanation of the technique, we didn’t appreciate nuances and so our paraphrasing wasn’t quite apt, or – this is not uncommon – what the scientist wrote, and which we confidently repeated in simpler words, was not exactly what they meant. Sometimes our oversights and errors will be bigger. And if the reporter who has read the papers and talked with the scientists still didn’t quite get it right, what chance is there that even the most diligent fact-checker (and boy are they diligent) will spot that?

OK, mistakes happen. But they don’t have to, or not so often, if the scientist gets to see the text.

Now, I completely understand the arguments for why it might not be a good idea to show a draft to the people whose work is being discussed. The scientists might interfere to try to bend the text in their favour. They might insist that their critics, quoted in the piece, are talking nonsense and must be omitted. They might want to take back something they said, having got cold feet. Clearly, a practice like that couldn’t work in political writing.

Here, though, is what I don’t understand. What is to stop the writer saying No, that stays as it is? Sure, the scientist will be pissed off. But the scientist would be no less pissed off if the piece appeared without them ever having seen it.

Folks at Nature have told me, Well sometimes it’s not just a matter of scientists trying to interfere. On some sensitive subjects, they might get legal. And I can see that there are some stories, for example looking at misconduct or dodgy dealings by a pharmaceutical company, where passing round a draft is asking for trouble. Nature says that if they have a blanket policy so that the writer can just say Sorry, we don’t do that, it makes things much more clear-cut for everyone. I get that, and I respect it.

But my own personal preference is for discretion, not blanket policies. If you’re writing about, say, topological phases and it is brain-busting stuff, trying to think up paraphrases that will accurately reflect what you have said (or what the writer has said) to the interviewee while fact-checking seems a bit crazy when you could just show the researcher the way you described a Dirac fermion and ask them if it’s right. (I should say that I think Nature would buy that too in this situation.)

What’s more, there’s no reason on earth why a writer could not show a researcher a draft minus the comments that others have made on their work, so as to focus just on getting the facts right.

The real reason I feel deeply uncomfortable about the way that showing interviewees a draft is increasingly frowned upon, and even considered “highly unethical”, is however empirical. In decades of having done this whenever I could, and whenever I thought it advisable, I struggle to think of a single instance where a scientist came back with anything obstructive or unhelpful. Almost without exception they have been incredibly generous and understanding, and any comments they made have improved the piece: by pointing out errors, offering better explanations or expanding on nuances. The accuracy of my writing has undoubtedly been enhanced as a result.

Indeed, writers of Focus articles for the American Physical Society, which report on papers generally from the Phys Rev journals, are requested to send articles to the papers’ authors before publication, and sometimes to get the authors to respond to criticisms raised by advisers. And this is done explicitly with the readers in mind: to ensure that the stories are as accurate as possible, and that they get some sense of the to-and-fro of questions raised. Now, it’s a very particular style of journalism at Focus, and wouldn’t work for everyone; but I believe it is a very defensible policy.

The New York Times explained its "no show" policy in 2012, and it made a lot of sense: it seems some political spokespeople and organizations were demanding quote approval and abusing it to exert control over what was reported. Press aides wanted to vet everything. This was clearly compromising to open and balanced reporting.

But I have never encountered anything like that in many years of science reporting. That's not surprising, because it is (at least when we are reporting on scientific papers for the scientific press) a completely different ball game. Occasionally I have had people working at private companies needing to get their answers to my questions checked by the PR department before passing them on to me. That's tedious, but if it means that what results is something extremely anodyne, I just won't use it. I've also found some institutions - the NIH is particularly bad at this - reluctant to let their scientists speak at all, so that questions get fielded by a PR person who responds with such pathetic blandness and generality that it's a waste of everyone's time. It's a dereliction of duty for state-funded scientific research, but that's another issue.

As it happens, just recently while writing on a controversial topic in physical chemistry, I encountered the extremely rare situation where, having shown my interviewees a draft, one scientist told me that it was wrong for those in the other camp to be claiming X, because the scientific facts of the matter had been clearly established and they were not X. So I said fine, I can quote you as saying “The facts of the matter are not X” – but I will keep the others insisting that X is in fact the case. And I will retain the authorial voice implying that the matter is still being debated and is certainly not settled. And this guy was totally understanding and reasonable, and respected my position. This was no more or less than I had anticipated, given the way most scientists are.

In short, while I appreciate that an insistence that we writers not show drafts to the scientists is often made in an attempt to save us from being put in an awkward situation, in fact it can feel as though we are being treated as credulous dupes who cannot stand up to obstruction and bullying (if it should arise, which in my experience it hasn’t in this context), or resist manipulation, or make up our own minds about the right way to tell the story.

There’s another reason why I prefer to ask the scientists to review my texts, though – which is that I also write books. In non-fiction writing there simply is not this notion that you show no one except your editor the text before publication. To do so would be utter bloody madness. Because You Will Get Things Wrong – but with expert eyes seeing the draft, you will get much less wrong. I have always tried to get experts to read drafts of my books, or relevant parts of them, before publication, and I always thank God that I did and am deeply grateful that many scientists are generous enough to take on that onerous task (believe me, not all other disciplines have a tradition of being so forthcoming with help and advice). Always when I do this, I have no doubt that I am the author, and that I get the final say about what is said and how. But I have never had a single expert reader who has been anything but helpful, sympathetic and understanding. (Referees of books for academic publishers, however – now that’s another matter entirely. Don’t get me started.)

I seem to be in a minority here. And I may be misunderstanding something. Certainly, I fully understand why some science writers, writing some kinds of stories, would find it necessary to refuse to show copy to interviewees before publication. What's more, I will always respect editors’ requests not to show drafts of articles to interviewees. But I will continue to do so, when I think it is advisable, unless requested to do otherwise.

Friday, January 05, 2018

What to look out for in science in 2018

I wrote a piece for the Guardian on what we might expect in science, and what some of the big issues will be, in 2018. It was originally somewhat longer than the paper could accommodate, explaining some issues in more detail. Here’s that longer version.

_____________________________________________________

Quantum computers
This will be the year when we see a quantum computer solve some computational problem beyond the means of the conventional ‘classical’ computers we currently use. Quantum computers use the rules of quantum mechanics to manipulate binary data – streams of 1s and 0s – and this potentially makes them much more powerful than classical devices. At the start of 2017 the best quantum computers had only around 5 quantum bits (qubits), compared to the billions of transistor-based bits in a laptop. By the close of the year, companies like IBM and Google were saying that they were testing devices with ten times that number of qubits. It still doesn’t sound like much, but many researchers think that just 50 qubits could be enough to achieve “quantum supremacy” – the solution of a task that would take a classical computer so long as to be practically impossible. This doesn’t mean that quantum computers are about to take over the computer industry. For one thing, they can so far only carry out certain types of calculation, and dealing with random errors in the calculations is still extremely challenging. But 2018 will be the year that quantum computing changes from a specialized game for scientists to a genuine commercial proposition.
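To see, in rough terms, why 50 qubits is the figure so often quoted: simulating a quantum computer exactly on a classical machine means storing one complex amplitude for every possible state of the qubits, and that number doubles with each qubit added. The sketch below is an order-of-magnitude argument only, not a rigorous bound, and the memory figure assumes a standard double-precision encoding.

```python
# Back-of-the-envelope estimate of the memory needed to simulate n qubits
# exactly on a classical computer: 2**n complex amplitudes, each stored as
# two 8-byte floating-point numbers. Order-of-magnitude reasoning only.
n_qubits = 50
amplitudes = 2 ** n_qubits          # number of complex amplitudes to track
bytes_needed = amplitudes * 16      # 16 bytes per complex double
petabytes = bytes_needed / 1e15

print(f"{n_qubits} qubits: {amplitudes:.1e} amplitudes, about {petabytes:.0f} PB of memory")
# Around 18 PB – far beyond the memory of any single classical machine today.
```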

Quantum internet
Using quantum rules for processing information has more advantages than just a huge speed-up. These rules make possible some tricks that just aren’t imaginable using classical physics. Information encoded in qubits can be encrypted and transmitted from a sender to a receiver in a form that can’t be intercepted and read without that eavesdropping being detectable by the receiver, a method called quantum cryptography. And the information encoded in one particle can in effect be switched to another identical particle in a process dubbed quantum teleportation. In 2017 Chinese researchers demonstrated quantum teleportation in a light signal sent between a ground-based source and a space satellite. China has more “quantum-capable” satellites planned, as well as a network of ground-based fibre-optic cables, which together will ultimately comprise an international “quantum internet”. This network could support cloud-based quantum computing, quantum cryptography and surely other functions not even thought of yet. Many experts put that at a decade or so off, but we can expect more trials – and inventions – of quantum network technologies this year.

RNA therapies
The announcement in December of a potential new treatment for Huntington’s disease, an inheritable neurodegenerative disease for which there is no known cure, has implications that go beyond this particularly nasty affliction. Like many dementia-associated neurodegenerative diseases such as Parkinson’s and Alzheimer’s, Huntington’s is caused by a protein molecule involved in regular brain function that can ‘misfold’ into a form that is toxic to brain cells. In Huntington’s, which currently affects around 8,500 people in the UK, the faulty protein is produced by a mutation of a single gene. The new treatment, developed by researchers at University College London, uses a short strand of DNA that, when injected into the spinal cord, attaches to an intermediary molecule involved in translating the mutated gene to the protein and stops that process from happening. The strategy was regarded by some researchers as unlikely to succeed. The fact that the current preliminary tests proved dramatically effective at lowering the levels of toxic protein in the brain suggests that the method might be a good option not just for arresting Huntington’s but other similar conditions, and we can expect to see many labs trying it out. The real potential of this new drug will become clearer when the Swiss pharmaceuticals company Roche begins large-scale clinical trials.

Gene-editing medicine
Diseases that have a well defined genetic cause, due perhaps to just one or a few genes, can potentially be cured by replacing the mutant genes with properly functioning, healthy ones. That’s the basis of gene therapies, which have been talked about for years but have so far failed to deliver on their promise. The discovery in 2012 of a set of molecular tools, called CRISPR-Cas9, for targeting and editing genes with great accuracy has revitalized interest in attacking such genetic diseases at their root. Some studies in the past year or two have shown that CRISPR-Cas9 can correct faulty genes in mice, responsible for example for liver disease or a mouse form of muscular dystrophy. But is the method safe enough for human use? Clinical trials kicked off in 2017, particularly in China but also the US; some are aiming to suppress the AIDS virus HIV, others to tackle cancer-inducing genetic mutations. It should start to become clear in 2018 just how effective and safe these procedures are – but if the results are good, the approach might be nothing short of revolutionary.

High-speed X-ray movies
Developing drugs and curing disease often relies on an intimate knowledge of the underlying molecular processes, and in particular on the shape, structure and movements of protein molecules, which orchestrate most of the molecular choreography of our cells. The most powerful method of studying those details of form and function is crystallography, which involves bouncing beams of X-rays (or sometimes of particles such as electrons or neutrons) off crystals of the proteins and mathematically analysing the patterns in the scattered beams. This approach is tricky, or even impossible, for proteins that don’t form crystals, and it only gives ‘frozen’ structures that might not reflect the behaviour of floppy proteins inside real cells. A new generation of instruments called X-ray free-electron lasers, which use particle-accelerator technologies developed for physics to produce extremely bright X-ray beams, can give a sharper view. In principle they can produce snapshots from single protein molecules rather than crystals containing billions of them, as well as offering movies of proteins in motion at trillions of frames per second. A new European X-ray free-electron laser in Hamburg inaugurated in September is the fastest and brightest to date, while two others in Switzerland and South Korea are starting up too, and another at Stanford in California is getting an ambitious upgrade. As these instruments host their first experiments in 2018, researchers will acquire a new window into the molecular world.

100,000 genomes
By the end of 2018 the private company Genomics England, set up by the UK Department of Health, should have completed its goal of reading the genetic information in 100,000 genomes of around 75,000 voluntary participants. About a third of these people will be cancer patients, who will have a separate genome read from cancer cells and healthy cells; the others will be people with rare genetic diseases and their close relatives. With such a huge volume of data, it should be possible to identify gene mutations linked to cancer and to some of the many thousands of known rare diseases. This information could help with the diagnosis of cancer and rare disease, and perhaps also improve treatments. For example, a gene mutation that causes a rare disease (one of which is likely to affect around one person in 17 at some point in their lives) supplies a possible target for new drugs. Genetic information for cancer patients can also help to tailor specific treatments, for example by identifying those not at risk of side effects from what can otherwise be effective anti-cancer drugs.

Gravitational-wave astronomy
The 2017 Nobel prize in physics was awarded to the chief movers behind LIGO, the US project to detect gravitational waves. These are ripples in spacetime caused by extreme astrophysical events such as the merging of two neutron stars or black holes, which have ultra-strong gravitational fields. The ripples produce tiny changes in the dimensions of space itself as they pass, which LIGO – comprising two instruments in Washington State and Louisiana – detects from changes in the distances travelled by laser beams sent along channels to mirrors a few kilometres away. The first gravitational wave was detected in late 2015 and announced in 2016. Last year saw the announcement of a few more detections, including one in August from the first known collision of two neutron stars. Gravitational-wave detectors now also exist or are being built in Europe, Korea and Japan, while others are planned that will use space satellites. The field is already maturing into a new form of astronomy that can ‘see’ some of the most cataclysmic events in the universe – and which so far fully confirm Einstein’s theory of general relativity, which explains gravitation. We can expect to see more cataclysmic events detected in 2018 as gravitational-wave astronomy becomes a regular tool in the astronomer’s toolkit.
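To get a feel for just how tiny the changes LIGO measures are, here is a rough order-of-magnitude sketch. The strain value and arm length below are typical published figures used only for illustration; the exact numbers vary from event to event.

```python
# Order-of-magnitude sketch of the length change LIGO measures, using
# typical published figures: a gravitational-wave strain of ~1e-21 acting
# on a 4 km interferometer arm. Exact values differ from event to event.
strain = 1e-21            # fractional change in length for a typical detection
arm_length_m = 4000.0     # approximate length of a LIGO arm in metres
proton_diameter_m = 1.7e-15

delta_L = strain * arm_length_m
print(f"Arm length change: {delta_L:.1e} m "
      f"({delta_L / proton_diameter_m:.1e} proton diameters)")
# Roughly 4e-18 m – a few thousandths of the diameter of a proton.
```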

Beyond the standard model
It’s a glorious time for fundamental physics – but not necessarily for the reasons physicists might hope. The so-called standard model of particle physics, which accounts for all the known particles and forces in nature, was completed in 2012 with the discovery of the Higgs boson using the Large Hadron Collider (LHC), the world’s most powerful particle accelerator, at CERN in Switzerland. The trouble is, it can’t be the whole story. The two most profound theories of physics – general relativity (which describes gravity) and quantum mechanics – are incompatible; they can’t both be right as they stand. That problem has loomed for decades, but it’s starting to feel embarrassing. Physicists have so far failed to find ways of breaking out beyond the standard model and finding ‘new physics’ that could show the way forward. String theory offers one possible route to a theory of quantum gravity, but there’s no experimental evidence for it. What’s needed is some clue from particle-smashing experiments for how to extend the standard model: some glimpse of particles, forces or effects outside the current paradigm. Researchers were hoping that the LHC might have supplied that already – in particular, many anticipated finding support for the theory called supersymmetry which some see as the best candidate for the requisite new physics. But so far there’s been zilch. If another year goes by without any chink in the armour appearing, the head-scratching may turn into hair-pulling.

Crunch time for dark matter
That’s not the only embarrassment for physics. It’s been agreed for decades that the universe must contain large amounts of so-called dark matter – about five times as much, in terms of mass, as all the matter visible as stars, galaxies, and dust. This dark matter appears to exert a gravitational tug while not interacting significantly with ordinary matter or light (whence the ‘dark’) in other ways. But no one has any idea what this dark matter consists of. Experiments have been trying to detect it for years, primarily by looking for very rare collisions of putative dark-matter particles with ordinary particles in detectors buried deep underground (to avoid spurious detections caused by other particles such as cosmic rays) or in space. All have drawn a blank, including results from separate experiments in China, Italy and Canada reported in the late summer and early autumn. The situation is becoming grave enough for some researchers to start taking more seriously suggestions that what looks like dark matter is in fact a consequence of something else – such as a new force that modifies the apparent effects of gravity. This year could prove to be crunch time for dark matter: how long do we persist in believing in something when there’s no direct evidence for it?

Return to the moon
In 2018, the moon is the spacefarer’s destination of choice. Among several planned missions, China’s ongoing unmanned lunar exploration programme called Chang’e (after a goddess who took up residence there) will enter its fourth phase in June with the launch of a satellite to orbit the moon’s ‘dark side’ (the face permanently facing away from the Earth, although it is not actually in perpetual darkness). That craft will then provide a communications link to guide the Long March 5 rocket that should head out to this hidden face of the moon in 2019. The rocket will carry a robotic lander and rover vehicle to gather information about the mineral composition of the moon, including the amount of water ice in the south polar basin. It’s all the prelude to a planned mission in the 2030s that will take Chinese astronauts to the lunar surface. Meanwhile, tech entrepreneur Elon Musk has claimed that his spaceflight business SpaceX will be ready to fly two paying tourists around the moon this year in the Falcon Heavy rocket and the Dragon capsule the company has developed. Since neither craft has yet had a test flight, you’d best not hold your breath (let alone try to buy a ticket) – but the rocket will at least get its trial launch this year.

Highway to hell
Exploration of the solar system won’t all be about the moon, however. The European Space Agency and the Japanese Aerospace Exploration Agency are collaborating on the BepiColombo mission, which will set off in October on a seven-year journey to Mercury, the smallest planet in the solar system and the closest to the Sun. Like the distant dwarf planet Pluto until the arrival of NASA’s New Horizons mission in 2015, Mercury has been a neglected little guy in our cosmic neighbourhood. That’s partly because of the extreme conditions it experiences: the sunny side of the planet reaches a hellish 430 °C or so, and the orbiting spacecraft will feel heat of up to 350 °C – although the permanently shadowed craters of Mercury’s polar regions stay cold enough to hold ice. BepiColombo (named after the renowned Italian astronomer Giuseppe Colombo) should provide information not just about the planet itself but about the formation of the entire solar system.

Planets everywhere
While there is still plenty to be learnt about our close planetary neighbours, their quirks and attractions have been put in cosmic perspective by the ever-growing catalogue of “exoplanets” orbiting other stars. Over the past two decades the list has grown to nearly 4,000, with many other candidates still being considered. The majority of these were detected by the Kepler space telescope, launched in 2009, which identifies planets from the very slight dimming of their parent star as the planet passes in front (a ‘transit’). But the search for other worlds will hot up in 2018 with the launch of NASA’s Transiting Exoplanet Survey Satellite, which will monitor the brightness of around 200,000 stars during its two-year mission. Astronomers are particularly interested in finding ‘Earth-like planets’, with a size, density and orbit comparable to those of Earth and which might therefore host liquid water – and life. Such candidates should then be studied in more detail by the James Webb Space Telescope, a US-European-Canadian collaboration widely regarded as the successor to the Hubble Space Telescope, due for launch in spring 2019. The Webb might be able to detect possible signatures of life within the chemical composition of exoplanet atmospheres, such as the presence of oxygen. With luck, within just a couple of years or so we may have good reason to suspect we are not alone in the universe.
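As a rough guide to how the transit method works: the fractional dip in starlight is approximately the square of the ratio of the planet’s radius to the star’s radius. The sketch below uses approximate textbook radii, purely for illustration, to show why Earth-size transits demand such sensitive photometry.

```python
# Rough illustration of the transit method used by Kepler and TESS: the
# fractional dimming of the star is approximately (R_planet / R_star)**2.
# Radii below are approximate values, used for illustration only.
r_earth_km = 6371.0
r_jupiter_km = 69911.0
r_sun_km = 695700.0

for name, r_planet in [("Earth-size", r_earth_km), ("Jupiter-size", r_jupiter_km)]:
    depth = (r_planet / r_sun_km) ** 2
    print(f"{name} planet transiting a Sun-like star: {depth:.4%} dip in brightness")
# An Earth-size planet dims a Sun-like star by only about 0.008 per cent,
# which is why space-based photometry is needed to spot such transits.
```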

Mapping the brain
It’s sometimes said, with good reason, that understanding outer space is easier than understanding inner space. The human brain is arguably the most complex object in the known universe, and while no one seems to be expecting any major breakthrough in 2018 in our view of how it works, we can expect to reach next Christmas with a lot more information. Over the summer of 2017 the €1bn European Human Brain Project got a reboot to steer it away from what many saw as an over-ambitious plan to simulate a human brain on a computer and towards a more realistic goal of mapping out its structure down to the level of connections between the billions of individual neurons. This shift in emphasis was triggered by an independent review of the project after 800 neuroscientists threatened to boycott it in 2014 because of concerns about the way it was being managed. One vision now is to create a kind of Google Brain, comparable to Google Earth, in which the brain structures underpinning such cognitive functions as memory and emotion can be ‘zoomed’ from the large scale revealed by MRI scanning down to the level of individual neurons. Such information might guide efforts to simulate more specific ‘subroutines’ of the brain. But one of the big challenges is simply how to collect, record and organize the immense volume of data these studies will produce.

Making clean energy
Amidst the excitement and allure of brains, genes, planets and the cosmos, it’s easy for the humbler sciences, such as chemistry, to get overlooked. That should change in 2019, which UNESCO has just designated as the International Year of the Periodic Table, chemistry’s organizing scheme of elements. But there are good reasons to keep an eye on the chemical sciences this year too, not least because they may hold the key to some of our most pressing global challenges. Since nature has no reason to heed the ignorance of the current US president, we can expect the global warming trend to continue – and some climate researchers believe that the only way to limit future warming to within 2 oC (and thus to avoid some extremely alarming consequences) is to develop chemical technologies for capturing and storing the greenhouse gas carbon dioxide from the atmosphere. At the start of 2017 a group of researchers warned that lack of investment in research on such “carbon capture and storage” technologies was one of the biggest obstacles to achieving this target. By the end of this year we may have a clearer view of whether industry and governments will rise to the challenge. In the meantime, development of carbon-free energy-generating technologies needs boosting too. The invention last year at the Massachusetts Institute of Technology of a device that uses an ultra-absorbent black “carbon nanomaterial” to convert solar heat to light suggests one way to make solar power more efficient, capturing more of the energy in the sun’s rays than current solar cells can manage even in principle. We can hope for more such innovation, as well as efforts to turn the smart science into commercially viable technologies. Don’t expect any single big breakthrough in these areas, though; success is likely to come, if at all, from a portfolio of options for making and using energy in greener ways.