Thursday, March 01, 2018

On the pros and cons of showing copy to sources - redux

Dana Smith has written a nice article for Undark about whether science journalists should or should not show drafts or quotes to their scientist sources before publication.

I’ve been thinking about this some more after writing the blog entry from which Dana quotes. One issue that comes out of Dana’s piece is that there is perhaps something of a generational divide here: I sense that younger writers are more likely to consider it ethically questionable ever to show drafts to sources, while old’uns like me, Gary Stix and John Rennie have less of a problem with it. And I wonder if this has something to do with the fact that the old’uns probably didn’t get much in the way of formal journalistic training (apologies to Gary and John if I’m wrong!), because science writers rarely did back then. I have the impression that “never show anything to sources” is a notion that has entered science writing from other journalistic practice, and I do wonder if it has acquired something of the status of dogma in the process.

Erin Biba suggests that the onus is on the reporter to get the facts right. I fully agree that we have that responsibility. But frankly, we will often not get the facts right. Science is not uniquely hard, but it absolutely is hard. Even when we think we know a topic well and have done our best to tell it correctly, chances are that there are small, and sometimes big, ways in which we’ll miss what real experts will see. To suggest that asking the experts is “the easy way out” sounds massively hubristic to me.

(Incidentally, I’m not too fussed about the matter of checking out quotes. If I show drafts, it’s to check whether I have got any of the scientific details wrong. I tend to leave in quotes just because there doesn’t seem much point in removing them – they are very rarely queried – but I might omit critical quotes from others to avoid arguments that might otherwise end up needing third-party peer review.)

Dana doesn’t go much into the arguments for why it is so terrible (in the view of some) to show your copy to sources. She mentions that some say it’s a matter of “journalistic integrity”, or just that it’s a “hard rule” – which makes the practice sound terribly transgressive. But why? The argument often seems to be, “Well, the scientists will get you to change your story to suit them.” To which I say, “Why on earth would I let them do that?” In the face of such attempts (which I’ve hardly ever encountered), why do I not just say, “Sorry, no”? Oh, but you’ll not be able to resist, will you? You have no will and judgement. You’re just a journalist.

Some folks, it’s true, say instead “Oh, I know you’ll feel confident and assertive enough to resist undue pressure to change the message, but some younger reporters will be more vulnerable, so it’s safer to have a blanket policy.” I can see that point, and am not unsympathetic to it (although I do wonder whether journalistic training might focus less on conveying the evils of showing copy to sources and more on developing skills and resources for resisting such pressures). But so long as I’m able to work as a freelancer on my own terms, I’ll continue to do it this way: to use what is useful and discard what is not. I don’t believe it is so hard to tell the difference, and I don’t think it is very helpful to teach science journalists that the only way you can insulate yourself from bad advice is to cut yourself off from good advice too.

Here’s an example of why we science writers would be unwise to trust that we can assess the correctness of our writing ourselves, and why experts can be helpful if used judiciously. I have just written a book on quantum mechanics. I have immersed myself in the field, talked to many experts, read masses of books and papers, and generally informed myself about the topic in far, far greater detail than any reporter could be expected to do in the course of writing a news story on the subject. That’s why, when a Chinese team reported last year that they had achieved quantum teleportation between a ground station and a satellite, I felt able to write a piece for Nature explaining what this really means, and pointing out some common misconceptions in the reporting of it.

And I feel – and hope – I managed to do that. But I got something wrong.

It was not a major thing, and didn’t alter the main point of the article, but it was a statement that was wrong.

I discovered this only when, in correspondence with a quantum physicist, he happened to mention in passing that one of his colleagues had criticized my article for this error in a blog. So I contacted the chap in question and had a fruitful exchange. He asserted that there were some other dubious statements in my piece too, but on that matter I replied that he had either misunderstood what I was saying or was presenting an unbalanced view of the diversity of opinion. The point was, it was very much a give-and-take interaction. But it was clear that on this one point he was right and I was wrong – so I got the correction made.

Now, had I sent my draft to a physicist working on quantum teleportation, I strongly suspect that my error would have been spotted right away. (And I do think it would have had to be a specialist in that particular field, not just a random quantum physicist, for the mistake to have been noticed.) I didn’t do so partly because I had no real sources in this case to bounce off, but also partly because I had a false sense of my own “mastery” of the topic. And this will happen all the time – it will happen not because we writers don’t feel confident in our knowledge of the topic, but precisely because we do feel (falsely) confident in it. I cannot for the life of me see why some imported norm from elsewhere in journalism makes it “unethical” to seek expert advice in a case like this – not advice before we write, but advice on what we have actually written.

Erin is right to say that most mistakes, like mine here, really aren’t a big deal. They’re not going to damage a scientist’s career or seriously mislead the public. And of course we should admit to and correct them when they happen. But why let them happen more often than they need to?

As it happens, having said earlier that I very rarely get responses from scientists to whom I’ve shown drafts beyond some technical clarifications, I recently wrote two pieces that were less straightforward. Both were on topics that I knew to be controversial. And in both cases I received some comments that made me suspect their authors wanted to dictate the message somewhat, taking issue with some of the things the “other side” said.

But this was not a problem. I thought carefully about what they said, took on board some clearly factual remarks, considered whether the language I’d used captured the right nuance in some other places, and simply decided I would respectfully decline to make any modifications to my text in others. Everything was on a case-by-case basis. These scientists were in return very respectful of my position. They seemed to feel that I’d heard and considered their position, and to accept that I had priorities and obligations different from theirs. I felt that my pieces were better as a result, without my independence being at all compromised, and they were happy with the outcome. Everyone, including the readers, was better served by the exchange. I’m quite baffled by how there could be deemed to be anything unethical in that.

And that’s one of the things that makes me particularly uneasy about how showing any copy to sources is sometimes presented not as an informed choice but as tantamount to breaking a professional code. I’ve got little time for the notion that it conflicts with the journalist’s mission to critique science and not merely act as its cheerleader. Getting your facts right and sticking to your guns are separate matters. Indeed, I have witnessed plenty of times the way in which a scientist who is being (or merely feels) criticized will happily seize on any small errors (or just misunderstandings of what you’ve written) as a way of undermining the validity of the whole piece. Why give them that opportunity after the fact? The more airtight a piece is factually, the more authoritative the critique will be seen to be.

I should add that I absolutely agree with Erin that the headlines our articles are sometimes given are bad, misleading and occasionally sensationalist. I’ve discussed this too with some of my colleagues recently, and I agree that we writers have to take some responsibility for it, challenging our editors when it happens. It’s not always a clear-cut issue: I’ve received occasional moans from scientists and others about a headline that didn’t quite get the right nuance, but which I thought wasn’t so bad, and so I’m not inclined to start badgering folks about that. (I wouldn’t have used the headline that Nature gave my quantum teleportation piece, but hey.) But I think magazines and other outlets have to be open to this sort of feedback – I was disheartened to find that one I challenged recently was not. (I should say that others are – Prospect has always been particularly good at making changes if I feel the headlines for my online pieces convey the wrong message.) As Chris Chambers has rightly tweeted, we’re all responsible for this stuff: writers, editors, scientists. So we need to work together – which also means standing up to one another when necessary, rather than simply not talking.