David Wootton has sent me some responses to the accusations made by some of the reviewers of his book The Invention of Science, including me in Nature and Steven Poole in New Statesman, that he somewhat over-eggs the “science wars”/relativism arguments. Some other reviewers have suggested that these polemical sections of the book are referring to an academic turf war that doesn’t need to be awarded so much space here. In her review in the Guardian, Lorraine Daston commented that this material is “unlikely to be of interest to readers who are not historians of science over the age of 50.” Well, I plead guilty to the second at least, and so perhaps it isn’t surprising that those chapters most certainly were of interest to me. I might not agree with all of David’s arguments in the book, but I was very happy to see them. It is a discussion that still needs to happen, not least because “histories of science” like Steven Weinberg’s To Explain the World are still being put out into the public arena.
For that reason too, I’m delighted to post David’s responses here. I don’t exactly disagree with anything he says; I think the issues are at least partly a matter of interpretation. For example, in my review I commented that Steven Shapin and Simon Schaffer’s influential Leviathan and the Air-Pump (1985) doesn’t to my eye offer the kind of “hard relativist” perspective that David seems to find in it. In my original draft of the book review, I also said the same about David’s comments on Simon Schaffer’s article on prisms:
“I see no reason to believe, for example, that Schaffer really questions Newton’s compound theory of white light in his 1989 essay on prisms and the experimentum crucis, but just that he doubts the persuasiveness of Newton’s own experimental evidence.”
David seemed to say that Simon’s comments even implied he had doubts about the modern theory of optics and additive mixing; I can’t find grounds for reaching that conclusion. In my conversations with Simon, I have never had the slightest impression that he doesn’t regard science as a system of thought that offers a progressively more reliable description of the world. If he thinks it is no truer than witchcraft, he hides it extraordinarily well.
As further evidence of S&S’s relativism, David quotes from Leviathan and the Air-Pump, which, he says, maintains that the success of experimental science depended on its proponents’ “political success ... in insinuating themselves into the activities of other institutions and other interest groups. He who has the most, and the most powerful, allies wins.” When I first read this (in preparing my book Curiosity), it never once occurred to me that S&S meant it as some kind of statement to the effect that we only think Boyle’s law is correct because Boyle was more politically astute than his opponents. I took it to mean that Boyle was able to gain rapid acceptance of his ideas because he was politically well situated (central to the Royal Society, for example) and canny with his rhetoric. It seemed to me that the reception of scientific ideas when they first appear surely is, both then and now, conditioned by social factors. It surely is the case that some such ideas, though they might indeed now be revealed as superior to the alternatives, were more quickly taken up at the time not just (or even) because they were more convincing or better supported by evidence but because of the way their advocates were able to corner the market or rewrite the discourse in their favour. Lavoisier’s “new chemistry” is the obvious example. Indeed, David recognizes these social aspects of scientific debate in his book, which is one of its many strengths. I certainly don’t think Simon would argue that scientific ideas might then stay fixed for hundreds of years simply because their initial proponents gained the upper hand in the cut and thrust of debate.
David says that Steven Shapin does betray an affiliation to extreme relativism, however – and he cites as evidence Shapin’s comment in his (unsurprisingly damning) review of the Weinberg book:
“Science remains almost unique in that respect. It’s modernity’s reality-defining enterprise, a pattern of proper knowledge and of right thinking, in the same way that—though Mr. Weinberg will hate the allusion—Christian religion once defined what the world was like and what proper knowledge should be.”
This is a complicated claim, and I would like to know more about what Shapin meant by it. Perhaps I will ask him. I can see why David might interpret it as a statement to the effect that the scientific theory of the origin of the universe is no more “true” than the account given in Genesis. And I think he is right to point out that Shapin should be alert to the possibility of that interpretation. But I think one can also interpret the remark as saying that we should be as wary of scientism – the idea that the only knowledge that counts as proper knowledge is scientific – as we should be of the doctrinaire Christianity that once pervaded Western thought, which was once the jury before which all ideas were tried. Christian theology was certainly regarded at times as a superior arbiter to pre-scientific rationalism in efforts to understand the universe – for example in the 1277 Condemnation that pitched Aristotelian natural philosophy against the Church. But just as Christianity was finally compelled to stay within the proper limits of its authority (in most parts of the civilized Christian world, if not perhaps Kansas), so should we make sure that science does so: it is the best method we have for understanding the physical world, but not the yardstick for all “proper knowledge”. I hope this is what Shapin means, but I confess that I cannot be sure.
The real problem here – and it is one that David rightly complains about – is not so much excessive relativism in the academic study of the history of science, but what he calls a conspiracy of silence within that discipline. It seems to have become taboo to say that scientific knowledge goes through a reliability filter that makes it rather dependable, predictive and amenable to improvement – even if you believe that to be the case. As a historian of science, David must be regularly faced with disapproving frowns and tuts if he wishes to express value judgements about scientific ideas, because this seems to have become bad form and now to be rather rigidly policed in some quarters.
I have experienced this myself, when a publisher’s reviewer of my book Invisible evidently felt it his/her duty to scour it for the slightest taint of presentism – and, when he/she decided it had been detected, to reel off what was obviously a pre-prepared little spiel to that effect. For example, I was sternly told that
“Hooke and Leeuwenhoek did not "in fact" see "single-celled organisms called protozoa". They also did not drive modern cars, neither did they long for a new iphone.”
This is of course just silly (not to say rather incoherent) academic Gotcha-style point-scoring. What I wrote was “It was Leeuwenhoek’s discoveries of invisibly small ‘animals’ – he was in fact seeing large bacteria and single-celled organisms called protozoa – in 1676…” Outrageous, huh?
Then I got some nonsense about "Great Men" histories because I had the temerity to mention that Pasteur and Koch did some important work on germ theory. The reviewer’s terror of making what his/her colleagues would regard as a disciplinary faux pas seems to be preventing him/her from being able to actually tell any history.
The situation in that case became clear enough when the reviewer finally complained that it was hard to judge my argument because what he/she needed was “a clear statement of the author's intent and theoretical position” – followed by “rewriting the whole text in such a way that the author clearly articulates his chosen positions throughout.” To which I’m afraid I replied: “What is my “theoretical position”? It’s in the text, not in some badge that I choose to display at the outset. The persistent misreading of the text to force it into one camp or another [and the cognitive dissonance evident when it doesn’t quite fit] seems to highlight a pretty serious problem with the academic approach, for all that I benefit from it shamelessly.”
So perhaps David will understand (I suspect he does already) that I have considerable sympathy with his predicament. I just wonder if his frustration (like mine) leaked out a little too much. I don’t know if he is right to say that “The [Oxford] faculty, as a group of professional historians, feels it must ward off anyone interested in studying science as a project that succeeds and makes progress, and at the same time encourage anyone who wants to study science as a purely social enterprise” – and if he is, that doesn’t seem terribly healthy. But the job advert he quotes doesn’t seem to me to deny the possibility of progress, but simply to point out that the primary job of the historian is not to sift the past for nuggets of the present.
Which of course brings me to Weinberg. He apparently wants to reshape the history of science, although his response to critics in the NYRB makes me more sympathetic to the sincerity, if not to the value, of his programme. I wonder if we might get a little clearer about the issues here by considering how one might wish to, say, write about medieval and early modern witchcraft. I wonder if what David sees as an unconscionable silence from historians on the veracity and validity of witchcraft is more a matter of historians thinking that, in the 21st century, one should not feel obliged to begin a paper or a book with a statement along the lines of
“I must point out that witchcraft is not a very effective way to understand the world, and if you wish to make a flying device, you will be far better advised to use the modern theory of fluid mechanics.”
On the other hand, if said author were to be quizzed along the lines of “But does witchcraft make broomsticks fly?”, it would be intellectually feeble, indeed derelict, to respond “That’s not the issue I am addressing, and I do not propose to comment on it.” David implies that this happens; I suspect he is right, though I do not know how often. There doesn’t seem to be anything sacrificed by saying instead something like: “Of course, witchcraft will not summon demons and make people fly. Now let me get on with talking about it.”
The Weinberg position, on the other hand, seems to be along the lines of “By all means study witchcraft as history, if you like, but as far as science is concerned we should make it absolutely clear that it was just superstitious nonsense that got in the way of true progress.” To which, of course, the historian might want to say “But Robert Boyle believed that demons exist and could be summoned!” The Weinbergian (I don’t want to put words into his own mouth) might respond, “Well Boyle wasn’t perfect and he believed some pretty daft things – like alchemical transmutation.”
And at that point I say “You really don’t give a toss what Robert Boyle thought, do you? You just want to mark his homework.” But I do give a toss, and not just because Boyle was an interesting thinker, or because I don’t have any illusion that we are smarter today than people were in the seventeenth century. I want to take seriously what Boyle thought and why, because it is a part of how ideas have developed, and because I don’t believe the history of science was a process of gradually shaking off delusions and misapprehensions and refining our rationality. It is much messier than that, now and always. If your starting position in assessing Boyle’s belief in demons and alchemy is that he was sometimes a bit gullible and deluded, then you are simply not going to get much of a grasp of what or how he thought. (Boyle was somewhat gullible when it came to alchemical charlatans, but his belief in transmutation wasn’t a part of that credulity.)
My own position is more along the lines of “It’s interesting that people once believed in witchcraft. I wonder what sustained that belief, and how it interacted with emerging ideas about science?” I am not being disingenuous if I say that I am inevitably a naïve reader of Shapin, Schaffer, Daston, Fara, and indeed David Wootton. But I find this same spirit in all of their books, and that’s what I appreciate in them.
Comments from David Wootton
A number of the reviews of The Invention of Science have expressed puzzlement that my book opens and closes with extensive historiographical, methodological, and philosophical discussions. Why not just leave all that stuff out? The charge is that I am refighting the Science Wars of the 1990s when everyone else has moved on. I understand why people would think this, but, with respect, I think they are wrong. Let’s break down the issues as follows:
1) Are relativists still confident that they speak for the history of science profession? Yes they are. See for example Steven Shapin’s breathtaking review of Steven Weinberg in the Wall Street Journal, where Shapin actually presents belief in science as being strictly comparable to belief in Christianity (http://goo.gl/qULelt) [1]. Or see Shapin and Schaffer’s introduction to the anniversary edition of Leviathan and the Air-Pump (2011). Or see Peter Dear’s “Historiography of Not-So-Recent Science”, History of Science 50 (2012), 197-211 (“we are all post-modernists now”).
2) Are students still taught from relativist textbooks? Yes they are. The key textbooks are Shapin’s The Scientific Revolution (1996; now translated into seventeen languages); Peter Dear’s Revolutionizing the Sciences (2001, revised in 2009); John Henry’s The Scientific Revolution (1997, with later revisions). This may change – there is Principe’s Very Short Introduction (2011), for example – but it hasn’t changed yet.
3) Has the profession moved on? Rather than moving on, it has decided to pretend the Science Wars never happened, and as a consequence it is stuck in a rut, incapable of generating a new account of what was happening in science in the early modern period. To quote Lorraine Daston’s 2009 essay on the present state of the discipline (http://goo.gl/rMEAiy), what historians have produced is “a swarm of microhistories ... archivally based and narrated in exquisite detail.” These microhistories, as she herself acknowledges, do not enable one to put together a bigger picture. The resulting confusion is embodied, for example, in David Knight’s Voyaging in Strange Seas: the Great Revolution in Science (Yale, 2014).
4) Are the relativists more moderate than I maintain? Philip Ball thinks I and the authors of Leviathan and the Air-Pump have more in common than I imagine. I doubt Shapin and Schaffer will think so, and I suggest Philip rereads p. 342 of that book, which maintains that the success of experimental science depended on its proponents’ “political success ... in insinuating themselves into the activities of other institutions and other interest groups. He who has the most, and the most powerful, allies wins.” In this sort of story the evidence counts for nothing – indeed, the strong programme insists that the evidence must count for nothing (and note the introduction of the strong programme’s key principle of symmetry on p. 5) [2].
5) Can you separate methodology and historiography from substantive history? It’s very difficult to do so, because your methodology and the previous history of your discipline shape the questions you ask and the answers you give. Thus relativist historiography has privileged controversy studies (http://goo.gl/uVfxFF), and simply ignored cases where new scientific claims have been accepted without dispute. Indeed, if the Duhem-Quine thesis were right, there are always grounds for dispute when new evidence is presented. I don’t see how one can discuss the collapse of Ptolemaic astronomy in the years immediately after 1610 without acknowledging that this is an event which has been invisible to previous historians because they have been unwilling to acknowledge that an empirical fact (the phases of Venus) could be decisive in determining the fate of a well-established theory — in a case like this it is not the evidence that is new, but the questions that are being asked of it, and these are inseparable from issues of methodology and historiography [3].
6) The Economist thinks I have a disagreement with a few “callow” relativists. Odd that these insignificant people hold chairs at Harvard, Cambridge, Oxford, Edinburgh, Cornell. But there is a much bigger point here: a fundamental claim made by my opponents is that historians are committed, in principle, to treating bad and good knowledge identically. The historical profession tends to agree with them (see for example Gordon Wood’s NYRB essay on medicine in the American Revolution, http://goo.gl/ZoFuMu: “The problem is most historians are relativists”).
The consequences are apparent in the Cambridge History of Science, vol. 3, ed. Park and Daston (2006), which contains a twenty page chapter on “Coffee Houses and Print Shops” (as part of a two hundred page section on “Personae and Sites of Natural Knowledge”) and others equally long on “Astrology” and “Magic” (Astrology gets twenty pages while Astronomy gets thirty), but, despite being 850 pages long, contains no extended discussion of Digges, Stevin, Gilbert, or Pascal, nothing on magnets, and only two pages on vacuum experiments [4].
It is also apparent in Oxford University’s recent (April 2015) advertisement for its Chair in the History of Science which stated: “The professor will share the faculty’s vision of the scope of the history of science, which is less focused on the history of scientific truth and more interested in reconstructing the practices of science, and the claims to science-based authority within given societies at given times” [5]. The Oxford Faculty of History does not declare its vision of the scope of the discipline when advertising its chair in, say, military history. But the history of science is different. The faculty, as a group of professional historians, feels it must ward off anyone interested in studying science as a project that succeeds and makes progress, and at the same time encourage anyone who wants to study science as a purely social enterprise. What interests them is not scientific knowledge but the authority claimed by “scientists” — be they alchemists or phrenologists. What’s at stake here is not just the history of science, but also the claim, made over and over again by historians, that the past must be studied solely in its own terms — an approach which may lead to understanding, but cannot lead to explanation. So historians of witchcraft report encounters with devils as if the devils were real — and never ask what’s really going on.
7) What is science? I was dismayed to discover that students in my own university were being taught (by someone with a new PhD in history of science from a prestigious institution) that there was no such thing as science in the seventeenth century. But this, after all, is what Henry’s textbook says, and Dear in his 2012 review essay confidently asserts: “specialist historians seem increasingly agreed that science as we now know it is an endeavour born of the nineteenth century.” On her university website one distinguished historian of science is described thus: “Paula Findlen teaches history of science before it was ‘science’ (which is, after all, a nineteenth-century word).” (http://web.stanford.edu/dept/HPS/findlen.html, accessed 7 Dec 2015). How have we got to the point where it appears to make sense to claim that “science” is a nineteenth-century word? Because Newton, we are told, was not a scientist (which indeed is a nineteenth-century word) but a philosopher. Even if one charitably rephrases Findlen’s statement (or the statement made on her behalf) to read “‘science’ as we currently use the term is a nineteenth-century concept” it would be wrong unless, by a circular argument, one insists that earlier usages of the word can’t possibly have meant by science what we mean by science. The whole point of my book is to show that by the end of the seventeenth century “science” (as Dryden called it) really was science as we understand the term. To unpick the misconception that there was no science in the seventeenth century you have to look at the history of words like “science” and “scientist” (noting, for example, the founding of the French Académie des Sciences in 1666), but also at an historiographical tradition which has insisted that what we think of as science is just a temporary and arbitrary social practice, like metaphysical poetry or Methodism, not an enduring and self-sustaining body of reliable knowledge.
8) What would have happened if I had left out the methodological and historiographical debates? I tried the alternative approach, of writing in layperson’s terms for commonsensical people, first. Just look at how my book Bad Medicine was treated by Steven Shapin, in the pages of the London Review of Books: http://goo.gl/aA67fr! The book was a success in that lots of people read it and liked it, many of them doctors (see www.badmedicine.co.uk); but historians of medicine brushed it off. So this time I have felt obliged to address the core arguments which supposedly justify ignoring progress — the arguments that have bamboozled the profession for the last fifty years — in the hope of being taken a little more seriously, not by sensible people (who can’t understand why I don’t just cut to the chase), but by the professionals who think that the history of science is like cardiac surgery — not something “the laity” (Shapin’s peculiar term) can possibly participate in, understand, or criticise, but something for the professionals alone. In trying to address this new clerisy I have evidently tried the patience of some of my more sensible, level-headed readers. That’s unfortunate and a matter of considerable regret: but if the way in which history of science is taught in the universities is to change, someone must take on the experts on their own ground, and someone must question the notion that the history of science ought not to concern itself with (amongst much else) the history of scientific truth. By all means skip the beginning and concluding chapters if you have no interest in how the history of science (and history more generally) is taught; but please read them carefully if you do.
Notes
[1] There is a paywall: to surmount it google “Why Scientists Shouldn’t Write History” and click on the first link. For a discussion see http://goo.gl/VYNVhX. I am grateful to Philip Ball for acknowledging that my book is very different in character from Weinberg’s, which saves me from having to stress the point.
[2] Patricia Fara thinks that social constructivism is “the idea that what people believe to be true is affected by their cultural context.” If that were the case then we would all be social constructivists and I really would be arguing with a straw person. But of course it isn’t, as I show over and over again in my book. It is, rather, the claim (made by her Cambridge colleague Andrew Cunningham) that science is “a human activity, wholly a human activity, and nothing but a human activity” — in other words that it is socially constituted, not merely socially influenced (the model for such an argument being, of course, Durkheim on religion). The consequence of this, constructivists rightly hold, is epistemological egalitarianism — any particular belief is to be regarded as being just as good as any other.
[3] Take for example William Donahue’s discussion of Galileo and the phases of Venus in Park and Daston, 585: “He argued... that this phenomenon was inconsistent with the Ptolemaic arrangement of the planets...” Galileo and his contemporaries understood perfectly well that Galileo had proved the Ptolemaic arrangements of the planets could not be right — the whole impact of Galileo’s discovery is lost by reducing it to a mere argument. Indeed Donahue does not acknowledge that it had any impact, while I show that the impact is measurable by counting editions of Sacrobosco.
[4] A colleague of mine unkindly calls this the Polo history: Polo Mints, to quote Wikipedia, “are a brand of mints whose defining feature is the hole in the middle.”
[5] The text is no longer on the Oxford University website, but can still be found, for example, at http://goo.gl/KOY05f (accessed 7 Dec 2015).
Friday, December 18, 2015
Thursday, December 03, 2015
Can science be made to work better?
Here is a longer version of the leader that I wrote for Nature this week.
_______________________________________________________________________
Suppose you’re seeking to develop a technique for transferring proteins from a gel to a plastic substrate for easier analysis. Useful, maybe – but will you gain much kudos for it? Will it enhance the reputation of your department? One of the sobering findings of last year’s survey of the 100 most cited papers on the Web of Science (Nature 514, 550; 2014) was how many of them reported such apparently mundane methodological research (this one was number six).
Not all prosaic work reaches such bibliometric heights, but that doesn’t deny its value. Overcoming the hurdles of nanoparticle drug delivery, for example, requires the painstaking characterization of pathways and rates of breakdown and loss in the body: work that is not just unglamorous but probably unpublishable. One can cite comparable demands of detail for getting just about any bright idea to work in practice – but it’s the initial idea, not the hard grind, that garners the praise and citations.
An aversion to routine yet essential legwork seems at face value to be quite the opposite of the conclusions of a new study on how scientists pick their research topics. This analysis of discovery and innovation in biochemistry (A. Rzhetsky et al., Proc. Natl Acad. Sci. USA 112, 14569; 2015) finds that, in this field at least, choices of research problems are becoming more conservative and risk-averse. The results suggest that this trend over the past 30 years is quite the reverse of what is needed to make scientific discovery efficient.
But these problems – avoidance of both risk and drudge – are just opposite sides of the same coin. They reflect the fact that scientific norms, institutions and reward structures increasingly force researchers to aim at a “sweet spot” that will maximize their career prospects: work that is novel enough to be publishable but orthodox enough not to alarm or offend referees. That situation is surely driven in large degree by the importance attached to citation indices, as well as by the insistence of grant agencies that the short-term impact of the work can be defined in advance.
One might quibble with the necessarily crude measures of research strategy and knowledge generation employed in the PNAS study. But its general conclusion – that current norms discourage risk and therefore slow down scientific advance, and that the problem is worsening – rings true. It’s equally concerning that the incentives for boring but essential collection of fine-grained data to solve a specific problem are vanishing in a publish-or-perish culture.
A fashionably despairing cry of “Science is broken!” is not the way forward. The wider virtue of Rzhetsky et al.’s study is that it floats the notion of tuning practices and institutions to accelerate the process of scientific discovery. The researchers conclude, for example, that publication of experimental failures would assist this goal by avoiding wasteful repetition. Journals chasing impact factors might not welcome that, but they are no longer the sole repositories of scientific findings. Rzhetsky et al. also suggest some shifts in institutional structures that might help promote riskier but potentially more groundbreaking research – for example, spreading both risk and credit among teams or organizations, as used to be common at Bell Labs.
The danger is that efforts to streamline discovery simply become codified into another set of guidelines and procedures, creating yet more hoops that grant applicants have to jump through. If there’s one thing science needs less of, it is top-down management. A first step would be to recognize the message that research on complex systems has emphasized over the past decade or so: efficiencies are far more likely to come from the bottom up. The aim is to design systems with basic rules of engagement for participating agents that best enable an optimal state to emerge. Such principles typically confer adaptability, diversity, and robustness. There could be a wider mix of grant sources and sizes, say, less rigid disciplinary boundaries, and an acceptance that citation records are not the only measure of worth.
But perhaps more than anything, the current narrowing of objectives, opportunities and strategies in science reflects an erosion of trust. Obsessive focus on “impact” and regular scrutiny of young (and not so young) researchers’ bibliometric data betray a lack of trust that would have sunk many discoveries and discoverers of the past. Bibliometrics might sometimes be hard to avoid as a first-pass filter for appointments (Nature 527, 279; 2015), but a steady stream of publications is not the only or even the best measure of potential.
Attempts to tackle these widely acknowledged problems are typically little more than a timid rearranging of deckchairs. Partly that’s because they are seen as someone else’s problem: the culprits are never the complainants, but the referees, grant agencies and tenure committees who oppress them. Yet oddly enough, these obstructive folk are, almost without exception, scientists too (or at least, once were).
It’s everyone’s problem. Given the global challenges that science now faces, inefficiencies can exact a huge price. It is time to get serious about oiling the gears.
_______________________________________________________________________