Here’s my latest Muse for Nature News.
_________________________________________________________
Scientific reputations emerge collectively. But does that guarantee that fame rests on merit?
Does everyone in science get the recognition they deserve? Well obviously, your work hasn’t been sufficiently appreciated by your peers, but what about everyone else? Yes, I know he is vastly over-rated, and it’s a mystery why she gets invited to give so many keynote lectures, but that aside – is science a meritocracy?
How would you judge? Reputation is often a word-of-mouth affair; grants, awards and prizes offer a rather more concrete measure of success. But increasingly, scientific excellence is measured by citation statistics, not least by the ubiquitous h-index [1], which seeks to quantify the impact of your total oeuvre. Do all or any of these things truly reflect the worth of one’s scientific output?
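For what it's worth, the h-index has a simple definition: it is the largest number h such that h of your papers have each been cited at least h times [1]. Here is a minimal sketch of the calculation in Python; the citation counts are invented purely for illustration:

    def h_index(citations):
        """Largest h such that h papers have at least h citations each."""
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Illustrative, made-up citation counts for one author's papers
    print(h_index([52, 31, 20, 9, 6, 6, 3, 1, 0]))  # prints 6

An author with nine papers cited as above has an h-index of 6 (six papers with at least six citations each); whether that single number captures anything deep about the work is, of course, the question.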
Many would probably say: sort of. Most good work gets recognized eventually, and most Nobel prizes are applauded and deemed long overdue, rather than denounced as undeserved. But not always. Sometimes important work doesn’t get noticed in the author’s lifetime, and it’s a fair bet that some never comes to light at all. There’s surely an element of luck in how reputations become established.
A new paper in PLoS ONE by Santo Fortunato of the Institute for Scientific Interchange in Turin, Italy, Dirk Helbing of ETH in Zurich, Switzerland, and coworkers aims to shed some light on the mechanism by which citations are accrued [2]. They have found that some landmark papers of Nobel laureates quite quickly give their authors a sudden boost in citation rate – and that this boost extends to the author’s earlier papers too, even if they were in unrelated areas.
For example, citations to a pivotal 1989 paper by chemistry Nobel laureate John Fenn on electrospray ionization mass spectrometry [3] took off exponentially, and the surge also lifted the citation rates of at least six of Fenn’s older papers. These peaks in citation rate stand out remarkably clearly for several laureates (some of whom have more than one peak), and might be a useful indicator both of important breakthroughs and of scientific performance.
This behaviour could seem reassuring or disturbing, depending on your inclination. On the one hand, some of these researchers were not particularly well known before they published their landmark papers – and yet the value of the work does seem to have been recognized, overcoming the rich-get-richer effect by which those already famous tend more easily to accrue more fame [4]. This boost could help innovative new ideas to take root. On the other hand, such a rise to prominence brings a new rich-get-richer effect, for it awards ‘unearned’ citations to the researcher’s other papers.
And the findings seem to imply that citations are sometimes selected not because they are necessarily the best or most appropriate but to capitalize on the prestige and presumed authority of the person cited. This further distorts a picture that already contains a rich-get-richer element among citations themselves. An earlier analysis suggested that some citations become common largely by chance, benefitting from a feedback effect in which they are chosen simply because others have chosen them before [5].
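A toy model makes that feedback easy to see. This is not the analysis of ref. [5], just a bare-bones 'copying' sketch with arbitrary numbers: each new citation either picks a paper at random or, more often, imitates a citation that someone else has already made, so a few early-cited papers snowball largely by chance.

    import random

    def simulate_citations(n_papers=100, n_citations=5000, copy_prob=0.9, seed=1):
        """Toy rich-get-richer model: each new citation copies an earlier one with
        probability copy_prob, otherwise it picks a paper uniformly at random."""
        rng = random.Random(seed)
        counts = [0] * n_papers
        past_citations = []                          # every citation made so far
        for _ in range(n_citations):
            if past_citations and rng.random() < copy_prob:
                paper = rng.choice(past_citations)   # imitate an earlier choice
            else:
                paper = rng.randrange(n_papers)      # independent pick, blind to merit
            counts[paper] += 1
            past_citations.append(paper)
        return counts

    print(sorted(simulate_citations(), reverse=True)[:5])  # a handful of papers dominate

All the papers here are identical by construction, yet the final citation counts are wildly unequal: the inequality comes entirely from the feedback, not from any difference in quality.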
But at root, what this finding underscores is that science is a social enterprise, with all the consequent quirks and nonlinearities. That has potential advantages, but also drawbacks. In an ideal world, every researcher would reach an independent judgement about the value of a paper or a body of work, and the sum of these judgements would then reflect something fundamental about its worth.
That, however, is no longer an option, not least because there is simply too much to read – no one can hope to keep up with all that happens in their field, let alone in related ones. As a result, the scientific community must act as a collective search engine that hopefully alights on the most promising material. The question is whether this social network is harnessed efficiently, avoiding blind alleys while not overlooking gems.
No one really knows the answer to that. But some social-science studies highlight the possible consequences. For example, it seems that selections made ostensibly on merit are somewhat capricious when others’ choices are taken into account: objectively ‘good’ and ‘bad’ material still tends on average to be seen as such, but feedbacks can create a degree of randomness in what succeeds and fails [6]. Doubtless the same effects operate in the political sphere – so that democracy is a somewhat compromised meritocracy – and also in economics, which is why prices frequently deviate from their ‘fundamental’ value.
But Helbing suggests that there is probably an optimal balance between independence and group-think. A computer model of people exiting a crowded room in an emergency shows that it empties most efficiently when there is just the right amount of follow-the-crowd herding [7]. Are scientific reputations forged in this optimal regime? And if not, what would it take to engineer more wisdom into this particular crowd?
References
1. Hirsch, J. E. Proc. Natl Acad. Sci. USA 102, 16569-16572 (2005).
2. Mazloumian, A., Eom, Y.-H., Helbing, D., Lozano, S. & Fortunato, S. PLoS ONE 6(5), e18975 (2011).
3. Fenn, J. B., Mann, M., Meng, C. K., Wong, S. F. & Whitehouse, C. M. Science 246, 64-71 (1989).
4. Merton, R. K. Science 159, 56-63 (1968).
5. Simkin, M. V. & Roychowdhury, V. P. Ann. Improb. Res. 11, 24-27 (2005).
6. Salganik, M. J., Dodds, P. S. & Watts, D. J. Science 311, 854-856 (2006).
7. Helbing, D., Farkas, I. & Vicsek, T. Nature 407, 487-490 (2000).