Anxiety about the e-future – and in particular who it is going to make redundant – seems suddenly to be bursting out of every corner of the media. There was David Byrne in the Guardian Saturday Review recently worrying that Spotify and Pandora are going to eliminate the income stream for new musicians. Will Self, reviewing film critic Mark Kermode’s new book in the same supplement, talked about the ‘Gutenberg minds’ like Kermode (and himself) who are in denial about how “our art forms and our criticisms of those art forms will soon belong only to the academy and the museum” – digital media are not only undermining the role of the professional critic but changing the whole nature of what criticism is. Then we have Dave Eggers’ new novel The Circle, a satire on the Google/Facebook/Amazon/Apple takeover of everything and the tyranny of social media. Meanwhile, Jonathan Franzen rails against the media dumbing-down of serious discourse about anything, anywhere, as attention spans shrink to nano-tweet dimensions.
Well, me, I haven’t a clue. I’m tempted to say that this is all a bit drummed up and Greenfield-esque, and that I don’t see those traits in my kids, but then, they are my kids and cruelly deprived of iPads and iPhones, and in any case are only 3 and 8. To say any such thing is surely to invite my words to come back in ten years’ time and sneer at my naivety. I’ve not the slightest doubt that I’m in any case wedded to all kinds of moribund forms, from the album (of the long-playing variety) to the fully punctuated text to the over-stuffed bookshelf.
But not unrelated to this issue is Philip Hensher’s spat with an academic over his refusal to write an unpaid introduction to an academic text. Hensher’s claim that it is becoming harder for authors to make a living and to have any expectation of getting paid (or paid in any significant way) for much of what they have to do is at least partly a concern from the same stable as Byrne’s – that we now have a culture that is used to getting words, music and images for next to nothing, and there is no money left for the artists.
They’re not wrong. Literary festivals are something that many authors are becoming increasingly fed up with, as the Guardian article on Hensher acknowledges. Personally I almost always enjoy literary festivals, and will gladly do them if it’s feasible for my schedule. The Hay Festival, which Guy Walters grumbles about, is one of the best – always fun (if usually muddy), something the family can come to, and a week’s worth of complimentary tickets seems considerable compensation for the lack of a fee. (And yes, six bottles of wine – but at least they’re from Berry Bros, and many literary festivals don’t even offer that.) But I’m also conscious that for middling-to-lower-list writers like me, it is extremely hard to say no to these things even if we want to. There’s the fact that publishers would be ‘disappointed’ and probably in the end disgruntled. But above all, there’s the sad egotistic fear that failing to appear, or even to be invited, means that you’re slipping closer to the edge of the ‘literary community’. I suspect that this fear, more than anything, is what has allowed literary festivals to proliferate so astonishingly. Well, and the fact that I’m probably not alone in being very easily satisfied (which might be essentially the same as saying that if you’re not a big name, you’re not hard to flatter). Being put up in that lovely country house hotel in Cumbria and given an evening meal has always seemed to me perfectly adequate remuneration for talking at the Words by the Water Festival (ah, so kind of you to ask again, yes I’d love to…).
But the Cambridge professor calling Hensher “priggish and ungracious” for refusing to write for free is another matter. Hensher was in fact far more gracious in response than he had any reason to be. When I am asked, as I regularly am, to give up a day’s work to travel to give a talk at some academic institution (“we will of course pay your travelling costs”), I generally consider it to be a reflection of the fact that (i) academic departments simply don’t have a budget for paying speakers, and (ii) academics can very easily forget that, whereas they draw their salary while attending conferences and delivering seminars, writers don’t have a salary except for (sometimes) when they write. And so I often go and do it anyway, if I like the folks who have invited me, and/or think it will be interesting. Apart from anything else, it is good to get out and meet people. Same with unpaid writing, of which I could do a fair bit if I agreed to it: I’ll contribute an article to a special issue or edited volume if I feel it would be interesting to do so, but it is rare indeed that there will be any acknowledgement that, unlike an academic, I’d then be working for free. But for a writer to be called ‘ungracious’ for refusing an ‘invitation’ to do such unpaid work is pretty despicable.
Thursday, October 24, 2013
Tuesday, October 22, 2013
Before small worlds
Here is my latest piece for BBC Future. I have also posted a little comment on the work on a YouTube channel that I am in the process of creating: see here. It’s an experiment, about which I will say more later.
____________________________________________________________
“Everyone on this planet is separated by only six other people”, claims a character in John Guare’s 1990 play Six Degrees of Separation, which provided us with the defining image of our social networks. “It’s a small world”, we say when we meet someone at a party who turns out to share a mutual friend. And it really is: the average number of links connecting you to any other random person might not be exactly six – it depends on how you define links, for one thing – but it is a small number of about that size.
But has it always been this way? It’s tempting to think so. Jazz musicians in the early 20th century were united by barely three degrees of separation. Much further back, scientists in the seventeenth century maintained a dense social network via letters, as did humanist scholars of the Renaissance. But those were specialized groups. Intellectual and aristocratic elites in history might have all known one another, but was it a small world for ordinary folk too, when mail deliveries and road travel were hard and dangerous and many people were illiterate anyway? That’s what networks expert Mark Newman of the University of Michigan at Ann Arbor and his coworkers have set out to establish.
The modern understanding of small-world social networks has come largely from direct experiments. Guare took his idea from experiments conducted in the late 1960s by social scientist Stanley Milgram of Harvard University and his coworkers. In one study they attempted to get letters to a Boston stockbroker by sending them to random people in Omaha, Nebraska, bearing only the addressee’s name and profession and the fact that he worked in Boston. Those who received the letter were asked to forward it to anyone they knew who might be better placed to help it on its way.
Most of the letters didn’t arrive at all. But of those that did, an average of only six journeys were needed to get them there. A much larger-scale re-run of the experiment in 2003 using email forwarding found an almost identical result: the average ‘chain length’ for messages delivered to the target was between 5 and 7 [P. S. Dodds, R. Muhamad & D. J. Watts, Science 301, 827 (2003)].
Needless to say, it’s not possible to conduct such epistolary experiments for former ages. But there are other ways to figure out what human social networks in history looked like. These networks don’t only spread news, information and rumour, but also things that are decidedly less welcome, such as disease. Many diseases are passed between individuals by direct, sometimes intimate contact, and so the spread of an epidemic can reflect the web of human contacts on which it happens.
This is in fact one of the prime motivations for mapping out human contact networks. Epidemiologists now understand that the structure of the network – whether it is a small world or not, say – can have a profound effect on the way a disease spreads. For some types of small world, infectious diseases can pervade the entire population no matter how small the chance of infection is, and can be very hard to root out entirely once this happens. Some computer viruses are like this, lurking indefinitely on a few computers somewhere in the world.
Newman and colleagues admit that networks of physical contact, which spread disease, are not the same as networks of social contact: you can infect people you don’t know. But in earlier times most human interactions were conducted face to face, and in relatively small communities people rarely saw someone they didn’t recognize.
The fact that diseases spread relatively slowly in the pre-industrial world already suggests that it was not a small world. For example, it took at least three years for the Black Death to spread through Europe, Scandinavia and Russia in the 14th century, beginning in the Levant and the Mediterranean ports.
However, network researchers have discovered that it takes only a very small number of ‘long-distance’ links to turn a ‘large world’ network, such as a grid in which each individual is connected only to their nearby neighbours, into a small world.
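Just how few long-distance links are needed is easy to see with the standard Watts–Strogatz ‘rewiring’ model. The following is an illustrative sketch in Python using the networkx library (my own toy example, not a calculation from the paper): start with a ring of 1,000 people, each linked only to their six nearest neighbours, then rewire a small fraction of those links to randomly chosen targets anywhere on the ring, and watch the average degrees of separation collapse.

    import networkx as nx

    n, k = 1000, 6  # 1,000 people on a ring, each linked to their 6 nearest neighbours

    for p in (0.0, 0.01, 0.1):  # fraction of links rewired into random long-range shortcuts
        # retries the random rewiring until the network comes out connected
        G = nx.connected_watts_strogatz_graph(n, k, p, seed=42)
        L = nx.average_shortest_path_length(G)
        print(f"rewiring probability {p}: average degrees of separation = {L:.1f}")

With no rewiring the average separation runs to dozens of steps; rewiring even one link in a hundred typically brings it down to around ten, and one in ten to single figures.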
Newman and colleagues have used this well-documented spread of the Black Death to figure out what the underlying network of physical contacts looked like. The disease was spread both by direct person-to-person transmission of the pathogenic bacterium and by being carried by rats and fleas. But neither rats nor fleas travel far unless carried by humans, for example on the ships that arrived at the European ports. So transmission reflects the nature of human mobility and contact.
The researchers argue that the crucial point is not how quickly or slowly the disease spread, but what the pattern was like. It moved through the Western world rather like an ink blot spreading across a map of Europe: a steady advance of the ‘disease front’. The researchers’ computer simulations and calculations show that this is possible only if the typical path length linking two people in the network is long: if it’s not a small world. If there were enough long-range links to produce a small world, then the pattern would look quite different: not an expanding ‘stain’ but a blotchy spread in which new outbreaks get seeded far from the origin of the infection.
[Figure: A, spreading of an infectious disease in a "large world"; B, spreading in a "small world".]
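To see the difference the two panels are getting at, here is a toy spreading model (again my own illustration in Python, not the researchers’ simulation): seed a simple infection at one point on a ring-shaped network and track, step by step, how far around the ring it has reached. With purely local links the outbreak creeps outward like the ink blot; with a sprinkling of long-range shortcuts it soon seeds new outbreaks far from the origin.

    import random
    import networkx as nx

    def ring_distance(u, v, n):
        """Spatial ('geographic') distance between two sites on a ring of n nodes."""
        d = abs(u - v)
        return min(d, n - d)

    def farthest_reach(G, n, source=0, p_transmit=0.3, steps=25, seed=0):
        """Toy SI epidemic: at each step every infected node infects each susceptible
        neighbour with probability p_transmit. Returns, step by step, the farthest
        ring position the infection has reached from its origin."""
        rng = random.Random(seed)
        infected = {source}
        reach = []
        for _ in range(steps):
            newly = {v for u in infected for v in G.neighbors(u)
                     if v not in infected and rng.random() < p_transmit}
            infected |= newly
            reach.append(max(ring_distance(source, v, n) for v in infected))
        return reach

    n = 2000
    large_world = nx.watts_strogatz_graph(n, 6, 0.0, seed=1)   # local links only
    small_world = nx.watts_strogatz_graph(n, 6, 0.01, seed=1)  # ~1% long-range shortcuts

    print("large world:", farthest_reach(large_world, n))  # advances a few sites per step
    print("small world:", farthest_reach(small_world, n))  # typically leaps far once a shortcut is hit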
So if the world was still ‘large’ in the 14th century, when did it become ‘small’? Newman and colleagues hope that other epidemiological data might reveal that, but they guess that it happened with the advent of long-distance transportation in the 19th century, which seems also to have been the time that rapidly spreading epidemics appeared. There’s always a price for progress.
Reference: S. A. Marvel, T. Martin, C. R. Doering, D. Lusseau & M. E. J. Newman, preprint http://www.arxiv.org/abs/1310.2636.
Thursday, October 10, 2013
Colour in the Making
I have just received delivery of Colour in the Making: From Old Wisdom to New Brilliance, a book published by Black Dog, in which I have an essay on colour technology in the nineteenth century. And I can say without bias that the book is stunning. This is the first time I have seen what else it contains, and it is a gorgeous compendium of information about pigments, colour theory, and colour technology and use in visual art from medieval painting to printing and photography. There are also essays on medieval paints by Mark Clarke and on digital colour mixing by Carinna Parraman. This book is perhaps rather too weighty to be a genuine coffee-table volume, but is a feast for the eyes, and anyone with even a passing interest in colour should get it. I will put my essay up on my website soon.
Friday, October 04, 2013
The name game
My new book Serving the Reich is published on 10 October. Here is one of the little offshoots, a piece for Research Fortnight (which the kind folks there have made available for free) on the perils of naming in science. (Jim, I told you I’d steal that quote.)
___________________________________________________________________
Where would quantum physics be without Planck’s constant, the Schrödinger equation, the Bohr atom or Heisenberg’s uncertainty principle – or, more recently, Feynman diagrams, Bell’s inequality and Hawking radiation? You might not know what all these things are, but you know who discovered them.
Surely it’s right and proper that scientists should get the credit for what they do, after all. Or is it? This is what Einstein had to say on the matter:
“When a man after long years of searching chances on a thought which discloses something of the beauty of this mysterious universe, he should not therefore be personally celebrated. He is already sufficiently paid by his experience of seeking and finding. In science, moreover, the work of the individual is so bound up with that of his scientific predecessors and contemporaries that it appears almost as an impersonal product of his generation.”
Whether by design or fate, Einstein seems to have avoided having his name explicitly attached to his greatest works, the theories of special and general relativity. (The “Einstein coefficient” is an obscure quantity almost no one uses.)
But Einstein was working in the period when this fad for naming equations, units and the other paraphernalia of science after their discoverers had barely begun. The quantum pioneers were in fact among those who started it. The Dutch physicist Peter Debye insisted, against the wishes of Hitler’s government, that the new Kaiser Wilhelm Institute of Physics in Berlin, which he headed from 1935 to 1939, be called the Max Planck Institute. He had Planck’s name carved in stone over the entrance, and after the war the entire Kaiser Wilhelm Gesellschaft – the network of semi-private German research institutes – was renamed the Max Planck Society, the title that it bears today.
But Debye himself now exemplifies the perils of this practice. In 2006 he was accused in a book by a Dutch journalist of having collaborated with the Nazi government during his time in Germany, and of endorsing their anti-Semitic measures. In response, the University of Utrecht was panicked into removing Debye’s name from its Institute for Nanomaterials Science, saying that “recent evidence is not compatible with the example of using Debye’s name”. Likewise, the University of Maastricht in Debye’s home city asked for permission to rename the Debye Prize, a science award sponsored by the philanthropic Hustinx Foundation in Maastricht.
It’s now generally agreed that these accusations were unfair – Debye was no worse than the vast majority of physicists working in Nazi Germany, and certainly bears no more discredit than Max Planck himself, the grand old man of German physics, whose prevarication and obedience to the state prevented him from voicing opposition to measures that he clearly abhorred. (Recognizing this, the Universities of Utrecht and Maastricht have now relented.) Far more culpable was Werner Heisenberg, who allegedly told Dutch scientists in the occupied Netherlands in 1943 that “history legitimizes Germany to rule Europe and later the world”. He gave propaganda lectures on behalf of the government during the war, and led the German quest to harness nuclear power. Yet no one has questioned the legitimacy of the German Research Foundation’s Heisenberg Professorships.
Here, then, is one of the pitfalls of science’s obsession with naming: what happens when the person you’re celebrating turns out to have a questionable past? Debye, Planck and Heisenberg are all debatable cases: scarcely anyone in positions of influence in Germany under Hitler emerged without some blemish. But it leaves a bitter taste in the mouth to have to call the influence of electric fields on atomic quantum energy states the Stark effect, after its discoverer the Nobel laureate Johannes Stark – an ardent Nazi and anti-Semite, and one of the most unpleasant scientists who ever lived.
Some might say: get over it. No one should expect that people who do great things are themselves great people, and besides, being a nasty piece of work shouldn’t deprive you of credit for what you discover. Both of these things are true. But nevertheless science seems to impose names on everything it can, from awards to units, to a degree that is unparalleled in other fields: we speak of atonality, cubism, deconstructionism, not Schoenbergism, Picassoism and Derridism. This is so much the opposite of scientists’ insistence, à la Einstein, that it doesn’t matter who made the discovery that it seems worth at least pondering on the matter.
Why does science want to immortalize its greats this way? It is not as though there aren’t alternatives: we can have heliocentrism instead of Copernicanism, the law of constant proportions for Proust’s law, and so on. What’s more, naming a law or feature of nature for what it says or does, and not who saw or said it first, avoids arguments about the latter. We know, for example, that the Copernican system didn’t originate with Copernicus, that George Gabriel Stokes didn’t discover Stokes’ law, that Peter Higgs was not alone in proposing the Higgs particle. Naming laws and ideas for people is probably in part a sublimation of scientists’ obsession with priority. It certainly feeds it.
The stakes are higher, however, when it comes to naming institutions, as Utrecht’s Debye Institute discovered. There’s no natural justice which supports the name you choose to put on your lintel – it’s a more or less arbitrary decision, and if your scientific patron saint suddenly seems less saintly, it doesn’t do your reputation any good. Leen Dorsman, a historian of science and philosophy at Utrecht, was scathing about what he called this “American habit” during the “Debye affair”:
“The motive is not to honour great men, it is a sales argument. The name on the façade of the institute shouts: Look at us, look how important we are, we are affiliated with a genuine Nobel laureate.”
While acknowledging that Debye himself contributed to the tendency in Germany, Dorsman says that it was rare in the egalitarian society of the Netherlands until recently. At Utrecht University itself, he attributes it to a governance crisis that led to the appointment of leaders “who had undergone the influence of new public management ideas.” It is this board, he says, that began naming buildings and institutions in the 1990s as a way to restore the university’s self-confidence.
“My opinion is that you should avoid this”, Dorsman says. “There is always something in someone’s past that you wouldn’t like to be confronted with later on, as with Debye.” He adds that even if there isn’t, naming an institution after a “great scientist” risks allying it with a particular school of thought or direction of research, which could cause ill feeling among employees who don’t share that affiliation.
If nevertheless you feel the need to immortalize your alumni this way, the moral seems to be that you’d better ask first how well you really know them. The imposing Francis Crick Institute for biomedical research under construction in London looks fairly secure in that respect – Crick had his quirks, but he seems to have been a well-liked, upfront and decent fellow. Is anyone, however, now going to take their chance with a James Watson Research Centre? And if not, shouldn’t we think a bit more carefully about why not?
David and Goliath - who do you cheer for?
I have just reviewed Malcolm Gladwell’s new book for Nature. I had my reservations, but on seeing Steven Poole’s acerbic job in today’s New Statesman I do wonder whether in the end I gave this a slightly easy ride. Steven rarely passes up a chance to stick the boot in, but I can’t argue with his rather damning assessment of Gladwell’s argument. Anyway, here’s mine.
___________________________________________________
David and Goliath: Underdogs, Misfits and the Art of Battling Giants
Malcolm Gladwell
Penguin Books
We think of David as the weedy foe of mighty Goliath, but he had the upper hand all along. The Israelite shepherd boy was nimble and could use his deadly weapon without getting close to his opponent. Given the skill of ancient slingers, this was more like fighting pistol against sword. David won because he changed the rules; Goliath, like everyone else, was anticipating hand-to-hand combat.
That biblical story about power and how it is used, misused and misinterpreted is the frame for Malcolm Gladwell’s David and Goliath. “The powerful are not as powerful as they seem”, he argues, “nor the weak as weak.” Weaker sports teams can win by playing unconventionally. The children of rich families are handicapped by complacency. Smaller school classes don’t necessarily produce better results.
Gladwell describes a police chief who cut crime by buying Thanksgiving turkeys for problem families, and the doctor who cured children with a drug cocktail everyone thought to be lethal. The apparent indicators of strength, such as wealth or military superiority, can prove to be weakness; what look like impediments, such as broken homes or dyslexia, can work to one’s advantage. Provincial high-flyers may under-achieve at Harvard because they’re unaccustomed to being surrounded by even more brilliant peers, whereas at a mediocre university they’d have excelled. Even if some of these conclusions seem obvious in retrospect, Gladwell is a consummate story-teller and you feel you would never have articulated the point until he spelt it out.
But don’t we all know of counter-examples? Who is demoralized and who thrives from the intellectual stimulus depends on particular personal attributes and all kinds of other intangibles. More often than not, dyslexia and broken homes are disadvantages. The achievement of a school or university class may depend more on what is taught, and how, and why, than on size. The case of physician Jay Freireich, who developed an unconventional but ultimately successful treatment for childhood leukaemia, is particularly unsettling. If Freireich had good medical reasons for administering untested mixtures of aggressive anti-cancer drugs, they aren’t explained here. Instead, there is simply a description of his bullish determination to try them out come what may, apparently engendered by his grim upbringing. Yet determination alone can – as with Robert Koch’s misguided conviction that the tuberculosis extract tuberculin would cure the disease – equally prove disastrous.
Even the biblical meta-narrative is confusing. So David wasn’t after all the plucky hero overcoming the odds, but more like Indiana Jones defeating the sword-twirling opponent by pulling out his pistol and shooting him? Was that cheating, or just thinking outside the box? There are endless examples of the stronger side winning, whether in sport, business or war, no matter how ingenious their opponents. Mostly, money does buy privilege and success. So why does David win sometimes and sometimes Goliath? Is it even clear which is which (poor Goliath might even have suffered from a vision impairment)?
These complications are becoming clear, for example in criminology. Gladwell is very interested in why some crime-prevention strategies work and others don’t. But while his “winning hearts and minds” case studies are surely a part of the solution, recent results from behavioural economics and game theory suggest that there are no easy answers beyond the fact that some sort of punishment (ideally centralized, not vigilante) is needed for social stability. Some studies suggest that excessive punishment can be counter-productive. Others show that people do not punish simply to guard their own interests, but will impose penalties on others even to their own detriment. Responses to punishment are culturally variable. In other words, punishment is a complex matter, and resists simple prescriptions.
Besides, winning is itself a slippery notion. Gladwell’s sympathies are for the underdog, the oppressed, the marginalized. But occasionally his stories celebrate a very narrow view of what constitutes success: becoming a Hollywood mogul or the president of an investment banking firm – David turned Goliath, with little regard for what makes people genuinely inspiring, happy or worthy.
None of this is a problem of Gladwell’s exposition, which is always intelligent and perceptive. It’s a problem of form. His books, like those of legions of inferior imitators, present a Big Idea. But it’s an idea that only works selectively, and it’s hard for him or anyone else to say why. These human stories are too context-dependent to deliver a take-home message, at least beyond the advice not always to expect the obvious outcome.
Perhaps Gladwell’s approach does not lend itself to book-length exposition. In The Tipping Point he pulled it off, but his follow-ups Blink, about the reliability of the gut response, and Outliers, a previous take on what makes people succeed, similarly had theses that unravelled the more you thought about them. What remains in this case are ten examples of Gladwell’s true forte: the long-form essay, engaging, surprising and smooth as a New York latte.
Who reads the letters?
I often wonder how the letters pages of newspapers and magazines work. For the main articles, most publications use some form of fact-checking. But what can you do about letters in which anyone can make any claim? Does anyone check up on them before publishing? I was struck by a recent letter in New Statesman, for example, which purportedly came from David Cameron’s former schoolteacher. Who could say if it was genuine? (And, while loath to offer the slightest succour to Cameron, is it quite proper for a former teacher to be revealing stuff about his onetime pupils?)
The problem is particularly acute for science. Many a time this or that sound scientific article has been challenged by a letter from an obvious crank. Of course, sometimes factual errors are indeed pointed out this way, but who can tell which is which? I’ve seen letters printed that a newspaper’s science editor would surely have trashed very easily.
This is the case with a letter in the Observer last Sunday from a chap keen to perpetuate the myth that the world’s climate scientists are hiding behind a veil of secrecy. Philip Symmons says that he hasn’t been able to work out for himself if the models currently used for climate projections are actually capable of accurate hindcasts of past climate, since those dastardly folks at the Hadley Centre refuse to let him have the information, even after he has invoked the Freedom of Information Act. What are they afraid of, eh? What are they hiding?
If the Letters editor had asked Robin McKie, I’m sure he would have lost no time in pointing out that this is utter nonsense. The hindcast simulations Symmons is looking for are freely available to all in the last IPCC report (2007 – Figure 9.5). I found that figure after all of five minutes’ checking on the web. And incidentally, the results are extremely striking – without anthropogenic forcings, the hindcasts go badly astray after about 1950, but with them they stay right on track.
It’s clear, then, that Symmons in fact has no interest in actually getting an answer to his question – he just wants to cast aspersions. I can’t figure out why the Observer would let him do that, given how easy it should be to discover that his letter is nonsense. Surely they aren’t still feeling that one needs to present “both sides”?