Thursday, July 17, 2008

Who says the Internet broadens your horizons?
[Here’s the long version of my latest, understandably shortened Muse for Nature News.]

A new finding that electronic journals are narrowing scientific scholarship illustrates the mixed blessings of online access.

It’s a rare scientist these days who does not know his or her citation index, most commonly in the form of the h-index introduced in 2005 by physicist Jorge Hirsch [1]. Proposed as a measure of the cumulative impact of one’s published works, this and related indices are being used informally to rank scientists, whether this be for drawing up lists of the most stellar performers or for assessing young researchers applying for tenure. Increasingly, careers are being weighed up through citation records.
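The h-index itself is a simple enough computation that it can be captured in a few lines: a researcher has index h if h of their papers have each been cited at least h times. Here is a minimal sketch in Python (the citation record is invented for illustration):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the threshold
        else:
            break
    return h

# A hypothetical record of 7 papers:
print(h_index([25, 8, 5, 3, 3, 2, 0]))  # → 3
```

Note how the measure ignores both the total citation count and the outliers: the paper with 25 citations counts no more towards h than one with 5.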

All this makes more pressing the question of how papers get cited in the first place: does this provide an honest measure of their worth? A study published in Science by sociologist James Evans at the University of Chicago adds a new ingredient to this volatile ferment [2]. He has shown that the increasing availability of papers and journals online, including what may be decades of back issues, is paradoxically leading to a narrowing of the number and range of papers cited. Evans suggests that this is the result of the way browsing of print journals is being replaced by focused online searches, which tend both to identify more recent papers and to quickly converge on a smaller subset of them.

The argument is that when a journal goes online, fewer people flick through the print version and so there is less chance that readers will just happen across a paper related to their work. Rather, an automated search, or following hyperlinks from other online articles, will take them directly to the most immediately relevant articles.

Evans has compiled citation data for 34 million articles from a wide range of scientific disciplines, some dating back as far as 1945. He has studied how citation patterns changed as many of the journals became available online. On average, a hypothetical journal would, by making five years of its issues available free or commercially online, suffer a drop in the number of its own articles cited from 600 to 200.

That sounds like a bad business model, but in fact there are some important qualifications here. It doesn’t necessarily mean that a journal gets cited less when it goes online, but simply that its citations get focused on fewer distinct articles. And all these changes are set against an ever-growing body of published work, which means that more and more papers are getting cited overall. The changes caused by going online are relative, set within the context of a still widening and deepening universe of citations.

All the same, this means that the trend for online access is making citation patterns narrower than they would be otherwise: fixated on fewer papers and fewer journals.

In some ways, the narrowing is not a bad thing. Online searching can deliver you more quickly to just those papers that are most immediately relevant to your own work, without having to wade through more peripheral material. This may in turn mean that the citation lists in papers are more helpful and pertinent to readers.

Online access also makes it much easier for researchers to check citation details – to look at what a reference actually said, rather than what someone else implies they said. It’s not clear how often this is actually done, however – one study (see also here), using mis-citations as a proxy, has suggested that 70-90 percent of literature citations have simply been copied from other reference lists, rather than being directly consulted [3,4]. But at the very least, easier access should reduce the chances of that.

Yet there are two reasons in particular why Evans’ findings are concerning. One is in fact a mixed blessing. With online resources, scientific consensus is reached more quickly and efficiently, because for example hyperlinked citations allow you to deduce rapidly which papers others are citing. Some search strategies also rely on consensual views about relevance and significance.

This might mean that less attention, time and effort get wasted down dead ends. But it also means there is more chance of missing something important. “It pushes science in the direction of a press release”, says Evans. “Unless they are picked up immediately, things will be forgotten more quickly.”

Moreover, feedback about the value judgements of others seems to lead to amplification of opinions in a way that is not necessarily linked to ‘absolute’ value [5]. It’s an example of the rich-get-richer or ‘Matthew’ effect, whereby fame becomes self-fulfilling and a few individuals get disproportionate rewards at the expense of other, perhaps equally deserving cases. While highly cited papers may indeed deserve to be, it seems the citation statistics would not look very different if these papers had simply benefited from random amplification of negligible differences in quality [6]. Again, this could happen even with old-style manual searching of journals, but online searches make it more likely.
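The random-amplification argument in refs [5,6] can be caricatured in a few lines of code: give every paper identical intrinsic merit, then let each new citation pick a paper with probability proportional to the citations it has already attracted. The parameters below are illustrative, not taken from the studies cited:

```python
import random

def simulate_citations(n_papers=100, n_citations=5000, seed=1):
    """Rich-get-richer ('Matthew effect') toy model: every paper starts
    equal, and each new citation favours already-cited papers."""
    rng = random.Random(seed)
    counts = [0] * n_papers
    for _ in range(n_citations):
        # Weights differ only through accumulated citations,
        # not through any intrinsic difference in quality.
        weights = [1 + c for c in counts]
        paper = rng.choices(range(n_papers), weights=weights)[0]
        counts[paper] += 1
    return sorted(counts, reverse=True)

counts = simulate_citations()
print(f"top 10 papers take {100 * sum(counts[:10]) // sum(counts)}% of citations")
```

Even with no quality differences at all, a handful of papers end up hugely cited while most languish – which is exactly why skewed citation statistics, on their own, cannot distinguish merit from luck.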

The other worry is that this trend exacerbates the already lamented narrowing of researchers’ horizons. It is by scanning through the contents pages of journals that you find out what others outside your field are doing. If scientists are reading only the papers that are directly relevant to their immediate research, science as a whole will suffer, not least because its tightly drawn disciplines will cease to be fertilized by ideas from outside.

Related to this concern is the possibility of collective amnesia: the past ceases to matter in a desperate bid to keep abreast of the present. Older scientists have probably been complaining that youngsters no longer ‘read the old literature’ ever since science journals existed, but it seems that neglecting the history of your field is made more likely with online tools.

There’s a risk of overplaying this issue, however. It’s likely that so-called ‘ceremonial citation’, the token nod to venerable and unread papers, has been going on for a long time. And the increased availability of foundational texts online can only be a good thing. Nonetheless, Evans’ data indicate that online access is driving citations to become ‘younger’ and reducing an article’s shelf-life. This must surely increase the danger of reinventing the wheel. And there is an important difference between having decided that an old paper is not sufficiently relevant to cite, and having assumed it, or having not even known of its existence.

In many ways these trends are just an extension to the scientific research community of things that have been much debated in the broader sphere of news media, where the possibilities for personalization of content lead to a solipsistic outlook in which individuals hear only the things they want to hear. (The awful geek-speak for this – individuated news – itself makes the point, having apparently been coined in ignorance of the fact that individuation already has a different historical meaning.) Instead of reading newspapers, the fear is that people will soon read only the ‘Daily Me.’ Web journalist Steve Outing has said that “90 percent of my daughters’ media consumption is ‘individuated’. For kids today, non-individuated media is outside the norm.” We may be approaching the point where that also applies to young scientists, particularly if it is the model they have become accustomed to as children.

Ultimately, the concerns that Evans raises are thus not a necessary consequence of the mere fact of online access and archives, but stem from the cultural norms within which this material is becoming available. And it is no response – or at least, a futile one – to say that we must bring back the days when scientists would have to visit the library each week and pick up the journals. The efficiency of online searching and the availability of archives are both to be welcomed. But a laissez-faire attitude to this ‘literature market’ could have some unwelcome consequences, in particular the risk of reduced meritocracy, loss of valuable research, and increased parochialism. The paper journal may be on the way out, but we’d better make sure that the journal club doesn’t go the same way.

1. J. E. Hirsch, Proc. Natl Acad. Sci. USA 102, 16569-16572 (2005).
2. J. A. Evans, Science 321, 395-399 (2008).
3. M. V. Simkin & V. P. Roychowdhury, Complex Syst. 14, 269-274 (2003).
4. M. V. Simkin & V. P. Roychowdhury, Scientometrics 62, 367-384 (2005).
5. M. J. Salganik et al., Science 311, 854-856 (2006).
6. M. V. Simkin & V. P. Roychowdhury, Annals Improb. Res. 11, 24-27 (2005).

Sunday, July 13, 2008

Is music just for babies?

I’m grateful to a friend for pointing me towards a recent preposterous article on music by Terry Kealey in the Times, suggesting in essence that music is anti-intellectual, regressive and appeals to our baser instincts. Now, I have sparred with Terry before and I know that he likes to be provocative. I don’t want to seem to be rising to the bait like some quivering Verdi aficionado. But really, he shouldn’t be allowed to be so naughty without being challenged. I have to say that his article struck me as a classic case of a little knowledge being a dangerous thing.

His bizarre opening gambit seems to be that music and intelligence are somehow mutually exclusive, so that one may make way for the other. This will come as news to any neuroscientist or psychologist who has ever studied music. A large part of the argument seems to rest on the idea that perfect pitch is a sign of mental incapacity. Isn’t it particularly common in autistic people and children, he asks? Er, no, frankly. Sorry, it’s as simple as that. Terry may be confusing this with the fact that children can acquire perfect pitch through learning more easily than adults – but that’s true of many things, including language (which presumably does not make language an infantile attribute). Perfect pitch is also more common in Chinese people, but I think even a controversialist like Terry might stop short of wanting to say that this proves his point. Rather, it seems to be enhanced in speakers of tonal languages, which stands to reason.

But more to the point – and this is a bit of a giveaway – perfect pitch has nothing to do with musical ability. There is no correlation between the two. It is true that many composers had/have perfect pitch, but that’s no mystery, because as Terry points out, it can be learnt with effort, i.e. with lots of exposure to music. It is, indeed, even possible to have perfect pitch and to be simultaneously clinically tone deaf, since one involves the identification of absolute pitch in single notes and the other of pitch relationships between multiple notes.

Birds too have perfect pitch, we’re told, and so did Neanderthals (thanks to another of Stephen Mithen’s wild speculations, swallowed hook, line and sinker). And don’t birds have music too, showing that it is for bird-brains? Sorry, again no. Anyone who thinks birds have music doesn’t know what music is. Music has syntax and hierarchical patterns. Birdsong does not – it is a linear sequence of acoustic signals. I’m afraid Terry again and again disqualifies himself during the article from saying anything on the subject.

Similarly, he claims that music has only emotional value, not intellectual. So how to explain the well-documented fact that musical training improves children’s IQ? Or that music cognition uses so many different areas of the brain – not just ‘primitive emotional centres’ such as the amygdala but logic-processing centres in the frontal cortex and areas that overlap with those used for handling language syntax? This is a statement of pure prejudice that takes no account of any evidence. ‘To a scientist, music can appear as a throwback to a primeval, swampy stage of human evolution’, Terry claims. Not to any scientist I know.

Finally, we have Terry’s remark that music encourages dictatorships, because Hitler and Stalin loved it, but Churchill and Roosevelt were indifferent. I am not going to insult his intelligence by implying that this is a serious suggestion that warrants a po-faced response, but really Terry, you have to be careful about making this sort of jest in the Times. I can imagine retired colonels all over the country snorting over their toast and marmalade: ‘Good god, the chap has a point!’

I must confess that I find something rather delightful in the fact that there are still people today who will, like some latter-day Saint Bernard of Clairvaux, denounce music as inciting ‘lust and superstition’. It’s wonderful stuff in its way, although one can’t help but scent the faint reek of the attacks on jazz in the early twentieth century, which of course had far baser motivations. Plato shared these worries too – Terry fails to point out that he felt only the right music educated the soul in virtue, while the wrong music would corrupt it. The same was true of Saint Augustine, but in his case it was his very love of music that made him fearful – he was all too aware of the strong effects it could exert, for ‘better’ or ‘worse’. In Terry Kealey’s case, it seems as though all music leaves him feeling vaguely unclean and infantilized, or perhaps just cold. That’s sad, but not necessarily beyond the reach of treatment.

Saturday, July 12, 2008

Were there architectural drawings for Chartres?

Michael Lewis has given Universe of Stone a nice review in the Wall Street Journal. The reason I want to respond to the points he raises is not to score points or pick an argument, but because they touch on interesting issues.

Lewis’s complaint about the absence of much discussion of the sculpture at Chartres is understandable if one is led by the American subtitle to expect a genuine biography of the building. And it’s natural that he would have been. But as my UK subtitle indicates, this is not in fact my aim: this is really a book about the origins of Gothic and what it indicates about the intellectual currents of the twelfth-century renaissance. The Chartrain sculpture doesn’t have so much to say about that (with some notable exceptions that I do mention).

Lewis’s most serious criticism, however, concerns the question of architectural drawings in the period when Chartres was built. As he says (and as he acknowledges I say), drawings for Gothic churches certainly did exist: there are some spectacular ones for Strasbourg and Reims Cathedrals in particular. As I say in my book, ‘These are extremely detailed and executed with high technical proficiency.’ They date from around 1250 onwards.

The question is: were similar drawings used for Chartres? Lewis is in no doubt: ‘analogous drawings would certainly have existed for Chartres.’ That’s a level of certainty that other historians of Gothic don’t seem to share – unsurprisingly, given that we lack any evidence either way. But most importantly, I would surely and rightly have been hauled over the coals if I had committed the cardinal sin of assuming that one period in the Middle Ages stands proxy for all others. The mid-thirteenth century was a very different time from the late twelfth, in terms of building practices as in many other respects: in particular, architecture became much more professionalized in the early thirteenth century than it had been before. My guess – and it is no more than that – is that if drawings existed for Chartres – which is certainly possible, but nothing more – they would have looked more akin to those of Villard de Honnecourt, made around 1220 or so, which have none of the precision of the Strasbourg drawings. Lewis says that the sophistication of the latter speaks of a mature tradition that must have already existed for a long time. That seems reasonable, until you consider this. Suppose all cathedrals before Chartres had been destroyed. We might, by reasoning analogous to Lewis’s, then look at its flying buttresses and say ‘Well, they certainly must have had good flying buttresses in the 1130s, since these ones are so mature.’ And of course we’d be utterly wrong. (What’s more, the skills needed to make flying buttresses are considerably more demanding than those needed to make scale drawings.)

I think Lewis may have misunderstood my text in places. I never claimed that the architect of Chartres designed it all ‘in his head’. I simply said that this is what architectural historian Robert Branner claimed. (I’m not sure I’d agree with him.) Neither did I say that architectural drawings would all be simply ‘symbolic, showing codified relationships without any real attention to dimension’ – I said that this was true of medieval art and maps.

I’m grateful to Lewis for raising this as an issue, and his comments suggest that it might be good if I spell things out a little more explicitly in the paperback (which, in the US, will probably have a different subtitle!).

Friday, July 04, 2008

Behind the mask of the LHC

[Here is my latest Muse for Nature News, which, bless them, they ran at its extravagant length and complexity.]

The physics that the Large Hadron Collider will explore has tentative philosophical foundations. But that’s a good thing.

Physicists, and indeed all scientists, should rejoice that the advent of the Large Hadron Collider (LHC) has become a significant cultural event. Dubbed the ‘Big Bang machine’, the new particle accelerator at CERN — the European centre for particle physics near Geneva — should answer some of the most profound questions in fundamental physics and may open up a new chapter in our exploration of why the world is the way it is. The breathless media coverage of the impending switch-on is a reassuring sign of the public thirst for enlightenment on matters that could easily seem recondite and remote.

But there are pitfalls with this kind of jamboree. The most obvious is the temptation for hype and false promises about what the LHC will achieve, as though all the secrets of creation are about to come tumbling out of its tunnels. And it is an uneasy spectacle to see media commentators duty-bound to wax lyrical about matters they understandably don’t really grasp. Most scientists are now reasonably alert to the dangers of overselling, even if they sometimes struggle to keep them in check.

It’s also worth reminding spectators that the LHC is no model of ‘normal’ science. The scale and cost of the enterprise are much vaster than those enjoyed by most researchers, and this very fact restricts the freedom of the scientists involved to let their imaginations and intuitions roam. The key experiments are necessarily preordained and decided by committee and consensus, a world away from a small lab following its nose. This is not intrinsically a bad thing, but it is different.

There is, however, a deeper reason to think carefully about what the prospect of the LHC offers. Triumphalism can mask the fact that there are some unresolved questions about the scientific and philosophical underpinnings of the enterprise, which will not necessarily be answered by statistical analyses of the debris of particle collisions. These issues are revealingly explored in a preprint by Alexei Grinbaum, a researcher at the French Atomic Energy Commission (CEA) in Gif-sur-Yvette [1].

Under the carpet

Let’s be clear that high-energy physics is by no means alone in preferring to sweep some foundational loose ends under the carpet so that it can get on with its day-to-day business. The same is true, for example, of condensed-matter physics (which, contrary to media impressions, is what most physicists do) and quantum theory. It is a time-honoured principle of science that a theory can be useful and valid even if its foundations have no rigorous justification.

But the best reason to tease apart the weak joints in the basement of fundamental physics is not in order to expose it as a precarious edifice — which it is not — but because these issues are so interesting in themselves.

Paramount among them, says Grinbaum, is the matter of symmetry. That’s a ubiquitous word in the lexicon of high-energy physics, but it is far from easy for a lay person to see what is meant by it. At root, the word retains its everyday meaning. But what this corresponds to becomes harder to discern when, for example, symmetry is proposed to unite classes of quantum particles or fields.

Controlling the masses

It is symmetry that anchors the notion of the Higgs particle, probably the one target of the LHC that anyone with any interest in the subject will have heard of. It is easy enough to explain that ‘the Higgs particle gives other particles their mass’ (an apocryphal quote has Lenin comparing it to the Communist Party: it controls the masses). And yes, we can offer catchy analogies about celebrities accreting hordes of hangers-on as they pass through a party. But what does this actually mean? Ultimately, the Higgs mechanism is motivated by a need to explain why a symmetry that seemed once to render equivalent two fundamental forces — the electromagnetic and weak nuclear forces — has been broken, so that the two forces now have different strengths and ranges.

This — the ‘symmetry breaking’ of a previously unified ‘electroweak’ force — is what the LHC will primarily probe. The Higgs explanation for this phenomenon fits nicely into the Standard Model of particle physics — the summation of all we currently know about this branch of reality. It is the only component of the Standard Model that remains to be verified (or not).

So far, this is pretty much the story that, if pressed beyond sound bites, the LHC’s spokespeople will tell. But here’s the thing: we don’t truly know what role symmetry does and should play in physical theory.

Practically speaking, symmetry has become the cornerstone of physics. But this now tends to pass as an unexamined truth. The German mathematician Hermann Weyl, who introduced the notion of gauge symmetry (in essence, a description of how symmetry acts on local points in space) in the 1920s, claimed that “all a priori statements in physics have their origin in symmetry”. For him and his contemporaries, laws of physics have to possess certain symmetry properties — Einstein surely had something of this sort in mind when he said that “the only physical theories that we are willing to accept are the beautiful ones”. For physicist Steven Weinberg, symmetry properties “dictate the very existence” of all physical forces — if they didn’t obey symmetry principles, the Universe would find a way to forbid them.

Breaking the pattern

But is the Universe indeed some gloriously symmetrical thing, like a cosmic diamond? Evidently not. It’s a mess, not just at the level of my desk or the arbitrary patchwork of galaxy clusters, but also at the level of fundamental physics, with its proliferation of particles and forces. That’s where symmetry-breaking comes in: when a cosmic symmetry breaks, things that previously looked identical become distinct. We get, among other things, two different forces from one electroweak force.

And the Higgs particle is generally believed to hold the key to how that happened. This ‘particle’ is just a convenient, potentially detectable signature of the broader hypothesis for explaining the symmetry breaking — the ‘Higgs mechanism’. If the mechanism works, there is a particle associated with it.

But the problem with the Higgs mechanism is that it does not and cannot specify how the symmetry is broken. As a result, it does not uniquely determine the mass of the Higgs particle. Several versions of the theory offer different estimates, which vary by a factor of around 100. That’s a crucial difference in terms of how readily the LHC might observe it, if at all. Now, accounts of this search may present this situation blandly as simply a test of competing theories; but the fact is that the situation arises because of ambiguities about what symmetry-breaking actually is.

The issue goes still deeper, however. Isn’t it curious that we should seek an explanation of dissimilar entities in terms of a theory in which they are the same? Suppose you find that the world contains some red balls and some blue ones. Is it more natural to decide that there is a theory that explains red balls, and a different one that explains blue balls, or to assume that red and blue balls were once indistinguishable? As it happens, we already have very compelling reasons to believe that the electromagnetic and weak forces were once unified; but deciding to make unification a general aim of physical theories is quite another matter.

Physics Nobel laureate David Gross has pointed out the apparent paradox in that latter approach: “The search for new symmetries of nature is based on the possibility of finding mechanisms, such as spontaneous symmetry breaking, that hide the new symmetry” [2]. Grinbaum is arguing that it’s worth pausing to think about that assumption. To rely on symmetry arguments is to accept that the resulting theory will not predict the particular outcome you observe, where the symmetry may be broken in an arbitrary way. Only experiments can tell you what the result of the symmetry-breaking is.

Should we trust in beauty?

Einstein’s statement is revealing because it exposes a strand of platonic thinking in modern physics: beauty matters, and it is a vision of beauty based on order and symmetry. Pragmatically speaking, arguments that use symmetry have proved to be fantastically fertile in fundamental physics. But as Weyl’s remark shows, they are motivated only by assumptions about how things ought to be.

A sense of aesthetic beauty is now not just something that physicists discover in the world; it is, in the words of Gian Francesco Giudice, a theoretical physicist at CERN, “a powerful guiding principle for physicists as they try to construct new theories” [3]. They look for ways to build it in. This, as Grinbaum points out, “is logically unsound and heuristically doubtful”.

Grinbaum says that such aesthetic judgements give rise to ideas about the ‘naturalness’ of theories. This notion of naturalness figures in many areas of science, Giudice points out, but is generally dangerously subjective: it is ‘natural’ to us that the solar system is heliocentric, but it wasn’t at all to the ancient Greeks, or indeed to Tycho Brahe, the sixteenth-century Danish astronomer.

But Giudice explains that “a more precise form of naturalness criterion has been developed in particle physics and it is playing a fundamental role in the formulation of theoretical predictions for new phenomena to be observed at the LHC”. The details of this concept of naturalness are technical, but in essence it purports to explain why symmetry-breaking of the electroweak interaction left gravity so much weaker than the weak force (its name notwithstanding). The reasoning here leads to the prediction that production of the Higgs particle will be accompanied by a welter of other new particles not included in the Standard Model. The curious thing about this prediction is that it is motivated not to make any theory work out, but simply to remove the apparent ‘unnaturalness’ of the imbalance in the strengths of the two forces. It is basically a philosophical matter of what ‘seems right’.

Workable theories

There are also fundamental questions about why physics has managed to construct all manner of workable theories — of electromagnetism, say — without having to postulate the Higgs particle at all. The simple answer is that, so long as we are talking about energies well below the furious levels at which the Higgs particle becomes apparent, and which the LHC hopes to create, it is enough to subsume the whole Higgs mechanism within the concept of mass. This involves creating what physicists call an effective field theory, in which phenomena that become explicit above a certain energy threshold remain merely implicit in the parameters of the theory. Much the same principle permits us to use Newtonian mechanics when objects’ velocities are much less than the speed of light.
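That Newtonian example is easy to make quantitative. The relativistic correction factor (the Lorentz gamma) measures how far Newton’s mechanics is from Einstein’s at a given speed; at everyday velocities it is indistinguishable from 1, which is precisely why the low-energy ‘effective theory’ serves perfectly well. A quick sketch (the airliner speed is just an illustrative number):

```python
import math

def lorentz_gamma(v, c=299_792_458.0):
    """Factor by which relativity corrects Newtonian mechanics at speed v."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# An airliner at ~250 m/s: the correction is invisibly small,
# so the low-energy effective theory (Newton) is all you need.
print(lorentz_gamma(250.0) - 1.0)  # ≈ 3.5e-13
```

The analogy with the Higgs case is that below the relevant energy threshold the ‘new physics’ is simply folded into the parameters of the simpler theory – here, into plain Newtonian mass.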

Effective field theories thus work only up to some limiting energy. But Grinbaum points out that this is no longer just a practical simplification but a methodology: “Today physicists tend to think of all physical theories, including the Standard Model, as effective field theories with respect to new physics at higher energies.” The result is an infinite regression of such theories, and thus a renunciation of the search for a ‘final theory’ — entirely the opposite of what you might think physics is trying to do, if you judge from popular accounts (or occasionally, from their own words).

Effective field theories are a way of not having to answer everything at once. But if they simply mount up into an infinite tower, it will be an ungainly edifice at best. As philosopher of science Stephan Hartmann at Tilburg University in the Netherlands has put it, the predictive power of such a composite theory would steadily diminish “just as the predictive power of the Ptolemaic system went down when more epicycles were added” [4].

Einstein seemed to have an intimation of this. He expressed discomfort that his theory of relativity was based not simply on known facts but on an a priori postulate about the speed of light. He seemed to sense that this made it less fundamental.

These and other foundational issues are not new to LHC physics, but by probing the limits of the Standard Model the new collider could bring them to the fore. All this suggests that it would be a shame if the results were presented simply as data points to be compared against theoretical predictions, as though to coolly assess the merits of various well-understood proposals. The really exciting fact is that the LHC should mark the end of one era — defined by the Standard Model — and the beginning of the next. And at this point, we do not even know the appropriate language to describe what will follow — whether, for example, it will be rooted in new symmetry principles (such as supersymmetry, which relates hitherto distinct particles), or extra dimensions, or something else. So let’s acknowledge and even celebrate our ignorance, which is after all the springboard of the most creative science.

1. Grinbaum, A. Preprint at (2008).
2. Gross, D. in Conceptual Foundations of Quantum Field Theory, Cao, T. Y. (ed.) (Cambridge Univ. Press, 1999).
3. Giudice, G. F. Preprint at (2008).
4. Hartmann, S. Stud. Hist. Phil. Mod. Phys. 32, 267-304 (2001).