Tuesday, August 20, 2013

Appearances matter most in musical performance


This is a long version of the news story I’ve just published with Nature – there is just so much to talk about here.

_________________________________________________________________

Our judgements of quality depend more on how a musician moves than what they sound like.

He’d whip his long hair around as he played, beads of sweat flying into the audience, and women would swoon or throw their clothes onto the stage. No, this isn’t the young Mick Jagger or Jimmy Page but Franz Liszt, sometimes dubbed the first rock star, of whose famously theatrical piano recitals Robert Schumann once said that, if he played behind a screen, “a great deal of poetry would be lost”.

But who cares about the histrionics – it’s the music that matters, right? Not according to a new study which shows that people’s judgements about the quality of a musical performance are influenced more by what they see than by what they hear [1].

These findings by social psychologist Chia-Jung Tsay of University College London, who is also an acclaimed classical pianist, may be embarrassing and even shocking to music lovers. The vast majority of participants in Tsay’s experiments – around 83 percent of both untrained participants and professional musicians – insisted at the outset that sound was their key criterion in assessing video and audio recordings of performances.

Yet it wasn’t. The participants were presented with recordings of the three finalists in each of 10 prestigious international competitions, and were asked to guess the winner. With just sound, or sound plus video, novices and experts both guessed right at about the same level as chance (33 percent of the time), or a little less. But with video alone, the success rate for both rose to about 46-53 percent. The experts did no better than the novices.
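
For a sense of the baseline here: with three finalists per competition, pure random guessing succeeds a third of the time. A minimal simulation of that null case – my own illustration, not part of Tsay’s study, with everything invented except the three-finalist, ten-competition setup – looks like this:

```python
import random

def simulate_chance_rate(n_competitions=10, n_finalists=3, n_trials=100_000):
    """Estimate the success rate of guessing competition winners at random.

    Each trial makes one guess per competition; with three finalists the
    expected rate is 1/3 (~33 percent), the chance level against which the
    observed ~46-53 percent for silent video stands out.
    """
    hits = 0
    for _ in range(n_trials):
        for _ in range(n_competitions):
            if random.randrange(n_finalists) == 0:  # say finalist 0 is the true winner
                hits += 1
    return hits / (n_trials * n_competitions)

print(f"chance baseline: {simulate_chance_rate():.3f}")  # prints ~0.333
```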

In experiments where participants were randomly assigned to receive the silent videos, Tsay says “they expressed much frustration and lack of confidence in their choices, not realizing that they were the ones to best approximate the original decisions.” She would receive comments like “It’s impossible to take seriously without sound” and “This is meaningless since I can’t hear [the performer]”.

“This is a brilliant paper”, says philosopher of music Vincent Bergeron of the University of Ottawa in Canada. “The fact that the judgments of both novice and expert participants are affected in the very same way suggests that the visual channel constitutes a powerful and robust factor in the evaluation of musical performances.”

Rethinking performance

The results raise provocative questions about what musical performance really is. Classical audiences in particular might like to claim that they are there to enjoy the exquisite sounds the performers are making – but it seems their assessments are based primarily on what they are seeing.

“As an academic I was delighted to find these counterintuitive results”, says Tsay. “As a classical musician, I was initially somewhat disturbed. It was surprising to find that there is such a wide gap between what we believe matters in the evaluation of music performance and what is actually being used to judge performances.”

But Bergeron isn’t perturbed. He has previously argued that visuals do play a part in how we experience music when we see it performed [2]. “One could plausibly argue that music made for performance, such as classical music, is a visual as well as a sonic art, and that it should also be evaluated on the basis of how it looks”, he says.

Bergeron’s earlier case built partly on the work of Jane Davidson of the University of Western Australia in Crawley, who also found that judgements of quality depend on sight as well as sound [3]. Music neuropsychologist Daniel Levitin of McGill University in Canada agrees that Tsay’s results might have been anticipated, both because of earlier work on the subject [4,5] and because of what we know about cognition in general.

“In a sense, the visual channel is more primordial than the auditory”, he says. Besides, “there are lots of ways in which our intuitions about our own cognition are wrong”, he adds. “The whole field of perception and cognition is full of these, such as visual and auditory illusions.”

But Davidson says that there seem to be nuances in such judgements. For example, in her studies “musicians were still able to differentiate between poorer and better quality musical sound”, she says.

In Davidson’s experience, the assessments of non-musicians may rely more on visuals than those of professional musicians. “In my own studies, musicians were able to use sound and vision independently”, she says. “It was only non-musicians who relied mainly on the visual information.”

She adds that some studies, including her own, that lay the sound of one performance over the visuals of another find that, although the visuals dominate perceptions, such tricks “don’t fool experts”.

What kind of messages do we take away from visual information? Tsay was able to rule out the possibility that a performer’s gender, race or attractiveness influence judgements, at least in her experiments, by tests in which she reduced the video data to black-and-white outlines of the performers. Participants still guessed the competition winners correctly with much the same better-than-chance success rate (48 percent) as before.

Tsay thinks that, at least for this kind of music, visual cues carry implications about the degree of passion and motivation that the performer displays. These are qualities that many participants cited as important in their evaluations, and even musical novices can identify them visually. Perhaps they do it even more accurately than the ‘experts’, Tsay says, because they are unencumbered by the sound “that professional musicians unintentionally and non-consciously discard.”

Looking good

One also has to wonder if musicians already unconsciously know that it matters what they look like in performance. Tsay suspects they do. “Many performers do have the intuition that the role of visual information is an important one”, she says. Moreover, having studied at top musical institutions such as the Juilliard School in New York, she says “Some teachers at conservatories seem to be quite attuned to the important role that visual information plays in the judgment of music, and they make their students aware of its impact for effective performance.”

Davidson agrees that performers sense the significance of how they look. “Look at the artistry of Judy Garland”, she says. “Every move is integrated into a smooth action plan, as if it were created in the moment, yet it is totally rehearsed and polished as an integrated essential element of her vocal performance.”

“Really good musicians do this too,” she adds. “There are data from Glenn Gould’s career which show how he moved very differently when concerned only with creating a sound recording rather than when playing in the recital room.”

Visual presentation is probably an under-researched aspect of musical performance, however. (Musicologist Susan Fast of McMaster University in Hamilton, Canada, has provided a rare analysis of the visual body language cultivated by the rock group Led Zeppelin [6].) “I think the claims of the current paper need some really good social psychological contextualizations and clarifications”, says Davidson.

Whether or not musicians do learn to enhance their “visual merit”, the question is sure to arise: “should they?” Isn’t this a bit like cheating? Is the celebrated Chinese pianist Lang Lang, for example, fooling audiences as he makes great sweeping gestures with his arms, eyes closed and head thrown back in ecstasy?

Perhaps the key question is whether the visual information is reliable – helping us to pick out the most deserving winner, say – or misleading, making us prefer performers who rely on visual flair rather than musical depth. “It is possible that some performances can be ‘trained’ or ‘choreographed’ in a way that may not be authentic or true to the meaning of the musical composition, but may still remain effective as a performance as judged by audiences”, Tsay admits.

“The video is a ‘bad’ signal if it leads to bad outcomes, that is, if we reward musicians in competitions conducted this way and then find that those musicians fail to sustain creative careers”, says Levitin. “I don’t know of any study that looks at these outcome measures.”

But one might also argue that a competition is seeking only to identify who is “best” on the day. In which case, what should “best” mean?

“I would say that it depends on one's ontology”, says Bergeron. “Someone who thinks that musical performances are essentially sonic events should recognize that our aesthetic evaluations of musical performances might be systematically mistaken. However, someone who is not prepared to accept that our aesthetic evaluations of musical performances might be systematically mistaken should recognize that musical performances might be visual as well as sonic events.”

In any event, says Bergeron, the fact is that ‘experts’ seem to be swayed by visuals whether they like it or not. “This might be a practical reason to embrace the idea that music made for performance is a visual as well as a sonic art, since it might be psychologically impossible to distinguish, in our experience of performances, those aesthetic qualities that belong to the sound from those that belong to the visual aspects.”

In the end, says Tsay, this comes down to a matter of priorities. “It may be less a question of whether the visual channel gives us ‘good’ or ‘bad’ data, and more a question of what we as musicians and audience members believe truly reflects quality”, she says. “This likely changes with time and with changes in technologies and the consumption of music.” After all, in Liszt’s day live performance was the only way audiences would ever hear music. And with the cult of the “artist as expressive genius” firmly established since Beethoven’s day, it made sense for him to perform with flair. Evidently, it still does.

References
1. Tsay, C.-J. Proc. Natl Acad. Sci. USA doi: 10.1073/pnas.1221454110 (2013).
2. Bergeron, V. & Lopes, D. M. Philos. Phenomenol. Res. 78, 1 (2009).
3. Davidson, J. W. Psychol. Music 21, 103 (1993).
4. Vines, B. et al. Ann. N. Y. Acad. Sci. 1060, 462 (2005).
5. Vines, B. et al. Cognition 101, 80 (2006).
6. Fast, S. In the Houses of the Holy (Oxford University Press, Oxford, 2001).

Monday, August 12, 2013

Bohr's beginnings

Here’s a book review, of sorts (I was asked to write something more like an essay review), just published in New Scientist.

________________________________________________________________

Love, Literature and the Quantum Atom, by Finn Aaserud & J. L. Heilbron (Oxford University Press, 2013). ISBN 978-0-19-968028-3

Niels Bohr was one of the most profound thinkers among the early pioneers of quantum theory. He was the first truly to recognize and confront the philosophical problems that the theory posed, and the solutions he offered, such as the idea of complementarity and the Copenhagen interpretation, are still debated today. One hundred years ago he devised the first quantum picture of the atom, and he also anticipated quantum effects in biology.

What impresses most about Bohr’s scientific thought is that he could leave consistency to littler minds. Like James Clerk Maxwell, another genuinely deep physicist, he was happy to leave some matters unresolved and to accept contradiction. So what if the Bohr atom violates classical electrodynamics, which says it should decay? So what if wave can be particle? That’s just how things are (or how they seem, which for Bohr was much the same).

Love, Literature and the Quantum Atom is valuable for reminding us of this. But it’s a peculiar beast all the same, bearing signs of having been cobbled together for the Bohr atom centenary. In the first section, Finn Aaserud, director of the Niels Bohr Archive, offers a fresh perspective on Bohr’s early family life through newly released correspondence, especially with his wife Margrethe. Then the science historian John Heilbron, who collaborated with Thomas Kuhn in 1969 on a study of the Bohr atom, supplies a new account of the development of that seminal work in which he considers Bohr’s interests in literature, particularly that of Goethe and Ibsen. Finally the book reprints Bohr’s three-part paper (the so-called “Trilogy”) from 1913, “On the Constitution of Atoms and Molecules”.

To link Bohr’s extra-curricular reading with his science, Heilbron has been set a more or less impossible task. At times he can pursue it only by finding apt quotes from Ibsen’s Peer Gynt or Goethe’s Faust with which to punctuate Bohr’s professional life, irrespective of whether Bohr himself had the words in mind. Heilbron’s account of Bohr’s scientific journey is as insightful and informative as we’d expect from him. But once we get to the equipartition principle and the Balmer series, Goethe doesn’t have much to add.

This experiment fails not because a scientist’s interest in arts and literature can tell us nothing about his or her science, but because it seems Bohr’s cannot. He read widely and thought deeply, but on this showing was addicted to the strain of Germanic-Nordic romanticism that today looks like sentimentality, even chauvinism: great men striving to be great, while their pure-hearted, maidenly lovers pledge placid and dewy-eyed support. Margrethe was in fact Bohr’s staunch and sometimes steely ally, as he knew and appreciated – which is why all his talk of “my little one” whom he would (using Ibsen’s words) “lock away as heart’s treasure” makes you realise why modernism and Virginia Woolf were so badly needed.

In a soul as noble as Bohr’s, this kind of sentiment has its touching aspect. But it’s not hard to see why, for less principled men, these visions of struggle and destiny, of heroes and Vikings, led down darker paths. It’s not too much to suggest that this was a Germanic thing (including Dutch and Danish – as physicist Hendrik Casimir attested, in the Netherlands too the intellectual elite was saturated in Germanic Kultur). There’s no inevitable path from Goethe to Goebbels, but the notion of Bildung – the particularly German character development all professors had to undergo – did breed the sort of patriarchal and patriotic conservatism that, as Heilbron showed in his splendid biography of Max Planck (The Dilemmas of an Upright Man, 1986), made it all but impossible for the traditional academics to muster any resistance to the Nazis.

This is why I’m left with mixed feelings about this glimpse at Bohr’s hinterland. On the one hand it is refreshing to see a great scientist being passionate about a difficult philosopher like Kierkegaard instead of coming up with empty soundbites about philosophy being dead. On the other hand, such an education evidently did little to build a moral framework; those few who, like Bohr and Max von Laue, behaved with something approaching heroism in the face of Hitler did so from some inner reserve of integrity that drew little on their broad education. Their generation was in this respect neither better nor worse than the culturally unsophisticated Feynman or the later generations brought up on Star Trek, Star Wars or Tomb Raider. Whatever it is that makes truly noble and responsible (let alone successful) scientists, it isn’t great art.

Thursday, August 08, 2013

Colour for free


I have written up my “history of colour chemistry” talk for publication in the little-known journal Interfaces, produced by the Université Paris Diderot and others. This stems from a conference on colour held at the university in early 2012, which, as the other contributions to this volume indicate, was a very diverse affair. The kind folks in Paris have made this and some of the other articles available online for free as PDFs – you can find it here.

Fuelling physics envy?

Here’s an opinion piece I have just published in Physics World, in its pre-edited form.

_________________________________________________________________

Physics envy might get a fresh stimulus from a new paper which claims to present “bibliometric evidence for a hierarchy of the sciences”. By analysing features such as authorship, mode of expression and range of citations in about 29,000 papers from maths to the social sciences and humanities, bibliometrics experts Daniele Fanelli of the University of Edinburgh and Wolfgang Glänzel of the Catholic University of Leuven say that there are good objective reasons to support the hierarchy that proclaims maths and physics the ‘hardest’ and most solidly grounded of the sciences [1].

The work provides a fascinating glimpse at the stylistic and methodological differences that exist between disciplines. It’s telling us something worth knowing: that fundamental differences in style and content across the sciences are real, so that it might be a mistake to evaluate and manage all the sciences in the same way. What it is not telling us is that physics is the most exemplary or exalted of the sciences.

The authors are mostly scrupulous in avoiding that implication. They suggest that this hierarchy is only to be expected because, progressing from physics to sociology, the complexities of the subject matter – the degrees of freedom, if you like – are increasing. So it is scarcely surprising that the phenomena become harder to interpret and consensus becomes harder to achieve: as Fanelli and Glänzel put it, the data “become less able to speak for themselves.” In this much, they are endorsing the view first espoused in the 1830s by the French philosopher Auguste Comte, who also posited a hierarchy from mathematics to physics, chemistry, biology, psychology and sociology based on the level of complexity involved. Comte was the father of positivism, which asserts that all authoritative knowledge derives from an objective, data-driven, scientific study of the world.

Comte’s hierarchy is typically expressed in terms of the ‘hard’ and ‘soft’ sciences. Fanelli and Glänzel embrace these terms, saying that they “seem to capture an essential feature of science”, and that pretending they do not exist could be a “costly mistake”. The authors don’t deny that all disciplines have cultural and “non-cognitive” components, but say that they seem nevertheless shaped “by objective constraints imposed by the subject matter”.

Before grappling with those assertions, let’s look at what the duo did. They figured that a defining characteristic of a ‘hard’ science is the ability to reach a shared interpretation of phenomena. Consensus might be expected to be reflected in several general features of papers. For example, they will be shorter, since there is less need to justify and explain a study; the references will tend to be more recent (key questions are resolved faster), fewer, less diverse and dominated by tightly focused papers rather than general monographs. But titles might be longer, since the issues addressed will be more precisely defined, and the number of coauthors might be greater, since more researchers share commonly agreed goals and because increased specialization makes collaboration essential. Fanelli and Glänzel analysed these parameters in thousands of papers in Thomson Reuters’ Web of Science, categorized into disciplines such as physics, chemistry, plant and animal sciences, and psychiatry/psychology, and found that the expected trends are borne out by the data.
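
To make those proxies concrete, here is a rough sketch of how such consensus indicators might be computed for a single paper. This is my own illustration with hypothetical metadata fields, not the authors’ actual code or their precise measures:

```python
from statistics import mean

def consensus_proxies(paper, current_year=2013):
    """Compute rough proxies for disciplinary consensus from one paper's
    metadata (a hypothetical record, in the spirit of Fanelli & Glanzel's
    indicators rather than a reproduction of them)."""
    ref_years = paper["reference_years"]
    return {
        "title_length": len(paper["title"].split()),  # longer where issues are precisely defined
        "n_pages": paper["n_pages"],                   # shorter where consensus is high
        "n_authors": paper["n_authors"],               # more coauthors in 'harder' fields
        "n_references": len(ref_years),                # fewer where questions resolve quickly
        "mean_ref_age": mean(current_year - y for y in ref_years),  # lower where consensus forms fast
    }

# An invented example record, purely for illustration
example = {
    "title": "Measurement of an anomalous magnetic moment",
    "n_pages": 4,
    "n_authors": 12,
    "reference_years": [2010, 2011, 2012, 2012, 2013],
}
print(consensus_proxies(example))
```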

So what’s the problem? Let’s start with semantics: ‘hard’ and ‘soft’ are prejudicial terms. It is very difficult to avoid reading them both as “hard-headed/soft-headed”, suggesting that the social sciences are pervaded by woolly thinking, and as “hard/easy”, suggesting that the physical sciences are more intellectually challenging and reinforcing the snooty conviction that the most brilliant scientists choose physics. But arguably (most) questions in physics are in fact the easiest to answer securely because they tend to be the easiest to isolate and interrogate experimentally. Economics is failing to answer our real-world questions not because economists are less able, but because economics is so complex, with few if any universal laws and very patchy data. (There’s another reason too, which I’ll come to shortly.)

Yet more invidious than the ‘hard/soft’ terminology is the whole notion of a hierarchy. By definition, this implies a judgement of status: there’s a top and a bottom. At best it invokes condescension towards those disciplines unlucky enough not to be physics; at worst, we’re invited to feel impatient that these ‘softer’ sciences haven’t yet got themselves physics-ified. Comte certainly felt that all sciences aspire to the condition of physics, and he looked forward to the time when the social sciences reached this stage of higher evolution. It was in Comte’s time that historians of science began to construct the narrative in which the mathematization of nature, as displayed in Newton’s Principia, was the defining achievement of the Scientific Revolution, ignoring the fact that this approach was of no value in, say, zoology, botany, chemistry, geology and medicine. When Immanuel Kant declared that the chemistry of his day was “not science” because it was insufficiently mathematical, he was exposing his limited understanding of what chemistry was about, both then and now.

Not only is mathematization, with its consequent opportunities for reductive subdivision of problems, of limited value in some sciences, but they – the life and social sciences particularly – have a dependence on context and history that offers scant purchase for physics-style universal rules, and means different data sets may tell different stories. When those dependencies are neglected for the sake of simplification, as in mainstream neoclassical economic theory, the result is a model so abstracted and simplistic that no amount of empirical input – not even the near-collapse of the global economy – can make much impression on the ramparts of its ivory towers.

I happen to believe that many sciences, from biology to sociology, can in fact benefit from physics-based ideas. But placing physics at the top of the tree doesn’t help, because it blurs the view of where “physics thinking” is and isn’t appropriate. And presenting science in terms of “consensus deficit” is not just misguided but potentially dangerous. A quest for consensus tacitly accepts Comte’s assumption that all questions can be given a single, scientifically based answer. But many cannot, not just in the humanities but also in history, politics, ethics, the social sciences, economics and beyond. Even in the so-called ‘hard’ sciences, the value of having complementary but not entirely compatible models is under-rated. For some questions about humanity, we may be better served by a diversity of views – including old ones – than by a doomed dream of consensus.

1. D. Fanelli & W. Glänzel, PLoS ONE 8, e66938 (2013).

The transformation of Paris


Here is my previous Under the Radar story for BBC Future – the next should be along very shortly. There’s a bigger story to this stuff – I hope I might get to tell it some time.

_____________________________________________________________

They are what make Paris so distinctive: the grand, wide boulevards that march in straight lines through the city, lined with bustling cafés and tempting patisseries. But this isn’t how Paris looked at the time of the Revolution in the late eighteenth century. The city is one of the most striking examples of rational urban planning, conducted in the middle of the nineteenth century during the ‘Second Empire’ of Napoleon III to ease congestion in the dense network of medieval streets.

It’s not hard to see how the redesign, conducted by Baron Georges-Eugène Haussmann at the emperor’s command, transformed Parisian life. You only have to compare the cityscape today with the narrow, convoluted passageways of the Marais district, one of the few parts of Paris largely untouched by Haussmann’s plans. But what exactly did these so-called ‘Second Empire reforms’ do to the properties of the road network? How did they alter the way residents navigated the city? Can we be sure that other changes, whether contemporaneous or subsequent, didn’t have an equally profound impact?

Until recently, such questions could be answered only through the subjective impressions of urban theorists. After all, we can’t make measurements to compare today’s traffic flow with that from the days of Robespierre. But a new study by a collaboration of mathematical physicists and social historians in France shows that, simply by analysing old and new maps of the city, it’s possible to quantify what effect Haussmann’s plans had on the shape and life of Paris. The results offer a case history of how cities may evolve through a combination of spontaneous self-organization and top-down central planning.

Marc Barthelemy of the CEA Institute of Theoretical Physics in the Parisian suburb of Gif-sur-Yvette and his colleagues have analysed maps of the city road network at six moments in time since the Revolution: 1789, 1826, 1836, 1888, 1999 and 2010. They looked at some basic properties of the networks, such as the numbers of nodes (intersections) and edges (roads between nodes), as well as more sophisticated concepts from the modern theory of complex networks, such as the quantity called ‘betweenness centrality’ (BC), which measures the importance of individual nodes to the navigability of the network.
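
For readers who want a feel for that measure: betweenness centrality counts the fraction of shortest paths between all pairs of nodes that pass through a given node. A minimal sketch using the Python networkx library – the toy graph and its lengths are invented for illustration; the study itself worked from digitised historical maps – might look like this:

```python
import networkx as nx

# Toy street network: nodes are intersections, edges are road segments
# weighted by length in metres (all values invented for illustration).
G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 120), ("B", "C", 80), ("C", "D", 150),
    ("A", "E", 200), ("E", "C", 90), ("B", "E", 60),
], weight="length")

# Betweenness centrality: the fraction of shortest paths between all node
# pairs that pass through each node; high-BC nodes are the junctions that
# most routes funnel through.
bc = nx.betweenness_centrality(G, weight="length")

# Rank the intersections most critical to navigation, analogous to the
# high-BC maps plotted for each historical snapshot.
for node, score in sorted(bc.items(), key=lambda kv: -kv[1]):
    print(node, round(score, 3))
```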

The results are revealing. Whether or not Haussmann made a difference depends on what you look at. For example, between 1836 (before the changes) and 1888 (when they were essentially complete), the total number of nodes and the total length of the roads both increased very sharply – more or less doubling – while changing rather little thereafter. You might say that Haussmann added a lot of ways of getting from place to place. But this growth is mirrored by a steep rise in the city’s population, suggesting that this factor, rather than planning in itself, drove the increases: they might have happened anyway, albeit not necessarily in the same way.

What’s more, changes in the average BC values of the network also suggest that there was nothing unusual about the Haussmann developments, compared to what came before and after. Rather, the web of streets just got steadily denser, as has been found for some other cities.

A quite different picture emerged, however, when the researchers looked at the spatial patterns of change. When they plotted maps of the nodes with the largest BC values – the intersections that are most important for finding a shortcut between any two other nodes – the results look quite different up to 1836 and after 1888. In the earlier period, most of the high-BC nodes are clustered around the city centre, although between 1826 and 1836 an important traffic channel opened up in the Saint Martin region in the east of Paris, where several large properties owned by the church or aristocrats were sold and divided up to create new houses and roads.

But after Haussmann, the high-BC nodes form a more open, widely spaced system of key channels, somewhat like the vein network of a leaf. In other words, Haussmann’s avenues and boulevards helped to prevent routes becoming funnelled through the congested city centre, and gave Paris space to breathe.

The new roads also altered the typical shape of blocks. It’s been found previously that the roads of many cities tend to intersect at right angles, dividing up the space ever more finely into square or rectangular blocks, a bit like the crack networks of ceramic glazes. That’s what Paris looked like before the 1850s. But the new boulevards sliced boldly through this grid, creating a wider variety of block shapes, especially triangles and elongated rectangles.
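
One crude way to put numbers on block shape – my own proxy for the kind of measure used in such studies, not the paper’s exact definition – is to compare each block’s area with that of a circle spanning its longest diagonal, so that compact squares score high and elongated slivers score low:

```python
import math

def shape_factor(vertices):
    """Ratio of a polygonal block's area to the area of a circle whose
    diameter is the block's longest vertex-to-vertex span. Compact blocks
    score around 0.6; slivers and elongated shapes score much lower."""
    # Shoelace formula for the polygon's area
    area = 0.5 * abs(sum(x1 * y2 - x2 * y1
                         for (x1, y1), (x2, y2) in zip(vertices, vertices[1:] + vertices[:1])))
    diameter = max(math.dist(p, q) for p in vertices for q in vertices)
    return area / (math.pi * (diameter / 2) ** 2)

# Invented block outlines, purely for illustration
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
sliver = [(0, 0), (4, 0), (4, 0.3), (0, 0.3)]
triangle = [(0, 0), (2, 0), (1, 1)]
for name, block in [("square", square), ("sliver", sliver), ("triangle", triangle)]:
    print(name, round(shape_factor(block), 2))
```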

So whether the Second Empire reforms transformed the face of Paris is a subtle question. Some of the changes over the nineteenth century, such as higher street density and increase in intersections, might have happened anyway thanks to the growth in population. In other ways, Haussmann stamped a ‘non-natural’ geometry on the city’s evolving network. Although Haussmann’s plans were criticized both at the time and by later architects, it looks as though they did a pretty good job, making the city centre less congested in a way that Parisians still benefit from today. London, in contrast, missed its chance: the grand new streets proposed by Christopher Wren after the Great Fire in 1666 weren’t built in time to prevent the city’s natural, spontaneous evolution from reasserting itself. All the same, using the tools that Barthelemy and colleagues have developed, it might now be possible to probe Haussmann’s scheme more closely – to ask, for example, how close it came to finding the very best solution to the problems it tackled.

Reference: M. Barthelemy, P. Bordin, H. Berestycki & M. Gribaudi, Sci. Rep. 3, 2153 (2013).