Saturday, February 29, 2020

How you hear the words of songs

This is my latest column for the Italian science magazine Sapere.

________________________________________________________________



The distinctions between song and spoken word have always been fuzzy. There’s a musicality of rhythm and rhyme in poetry, and some researchers think the origins of song merge with those of verse in oral traditions for passing on stories and knowledge. Many musical stylings lie on the continuum between melodic singing and spoken recitation, from the quasi-melodic recitative of traditional opera and the almost pitchless Sprechstimme technique introduced by Schoenberg and Berg in works such as Pierrot Lunaire and Lulu, to the Beat poetics of Tom Waits and the rapid-fire wordplay of rap.

It’s also well established that the cognitive processing of music and that of language share resources in the brain. For example, the same distinctive pattern of activity appears in the language-processing region called Broca’s area whether we hear a violation of linguistic syntax or a violation of the ‘normal’ rules of chord progression. Yet the brain appears to use quite different regions to decode speech and sung melody: to a large extent, it categorizes them as different kinds of auditory input and analyses them in different ways.

To a first approximation, speech is mostly processed in the left hemisphere of the brain, while melody is sent to the right hemisphere. Philippe Albouy and colleagues, working in the lab of leading music cognitive scientist Robert Zatorre at McGill University in Montreal, have now figured out how that processing differs in detail: what the brain seems to be looking for in each case. They asked a professional composer to generate ten new melodies, to each of which they set ten sentences, creating a total of 100 “songs” that a professional singer then recorded unaccompanied.

They played these recordings to 49 participants while altering the sound to degrade its information. In some cases they scrambled details of timing, so that the words sounded slurred or indistinct. In others they filtered the sound to alter its frequency content (its spectrum), giving the songs a robotic, “metallic” quality. Participants heard an untreated song followed by the pair of altered versions, and were asked to focus on either the words or the melody.
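To get a feel for what such manipulations involve, here is a minimal sketch in Python of one way to selectively blur either the timing or the frequency content of a recording. It is only an illustration under my own assumptions, not the modulation-filtering algorithm the researchers actually used; the file name song.wav and the smoothing width are placeholders.

```python
import numpy as np
from scipy.io import wavfile
from scipy.ndimage import uniform_filter1d
from scipy.signal import stft, istft

def degrade(audio, rate, axis, width=15):
    """Blur the magnitude spectrogram along one axis and resynthesise.

    axis=1 smears across time frames (temporal detail, e.g. word boundaries, is lost);
    axis=0 smears across frequency bins (spectral detail, e.g. pitch, is lost).
    """
    # Short-time Fourier transform: rows are frequency bins, columns are time frames.
    freqs, times, Z = stft(audio, fs=rate, nperseg=1024)
    magnitude, phase = np.abs(Z), np.angle(Z)
    blurred = uniform_filter1d(magnitude, size=width, axis=axis)
    _, resynth = istft(blurred * np.exp(1j * phase), fs=rate, nperseg=1024)
    return resynth

# Assumed: a mono, unaccompanied recording of one of the songs.
rate, song = wavfile.read("song.wav")
song = song.astype(float)
temporally_degraded = degrade(song, rate, axis=1)   # melody survives, words blur
spectrally_degraded = degrade(song, rate, axis=0)   # words survive, tune blurs
```

Smearing along the time axis leaves the pitch content of each moment roughly intact, which is why a melody can survive it; smearing across frequency preserves the timing of events but washes out the pitch information that carries the tune.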

When the timing details were degraded, the melodies remained recognizable but the words did not. With the spectral manipulation, the reverse was true: people could make out the words but not the tune. So it seems that the speech-processing brain relies on temporal cues to decode the sound, whereas for melody it is the spectral content that matters more. Using functional MRI to image the brain, Albouy and colleagues confirmed that the two kinds of degradation altered activity in the left and right auditory cortex respectively.

Importantly, this doesn’t mean that the brain sends the signal one way for song and the other for speech. Both sides are working together in both cases, for otherwise we couldn’t make out the lyrics of songs or the prosody – the meaningful rise and fall in pitch – of speech. The musical brain is integrated, but delegates roles according to need.