Friday, November 30, 2012
Massive organ
I know, this is what Facebook was invented for. But I haven't got my head round that yet, so here it is anyway. It will be a Big Noise. Café Oto is apparently the place to go in London for experimental music, and there's none more experimental than this. Andy Saunders of Towering Inferno has put it together. Who? Look up their stunning album Kaddish: as Wiki has it, "It reflects on The Holocaust and includes East European folk singing [the peerless Márta Sebestyén], Rabbinical chants, klezmer fiddling, sampled voices (including Hitler's), heavy metal guitar and industrial synthesizer. Brian Eno described it as "the most frightening record I have ever heard"." Come on!
The American invasion
I have a little muse on the Royal Society Winton Science Book Prize on the Prospect blog. Here it is. It was a fun event, and great to see that all the US big shots came over for it. My review of Gleick’s book is here.
_____________________________________________________________
Having reviewed a book favourably tends to leave one with proprietary feelings towards it, which is why I was delighted to see James Gleick’s elegant The Information (Fourth Estate) win the Royal Society Winton Science Book Prize last night. Admittedly, Gleick is not an author who particularly needs this sort of accolade to guarantee good sales, but neither did most of the other contenders, who included Steven Pinker, Brian Greene and Joshua Foer. Pinker’s entry, The Better Angels of Our Nature (Penguin), was widely expected to win, and indeed it is the sort of book that should: bold, provocative and original. But Gleick probably stole the lead with his glorious prose, scarcely done justice by the judging panel’s description as having “verve and fizz”. For that, go to Foer.
Gleick has enjoyed international acclaim ever since his first book in 1987, Chaos, which introduced the world to the ‘butterfly effect’ – now as much of a catchphrase for our unpredictable future as Malcolm Gladwell’s ‘tipping point’. But in between then and now, Gleick’s style has moved away from the genre-fiction potted portraits of scientists (“a tall, angular, and sandy-haired Texas native”, “a dapper, black-haired Californian transplanted from Argentina”), which soon became a cliché in the hands of lesser writers, and has matured into something approaching the magisterial.
And might that, perhaps, explain why five of the six finalists for this year’s prize were American? (The sixth, Lone Frank, is a Danish science writer, but sounds as though she learnt her flawless English on the other side of the pond.) There have been American winners before, Greene among them, but most (including the past four) have been British. Maybe one should not read too much into this American conquest – it just so happened that three of the biggest US hitters, as well as one new Wunderkind, had books out last year. But might the American style be better geared to the literary prize?
There surely is an American style: US non-fiction (not just in science writing) differs from British, just as British does from continental European. (Non-British Europeans have been rare indeed in the science book shortlists.) American writers do grandeur well, in comparison to which even our popular-science grandees, such as Richard Dawkins, Steve Jones and Lewis Wolpert, seem like quiet, diligent academics. The grand style can easily tip into bombast, but when it works it is hard to resist. Just reading the list of winners of the Pulitzer Prize for Non-Fiction makes one feel exhausted – no room here for the occasional quirkiness of the Samuel Johnson.
This year’s science book prize shortlist was irreproachable – indeed, one of the strongest for years. But it will be interesting to see whether, in this straitened time for writers, only the big and bold will survive.
Tuesday, November 27, 2012
The universal reader
This is the pre-edited version of my latest, necessarily much-curtailed news story for Nature.
_____________________________________________________________
New study suggests the brain circuits involved in reading are the same the world over
For Westerners used to an alphabetic writing system, learning to read Chinese characters can feel as though it is calling on wholly new mental resources. But it isn’t, according to a new study that uses functional magnetic-resonance imaging (fMRI) to examine people’s brain activity while they read. The results suggest that the neural apparatus involved in reading might be common to all cultures, despite their very different writing systems, and that culture simply fine-tunes this.
Stanislas Dehaene of the National Institute of Health and Medical Research in Gif-sur-Yvette, France, and his coworkers say that reading involves two neural subsystems: one that recognizes the shape of the words on the page, and the other that decodes the physical motor gestures used to make the marks.
In their tests of French and Chinese subjects, they found that both groups use both systems while reading their native language, but with different emphases that reflect the different systems of writing. They describe their findings today in the Proceedings of the National Academy of Sciences USA [1].
“Rather than focusing on ear and eye in reading, the authors rightly point out that hand and eye are critical players”, says Uta Frith, a cognitive neuroscientist at University College London. “This could lead into novel directions – for instance, it might provide answers why many dyslexics also have very poor handwriting and not just poor spelling.”
Understanding how the brain decodes symbols during reading might not only offer clues to the origin of learning impairments such as dyslexia, but also inform learning strategies for general literacy and how these might be attuned to children or adults.
It has been unclear whether the brain networks responsible for reading are universal or culturally distinct. Some previous studies have suggested that alphabetic (such as French) and logographic (such as Chinese, where single characters represent entire words) writing systems might engage different networks.
There is evidence that all cultures use a shape-recognition region in the brain’s posterior left hemisphere, including in particular a so-called visual word form area (VWFA). But some research has implied that Chinese readers also use other brain networks that are unimportant for Western readers – perhaps because the Chinese logographic system places great emphasis on the order and direction of the strokes that make up a character, thereby engaging a ‘motor memory’ for writing gestures.
Dehaene and colleagues suspected that such motor aspects of reading are universal. Some educators have long advocated this: the Montessori method, for example, uses sandpaper letters that children can trace with their fingers to reinforce the gestural aspects of letter recognition. Motor processing is evidently universal for writing, involving a brain region known as Exner’s area, and the researchers postulated that this is activated in reading too, to interpret the gestures assumed to have gone into making the marks.
To examine what the brain is up to during reading, Dehaene and colleagues used fMRI to monitor brain activity in French and Chinese subjects reading words and characters in their own language in cursive script. They asked the subjects to recognize the words and recorded their response times.
However, unbeknown to the subjects, their responses were being manipulated in subtle ways by a process called ‘priming’. Before the word itself was presented on a screen, the subjects saw other words or symbols flashed up for just 50 milliseconds – too short a time, in general, for them to be registered consciously.
These subliminal images prepared the brain for the target word. If one of them was identical to the target word itself, subjects recognized the true target more quickly. The ‘masked’ images could also show ‘nonsense’ words written with the strokes progressing in the usual (forward) direction, or as the reverse (backward) of the usual gestural direction. Moreover, the targets could be shown either as static images or dynamically unfolding as though being written – both forwards and backwards. Finally, the target could also be distorted, for example with the letters unnaturally bunched up or the strokes slightly displaced.
The researchers used these manipulations both to match the amount of stimulus given to the subjects for the very different scripts of French and Chinese, and to try to isolate the different brain functions involved in reading. For example, spatial distortion of characters disrupts the VWFA involved in shape recognition, while words that appear dynamically stimulate Exner’s area (the motor network); this network gets thrown, however, if the words seem to be written with backwards gestures. In each case, such disruptions slow the response time.
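For readers who like to see a design laid out, here is a minimal sketch in Python of how such a condition grid translates into predicted response times. The direction of each effect follows the logic described above, but the effect sizes in milliseconds are invented for illustration; the real magnitudes are in the paper.

```python
import itertools, random

# Hypothetical condition grid loosely modelled on the study's logic.
# Effect sizes (in milliseconds) are invented for illustration only.
BASE_RT = 600.0  # baseline response time, ms

EFFECTS = {
    "distorted": 40.0,        # spatial distortion disrupts the VWFA (shape recognition)
    "backward": 30.0,         # reversed stroke order disrupts Exner's area (gesture decoding)
    "identity_prime": -50.0,  # a prime identical to the target speeds recognition
}

def simulated_rt(distorted: bool, backward: bool, identity_prime: bool) -> float:
    """Return a noisy response time for one trial under the given manipulations."""
    rt = BASE_RT
    if distorted:
        rt += EFFECTS["distorted"]
    if backward:
        rt += EFFECTS["backward"]
    if identity_prime:
        rt += EFFECTS["identity_prime"]
    return rt + random.gauss(0, 20)  # trial-to-trial noise

# Average over many simulated trials for each cell of the condition grid
for distorted, backward, prime in itertools.product([False, True], repeat=3):
    rts = [simulated_rt(distorted, backward, prime) for _ in range(1000)]
    print(f"distorted={distorted!s:5} backward={backward!s:5} "
          f"identity_prime={prime!s:5} mean RT = {sum(rts)/len(rts):.0f} ms")
```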
Dehaene and colleagues found that the same neural networks – the VWFA and Exner’s area – were indeed activated in both French and Chinese subjects, and could be isolated using the different priming schemes. But there were cultural differences too: for example, static distortion of the target slowed down recognition for the French subjects more than the Chinese, while the effects of gestural direction were stronger for the Chinese.
The researchers suspect that the gestural system probably plays a stronger role while the VWFA has not fully matured – that is, in young children, supporting the idea that reinforcement via the motor system can assist reading. “So far the motor decoding side has been rather neglected in reading education,” says Frith.
“It is conceivable that you find individuals where one system is functioning much better than the other”, she adds. “This may be a source of reading problems not yet explored. In the past I have studied people who can read very well but who can't spell. Perhaps the spelling aspect is more dependent on kinetic memories?”
However, psycholinguist Li-Hai Tan at the University of Hong Kong questions how far these results can be generalized to non-cursive printed text. “Previous studies using printed non-cursive alphabetic words in general have not reported activity in the gesture recognition system of the brain”, he says. “However, this gesture system has been found in fMRI studies with non-cursive Chinese characters. The motor system plays an important role in Chinese children's memory of characters, whether cursive or not.”
The universality of the ‘reading network’, say Dehaene and colleagues, also supports suggestions that culturally specific activities do not engage new parts of the brain but merely fine-tune pre-existing circuits. “Reading thus gets a free ride on ancient brain systems, and some reading systems are more user-friendly for the brain”, says Frith.
Reference
1. Nakamura, K. et al., Proc. Natl Acad. Sci. USA doi:10.1073/pnas.1217749109 (2012).
Monday, November 26, 2012
Faking Moby's fragrance
Here’s my latest piece for the BBC’s Future site. God, it is nice to have the luxury of indulging in some nice context without having to get to the news in the first breath. Indeed, it’s part of the thesis of this column that context can be key to the interest of a piece of work.
___________________________________________________________________
Smelling, as the New York Times put it in 1895, “like the blending of new-mown hay, the damp woodsy fragrance of a fern-copse, and the faintest possible perfume of the violet”, the aromatic allure of ambergris is not hard to understand. In the Middle East it is an aphrodisiac, in China a culinary delicacy. King Charles II is said to have delighted in dining on it mixed with eggs. Around the world it has been a rare and precious substance, a medicine and, most of all, a component of musky perfumes.
You’d never think it started as whale faeces, and smelled like it too. As Herman Melville said in that compendium of all things cetacean, Moby Dick, it is ironic that “fine ladies and gentlemen should regale themselves with an essence found in the inglorious bowels of a sick whale”.
But vats of genetically modified bacteria could one day be producing the expensive chemical craved by the perfume industry for woody, ambergris-like scents, if research reported by biochemists at the Swiss fragrance and flavourings company Firmenich in Geneva comes to fruition. Their results are another demonstration that rare and valuable complex chemicals, including drugs and fuels, can be produced by sophisticated genetic engineering methods that convert bacteria into microscopic manufacturing plants.
Made from the indigestible parts of squid eaten by sperm whales, and usually released only when the poor whale dies from a blocked and ruptured intestine and has been picked apart by the sea’s scavengers, ambergris matures as it floats in the brine from a tarry black dung to a dense, pungent grey substance with the texture of soft, waxy stone.
Because ambergris needs this period of maturation in the open air, it couldn’t be harvested from live sperm whales even in the days when hunting was sanctioned. It could be found occasionally in whale carcasses – in Moby Dick the Pequod’s crew trick a French whaler into abandoning a whale corpse so that they can capture its ambergris. But most finds are fortuitous, and large pieces of ambergris washed ashore can be worth many thousands of dollars.
The perfume industry has long accepted that it can’t rely on such a scarce, sporadic resource, and so it has found alternatives to ambergris that smell similar. One of the most successful is a chemical compound called Ambrox, devised by Firmenich’s fragrance chemists in the 1950s and featured, I am told, in Dolce & Gabbana’s perfume Light Blue. One perfume website describes it, with characteristically baffling hyperbole, as follows: “You're hit with something that smells warm, oddly mineral and sweetly inviting, yet it doesn't exactly smell like a perfumery or even culinary material. It's perfectly abstract, approximating a person's aura rather than a specific component”.
To make Ambrox, chemists start with a compound called sclareol, named after the southern European herb Salvia sclarea (Clary sage) from which it is extracted. In other words, to mimic a sperm whale’s musky ambergris, you start with an extract of sage. This is par for the course in the baffling world of human olfaction. Although in this case Ambrox has a very similar structure to the main smelly molecules in ambergris, that doesn’t always have to be so: two odorant molecules can smell almost identical while having very different molecular structures (they are all generally based on frameworks of carbon atoms linked into rings and chains). That’s true, for example, of two other ambergris-like odorants called timberol and cedramber. Equally, two molecules that are almost identical, even mirror images of one another, can have very different odours. Quite how such molecules elicit a smell when they bind to the proteins in the olfactory membrane of the nasal cavity is still not understood.
Clary sage is easier to get hold of than ambergris, but even so the herb contains only tiny amounts of sclareol, and it is laborious to extract and purify. That’s why Firmenich’s Michel Schalk and his colleagues wanted to see if they could take the sclareol-producing genes from the herb and put them in the gut bacterium Escherichia coli, the ubiquitous single-celled workhorse of the biotechnology industry whose fermentation for industrial purposes is a well-developed art.
Sclareol belongs to a class of organic compounds called terpenes, many of which are strong-smelling and are key components of the essential-oil extracts of plants. Sclareol contains two rings of six carbon atoms each, formed when enzymes called diterpene synthases stitch together parts of a long chain of carbon atoms. The Firmenich researchers show that the formation of sclareol is catalysed in two successive steps by two different enzymes.
Schalk and colleagues extracted and identified the genes that encode these enzymes, and transplanted them into E. coli. That alone, however, doesn’t necessarily make the bacteria capable of producing lots of sclareol. For one thing, the bacteria also have to be able to make the long-chained starting compound, which can be achieved by adding yet another gene from a different species of bacteria that happens to produce the stuff naturally.
More challengingly, all of the enzymes have to work in synch, which means giving them genetic switches to regulate their activity. This approach – making sure that the components of a genetic circuit work together like the parts of a machine to produce the desired chemical product – is known as metabolic engineering. This is one level up from genetic engineering, tailoring microorganisms to carry out much more demanding tasks than those possible by simply adding a single gene. It has already been used for bacterial production of other important natural compounds, such as the anti-malarial drug artemisinin.
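To see why the synchronization matters, consider a toy kinetic model of a two-step pathway: if the second enzyme is much slower than the first, the intermediate piles up instead of becoming product. This Python sketch uses made-up Michaelis-Menten parameters, not anything from the Firmenich paper.

```python
# Toy model of a two-step enzymatic pathway S -> I -> P, to illustrate why
# the two enzymes must be balanced. All rate parameters are invented.

def simulate(vmax1, vmax2, km1=1.0, km2=1.0, s0=10.0, dt=0.01, t_end=50.0):
    """Euler integration of Michaelis-Menten kinetics for S -> I -> P."""
    s, i, p = s0, 0.0, 0.0
    peak_i, t = 0.0, 0.0
    while t < t_end:
        r1 = vmax1 * s / (km1 + s)   # enzyme 1: substrate -> intermediate
        r2 = vmax2 * i / (km2 + i)   # enzyme 2: intermediate -> product
        s += -r1 * dt
        i += (r1 - r2) * dt
        p += r2 * dt
        peak_i = max(peak_i, i)
        t += dt
    return p, peak_i

# Balanced enzymes versus a sluggish second step
for v1, v2 in [(1.0, 1.0), (1.0, 0.1)]:
    product, peak = simulate(v1, v2)
    print(f"vmax1={v1}, vmax2={v2}: product={product:.2f}, "
          f"peak intermediate={peak:.2f}")
```

With a balanced pair the intermediate stays low and most of the substrate ends up as product; starve the second step and the intermediate accumulates, which is exactly the situation the regulatory 'genetic switches' are there to avoid.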
With this approach, the Firmenich team was able to create an E. coli strain that could turn cheap, abundant glycerol into significant quantities (about 1.5 grams per litre) of sclareol. So far this has just been done at a small scale in the lab. If it can be scaled up, you might get to smell expensively musky without the expense. Or at least, you would if price did not, in the perfume business, stand for an awful lot more than mere production costs.
Reference: M. Schalk et al., Journal of the American Chemical Society doi:10.1021/ja307404u (2012).
Saturday, November 17, 2012
Pseudohistory of science
I have just seen that my article for Aeon, the new online “magazine of ideas and culture”, has been live for some time. This magazine seems a very interesting venture; I hope it thrives. My article changed rather little in editing and is freely available, so I’ll just give the link. All that was lost was some examples at the beginning of scientists being rude about other disciplines: Richard Dawkins suggesting that theology is not an academic discipline at all, and Stephen Hawking saying that philosophy is dead (never have I seen such profundity being attributed to a boy poking his tongue out).
Tuesday, November 13, 2012
Why dissonance strikes a wrong chord in the brain
Here’s the pre-edited version of my latest news story for Nature. There is a lot more one might say about this, in terms of what it does or doesn’t say about our preferences for consonance/dissonance. At face value, the work could be interpreted as implying that there is something ‘natural’ about a preference for consonance. But the authors say that the issue of innateness simply isn’t addressed here, and they suspect learning plays a big role. After all, it seems that children don’t express preferences for consonant chords until the age of 8 or 9 (earlier if they have musical training). The experiments which report such preferences in babies remain controversial.
Besides, one would need to test such things in non-Western contexts. McDermott agrees with Trehub’s comments below, saying “It is true that intervals that are consonant to Western ears are prevalent in some other cultures, but there are also instances where conventionally dissonant intervals are common (e.g. in some Eastern European folk music; moreover, major seconds are fairly common in harmony all over the world). So I think the jury is out as of now. There really is a need for more cross-cultural work.”
And the other big question is how much these preferences are modified when the intervals are encountered in a real musical context. McDermott says this: “We measured responses to chords in isolation, but that is obviously not the whole story. Context can clearly shape the way a chord is evaluated, and whether that can be linked to acoustic phenomena remains to be seen. That is a really interesting issue to look at in the future.” Trehub says that context “makes a HUGE difference. The so-called dissonant intervals don't sound dissonant in musical contexts. They generate a sense of motion or tension, creating expectations that something else will follow, and it invariably does. Musical pieces that are considered consonant have their share of dissonant intervals, which create interest, excitement, expectations, and more.”
_____________________________________________________________________
A common aversion to clashing harmonies may not stem from their grating ‘roughness’
Many people dislike the clashing dissonances of modernist composers such as Arnold Schoenberg. But what’s our problem with dissonance? It’s long been thought that dissonant musical chords contain acoustic frequencies that interfere with one another to set our nerves on edge. A new study proposes that in fact we prefer consonant chords for a different reason, connected to the mathematical relationship between the many different frequencies that make up the sound.
Cognitive neuroscientists Josh McDermott of New York University and Marion Cousineau and Isabelle Peretz of the University of Montreal have evaluated these explanations for preferences about consonance and dissonance by comparing the responses of a normal-hearing control group to those of people who suffer from amusia, an inability to distinguish between different musical tones.
In a paper in the Proceedings of the National Academy of Sciences USA [1] they report that, while both groups had an aversion to the ‘roughness’ – a kind of grating sound – that is created by interference of two acoustic tones differing only slightly in frequency, the amusic subjects had no consistent preferences for any interval (two notes played together a certain distance apart on the keyboard) over any other.
Consonant chords are, roughly speaking, made up of notes that ‘sound good’ together, for example middle C and the G above it (an interval called a fifth). Dissonant chords are combinations that sound jarring, like middle C and the C sharp above (a minor second). The reason why we should like one but not the other has long vexed both musicians and cognitive scientists.
Consonance and dissonance in music have always excited passions, in more ways than one. For one thing, composers use dissonant chords to introduce tension, which may then be relieved by consonant chords – a tension and release that elicits emotional responses from listeners.
It has often been suggested that humans (and perhaps some other animals) have innate preferences for consonance over dissonance, so that music in which dissonance features prominently is violating a natural law and bound to sound bad. Others, including Schoenberg himself, have argued that dissonance is merely a matter of convention, and that we can learn to love it.
The question of whether an aversion to dissonance is innate or learnt has been extensively studied, but remains unanswered. Some have claimed that very young infants prefer consonance, but even then learning can’t be ruled out given that babies can hear in the womb.
However, there has long been thought to be a physiological reason why at least some kinds of dissonance sound jarring. Two tones close in frequency interfere to produce a phenomenon called beating: what we hear is just a single tone rising and falling in loudness. The greater the frequency difference, the faster the beating, and within a certain difference range it becomes a kind of rattle, called acoustic roughness, which sounds unpleasant.
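The beating effect follows from simple trigonometry: the sum of two sine waves at frequencies f1 and f2 equals a tone at the mean frequency whose loudness envelope oscillates at the difference frequency. A quick numerical check in Python:

```python
import numpy as np

# Two pure tones a few hertz apart beat at the difference frequency:
#   sin(2*pi*f1*t) + sin(2*pi*f2*t) = 2 * sin(2*pi*fm*t) * cos(2*pi*(df/2)*t)
# where fm is the mean frequency and df = |f1 - f2|. The cosine factor is the
# slowly varying loudness envelope, and |cos| peaks df times per second.
f1, f2 = 440.0, 444.0            # Hz: concert A, and a tone 4 Hz sharp of it
rate = 44100                     # samples per second
t = np.arange(0, 2.0, 1 / rate)  # two seconds of signal

signal = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
fm, df = (f1 + f2) / 2, abs(f1 - f2)
factored = 2 * np.sin(2 * np.pi * fm * t) * np.cos(2 * np.pi * (df / 2) * t)

print("identity holds:", np.allclose(signal, factored))  # True
print(f"beat frequency: {df:.0f} Hz, i.e. {df * 2:.0f} loudness swells in 2 s")
```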
Evaluating the role of roughness in musical dissonance is complicated by the fact that real tones made by instruments or voices contain many overtones – frequencies that are whole-number multiples of the basic frequency – so that there are many frequency relationships to take into account. All the same, an aversion to beating has seemed consistent with the common dislike of intervals such as minor seconds.
Yet when McDermott and colleagues asked amusic subjects to rate the pleasantness of a whole series of intervals, their responses varied enormously both from person to person and from test to test, such that on average they showed no distinctions between any of the intervals. In contrast, normal-hearing control subjects rated small intervals (minor seconds and major seconds, such as C-D) and large but sub-octave intervals (minor sevenths C-B flat and major sevenths C-B) much lower than the others.
That wasn’t so unexpected – although the near-equal preferences of the control group for mid-span intervals seem odd to Sandra Trehub, an auditory psychologist at the University of Toronto at Mississauga. “The findings from controls don't replicate the usual pattern of preferences”, she says – where, for example, there tends to be a strong preference for octaves and fifths, and an aversion to the tritone (6 semitones, such as C-F sharp). “Hearing impairment, resulting from the need to have age-matched controls, could have influenced the control ratings somewhat”, McDermott admits.
Then the researchers tested how both groups felt about roughness. They found that the amusics could hear this and disliked it about as much as the control groups. So apparently something else was causing the latter to dislike the dissonant intervals.
These preferences seem instead to stem from the so-called harmonicity of consonant intervals. The relationship between overtone frequencies in these intervals is similar to that between the overtones in a single note: they are whole-number multiples. In contrast, the overtones for dissonant intervals don’t have that relationship, but look more like the overtones for sounds that are ‘inharmonic’, such as the notes made by striking metal.
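A rough way to quantify harmonicity is to list the first several overtones of each note in an interval and count how many nearly coincide. For a just perfect fifth (frequency ratio 3:2) many partials line up; for a minor second (about 16:15) almost none do. A sketch in Python, with the matching tolerance chosen arbitrarily:

```python
# Compare the overtone alignment of consonant and dissonant intervals.
# Partials of a harmonic tone sit at whole-number multiples of its fundamental;
# for consonant intervals the two overtone series largely fall onto a common
# harmonic series, while for dissonant intervals they do not.

def shared_partials(ratio, n_partials=10, tolerance=0.02):
    """Count lower-tone partials within `tolerance` (fractional) of an upper-tone partial."""
    partials_a = [k for k in range(1, n_partials + 1)]
    partials_b = [ratio * k for k in range(1, n_partials + 1)]
    return sum(1 for fa in partials_a
               if any(abs(fa - fb) / fa < tolerance for fb in partials_b))

intervals = {
    "octave (2:1)": 2.0,
    "perfect fifth (3:2)": 3 / 2,
    "minor second (16:15)": 16 / 15,
}
for name, ratio in intervals.items():
    print(f"{name}: {shared_partials(ratio)} of 10 lower-tone partials matched")
# octave: 5, perfect fifth: 3, minor second: 0
```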
The control group preferred consonant intervals with these harmonic relationships over artificial ones in which the overtones were subtly shifted to be inharmonic even while the basic tones remained the same. The amusics, meanwhile, registered no difference between the two cases: they seem insensitive to harmonicity.
McDermott and his coworkers have reported previously that harmonicity seems more important than roughness for dissonance aversion in normal hearers [2]. They argue that the lack of sensitivity to both harmonicity and dissonance in amusics now adds to that case.
But Trehub is not so sure. “Most amusics don't like, or are indifferent to, music”, she says, “so it strikes me as odd to examine this population as a way of understanding the basis of consonance and dissonance.”
Peretz, however, points out that amusia doesn’t necessarily rule out musical appreciation. “A few amusics listen a lot to music”, she says.
Diana Deutsch, a music psychologist at the University of California at San Diego, says that the work is “of potential interest for the study of amusia”, but questions whether it adds much to our understanding of normal hearing. In particular she wonders if many of the findings will survive in the context of everyday music listening, where people seem to display contrary preferences. “Rock bands often deliberately introduce roughness and dissonance into their sounds, much to the delight of their audiences”, she says. “And many composers of contemporary Western art music would disagree strongly with the statement that consonant intervals and harmonic tone complexes are more pleasing in general than are dissonant intervals and inharmonic tones.”
Trehub agrees, saying that there are plenty of musical traditions in which both roughness and dissonance are appreciated. “Indonesian gamelan instruments are designed to generate roughness when played together, and that quality is considered appealing. Some folk-singing in rural Croatia and Bosnia-Herzegovina involves two people singing the same melodic line one semitone apart. Then there's jazz, with lots of dissonance. It's hard to imagine a folk tradition based on something that’s inherently negative,” she says.
But McDermott says the results do not necessarily imply that there is anything innate about a preference for harmonicity, and indeed he suspects that learning plays a role. “The amusic subjects likely had less exposure to music than did the control subjects, and this could in principle contribute to some of their deficits”, he says. “So other approaches will be needed to address the innateness issue.”
References
1. Cousineau, M., McDermott, J. H. & Peretz, I. Proc. Natl Acad. Sci. USA doi:10.1073/pnas.1207989109 (2012).
2. McDermott, J. H., Lehr, A. J. & Oxenham, A. J. Curr. Biol. 20, 1035-1041 (2010).
Wednesday, November 07, 2012
Hunting number 113
Here’s the pre-edited form of an article on element 113 that appeared in the October issue of Prospect.
_________________________________________________________________
The periodic table of the elements just got a new member. At least, maybe it did – it’s hard to tell. Having run out of new elements to discover, scientists have over the past several decades been making ‘synthetic’ atoms too bloated to exist in nature. But this is increasingly difficult as the atoms get bigger, and the new element recently claimed by a Japanese group – currently known simply as element 113, its serial order in the periodic table – is frustratingly elusive. These artificial elements are made and detected literally an atom at a time, and the researchers claim only to have made three atoms in total of element 113, all of which undergo radioactive decay almost instantly.
That, and competition from teams in the United States and Russia, makes the claim controversial. The first group to sight a new element enjoys the privilege of naming it, an added spur to the desire to be first. Just as in the golden years of natural-element discovery in the nineteenth century, element-naming tends to be nationalistic and chauvinistic. No one could begrudge Marie and Pierre Curie their polonium, the element they discovered in 1898 after painstakingly sifting tonnes of uranium ore, which they named after Marie’s homeland. But the recent naming of element 114 ‘flerovium’ – after the founder of the Russian institute where it was made – and element 116 ‘livermorium’, after the Lawrence Livermore National Laboratory where it originated, display rather more concern for bragging than for euphony.
Perhaps this is inevitable, given that making new elements began in an atmosphere of torrid, even lethal, international confrontation. The first element heavier than uranium (element number 92, which is where the ‘natural’ periodic table stops) was identified in 1940 at the University of California at Berkeley. This was element 93, christened neptunium by analogy with uranium’s naming after the planet Uranus in 1789. It quickly decays into the next post-uranium element, number 94, the discovery of which was kept secret during wartime. By the time it was announced in 1946, enough had been made to obliterate a city: this was plutonium, the explosive of the Nagasaki atom bomb. The ensuing Cold War race to make new elements was thus much more than a matter of scientific priority.
To make new elements, extra nuclear particles – protons and neutrons – have to be crammed into an already replete nucleus. The sequential numbering of the elements, starting from hydrogen (element 1), is more than just a ranking: this so-called atomic number indicates how many protons there are in the nuclei of the element’s atoms. Differences in proton count are precisely what distinguish one element from another. All elements bar hydrogen also contain neutrons in their nuclei, which bind the protons together. There’s no unique number of neutrons for a given element: different neutron totals correspond to different isotopes of the element, which are all but chemically indistinguishable. If a nucleus has too few or too many neutrons, it is prone to radioactive decay, as is the case, for example, for carbon-14 (six protons, eight neutrons), which provides the basis for radiocarbon dating.
By element 92 (uranium), the nuclei are so swollen with particles that no isotopes can forestall decay. All the same, that process can be very slow: the most common isotope of uranium, uranium-238, has a half-life of about 4.5 billion years, so there’s plenty of it still lying around as uranium ore. Making nuclei more massive than uranium’s involves firing elementary particles at heavy atoms in the hope that some will temporarily stick. That was how Emilio Segrè and Edwin McMillan first made neptunium at Berkeley in 1939, by firing neutrons into uranium. (In the nucleus a neutron can split into a proton, raising the atomic number by 1, and an electron, which is spat out.) McMillan didn’t realise what he’d done until the following year, when chemist Philip Abelson helped him to separate the new element from the debris.
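For the numerically curious, the persistence of uranium follows straight from the standard decay law: the fraction of nuclei surviving after time t is (1/2)^(t/t½). A quick check in Python, using the accepted half-life of uranium-238 and the age of the Earth:

```python
# Exponential decay: the fraction of nuclei surviving after time t is
# 0.5 ** (t / half_life). Uranium-238's half-life (~4.47 billion years) is
# close to the age of the Earth, so roughly half the original stock survives.

U238_HALF_LIFE = 4.468e9   # years
EARTH_AGE = 4.54e9         # years

for label, t in [("one half-life", U238_HALF_LIFE),
                 ("the age of the Earth", EARTH_AGE),
                 ("ten half-lives", 10 * U238_HALF_LIFE)]:
    surviving = 0.5 ** (t / U238_HALF_LIFE)
    print(f"after {label}: {surviving:.1%} of the original U-238 remains")
```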
During the Second World War, both the Allies and the German physicists realised that an atomic bomb could be made from artificial elements 93 or 94, created by neutron bombardment of uranium inside a nuclear reactor. Only the Americans managed it, of course. The Soviet efforts in this direction began at the same time, thanks largely to the work of Georgy Flerov. In 1957 he was appointed head of the Laboratory of Nuclear Reactions, a part of the Joint Institute for Nuclear Research in Dubna, north of Moscow. Dubna has been at the forefront of element-making ever since; in 1967 the lab claimed to have made element 105, now called dubnium.
That claim exemplifies the ill-tempered history of artificial elements. It was disputed by the rival team at Berkeley, who made 105 in the same year and argued furiously over naming rights. The Soviets wanted, magnanimously but awkwardly, to call it nielsbohrium, after Danish physicist Niels Bohr. The Americans preferred hahnium, after the German nuclear chemist Otto Hahn. Both dug in their heels until the International Union of Pure and Applied Chemistry (IUPAC), the authority on chemical nomenclature, stepped in to resolve the mess in the 1990s. Finally the Russian priority was acknowledged in the name, which after all was a small riposte to the earlier American triumphalism of americium (element 95), berkelium (element 97) and californium (98).
These ‘superheavy’ elements, with atomic numbers reaching into triple figures, are generally made now not by piecemeal addition to uranium but by trying to merge together two smaller but substantial nuclei. One – typically zinc or nickel – is converted into electrically charged ions by having electrons removed, and then accelerated in an electric field to immense energy before crashing into a target made of an element like lead. This is the method used by the laboratory in Darmstadt that, since the 1980s, has outpaced both the Americans and the Russians in synthesizing new elements. Called the Institute for Heavy Ion Research (GSI), it has claimed priority for all the elements from 107 to 112, and their names reflect this: element 108 is hassium, after the state of Hesse, and element 110 is darmstadtium. But this crowing is a little less strident now: many elements have instead been named after scientists who pioneered elemental and nuclear studies: bohrium, mendelevium (after the periodic table’s discoverer Dmitri Mendeleyev), meitnerium (after Lise Meitner), rutherfordium (Ernest Rutherford). In 2010 IUPAC approved the GSI team’s proposal for element 112, copernicium, even though Copernicus is not known ever to have set foot in an (al)chemical lab.
If, then, we already have elements 114 and 116, why the fuss over 113? Although the elements get harder to make as they get bigger, the progression isn’t necessarily smooth: some combinations of protons and neutrons are (a little) easier to assemble than others. Efforts to make 113 have been underway at least since 2003, when a group at the Nishina Center for Accelerator-based Science in Saitama, near Tokyo, began firing zinc ions at bismuth. The Japanese centre, run by the governmental research organization RIKEN, was a relative newcomer to element-making, but it claimed success just a year later. It’s precisely because they are unstable that these new elements can be detected with such sensitivity: the radioactive decay of a single atom sends out particles – generally an alpha particle – that can be spotted by detectors. Each atom initiates a whole chain of decays into successive elements, and the energies and the release times of the radioactive particles are characteristic ‘fingerprints’ that allow the decay chain – and the elements within it – to be identified.
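In outline, identifying a chain means matching a sequence of alpha-particle energies and waiting times against expected values. Here is a toy Monte Carlo sketch of the idea in Python; the chain of elements (each alpha decay lowers the atomic number by two) is real, but the half-lives and energies are invented placeholders, not the measured values for element 113's chain.

```python
import random

# Toy simulation of an alpha-decay chain. Each step emits an alpha particle
# with a characteristic energy after an exponentially distributed waiting
# time. Half-lives and energies are invented placeholders, NOT the measured
# values; only the sequence of elements (Z falling by 2 per alpha) is real.
CHAIN = [
    # (daughter element, half-life in seconds, alpha energy in MeV)
    ("element 111", 0.002, 11.7),
    ("element 109", 0.09, 11.1),
    ("element 107", 1.3, 10.0),
    ("element 105", 40.0, 9.1),
]

def observe_chain():
    """Generate one simulated decay chain as (label, waiting time, energy) events."""
    events = []
    for label, half_life, energy in CHAIN:
        mean_life = half_life / 0.693147          # tau = t_half / ln(2)
        wait = random.expovariate(1 / mean_life)  # exponential waiting time
        events.append((label, wait, energy))
    return events

for label, wait, energy in observe_chain():
    print(f"alpha of {energy:.1f} MeV after {wait:.4f} s  ->  {label}")
```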
At least, that’s the theory. In practice the decay events must be spotted amidst a welter of nuclear break-ups from other radioactive elements made by the ion collisions. And with so many possible isotopes of these superheavy elements, the decay properties of which are often poorly known, there’s lots of scope for phantoms and false trails – not to mention outright fraud (Bulgarian nuclear scientist Victor Ninov, who worked at Berkeley and GSI, was found guilty of fabricating evidence for the claimed discovery of element 118 at Berkeley in 2001). When you consider the figures, some scepticism is understandable: the Japanese team estimated that only 3-6 out of every 100 quintillion (10^20) zinc ions would produce an atom of 113.
Last year, IUPAC representatives decided the Japanese results weren’t conclusive. But neither were they persuaded by subsequent claims of scientists at Dubna and Berkeley, who have begun collaborating after decades of bitter rivalry. However, on 26 September the RIKEN team released new data that make a stronger case. The team leader Kosuke Morita attests that he is “really confident” they have element 113 pinned. Again they’ve only a single decay chain to adduce – starting from a single atom of 113 – but some experts now find the case convincing. If so, it looks like the name game will get solipsistic again: rikenium and japonium are in the offing.
Given how hard it is to make this stuff, why bother? Plutonium isn’t the only artificial element to find a use: for example, minute amounts of americium are used in some smoke detectors. Yet as the superheavies get ever heavier and less stable, typically decaying in a fraction of a second, it’s harder to imagine how they could be of technological value. But according to calculations, some isotopes of element 114 and nearby elements should be especially stable, with half-lives of perhaps several days, years, even millennia. If that’s true, these superheavies could be gradually accumulated atom by atom. But other estimates say this ‘island of stability’ won’t appear until element 126; some suspect it may not really exist at all.
There are another, more fundamental motivations for making new elements. They test to destruction the current theories of nuclear physics: it’s still not fully understood what the properties of these massive nuclei are, although they are expected to do weird things, such as take on very deformed, non-spherical shapes.
Artificial elements also pose a challenge to the periodic table itself, chemistry’s organizing scheme. It’s periodic because, as Mendeleyev and others realised, similar chemical properties keep reappearing as the elements’ atomic numbers increase: the halogens chlorine (element 17), bromine (35) and iodine (53) all form the same kinds of chemical compounds, for example. That’s because atoms’ electrons – negatively charged particles that govern chemical behaviour – are arranged in successive shells, and the arrangements for elements in the same column of the periodic table are analogous: all the halogens are one electron short of a filled outermost shell.
But a very massive nucleus starts to undermine this tidy progression of electron shells. The electrons closest to the nucleus feel the very strong electric field of that mass of protons, which makes them very energetic – they circulate around the nucleus at speeds approaching the speed of light. Then they feel the effects of special relativity: as Einstein predicted, particles moving that fast gain mass. This alters the electrons’ energies, with knock-on effects in the outer shells, so that the outermost electrons that determine the atom’s chemical behaviour don’t observe the periodic sequence. The periodic table then loses its rhythm, as such elements deviate from the properties of those with which it shares a column – it might form a different number of chemical bonds, say. Some anomalous properties of natural heavy elements are caused by these “relativistic” effects. They alter the electron energies in gold so that it absorbs blue light, accounting for the yellow tint of the light it reflects. And they weaken the chemical bonds between mercury atoms, giving the metal its low melting point.
Relativistic deviancy is expected for at least some superheavies. To look for it, researchers have to accomplish extraordinarily adroit chemistry: to figure out from just a handful of atoms, each surviving for perhaps seconds to minutes, how the element reacts with others. This could, for example, mean examining whether a particular chemical compound is unusually volatile or insoluble. The teams at GSI, Dubna and Berkeley have perfected methods of highly sensitive, quick-fire chemical analysis to separate, purify and detect their precious few exotic atoms. That’s enabled them to establish that rutherfordium (element 104) and dubnium buck the trends of the periodic table, whereas seaborgium (106) does not.
As they enter the artificial depths of the periodic table, none of these researchers knows what they will find. The Dubna group claims to have been making element 115 since 2003, but IUPAC has not yet validated the discovery. They are on firmer grounds with 117 and 118, which are yet to be named, and both GSI and the RIKEN team are now hunting 119 and 120.
Is there any limit to it? Richard Feynman once made a back-of-the-envelope calculation showing that nuclei can no longer hold onto electrons beyond an atomic number of 137. More detailed studies, however, shows that to be untrue, and some nuclear scientists are confident there is no theoretical limit on nuclear size. Perhaps the question is whether we can think up enough names for them all.
_________________________________________________________________
The periodic table of the elements just got a new member. At least, maybe it did – it’s hard to tell. Having run out of new elements to discover, scientists have over the past several decades been making ‘synthetic’ atoms too bloated to exist in nature. But this gets ever more difficult as the atoms get bigger, and the new element recently claimed by a Japanese group – currently known simply as element 113, its serial order in the periodic table – is frustratingly elusive. These artificial elements are made and detected literally an atom at a time, and the researchers claim to have made just three atoms of element 113 in total, each of which underwent radioactive decay almost instantly.
That, and competition from teams in the United States and Russia, makes the claim controversial. The first group to sight a new element enjoys the privilege of naming it – an added spur to the desire to be first. Just as in the golden years of natural-element discovery in the nineteenth century, element-naming tends to be nationalistic and chauvinistic. No one could begrudge Marie and Pierre Curie their polonium, the element they discovered in 1898 after painstakingly sifting tonnes of uranium ore, which they named after Marie’s homeland. But the recent naming of element 114 as ‘flerovium’ – after the founder of the Russian institute where it was made – and of element 116 as ‘livermorium’ – after the Lawrence Livermore National Laboratory where it originated – displays rather more concern for bragging rights than for euphony.
Perhaps this is inevitable, given that making new elements began in an atmosphere of torrid, even lethal, international confrontation. The first element heavier than uranium (element number 92, which is where the ‘natural’ periodic table stops) was identified in 1940 at the University of California at Berkeley. This was element 93, christened neptunium by analogy with uranium’s naming after the planet Uranus in 1789. It quickly decays into the next post-uranium element, number 94, the discovery of which was kept secret during wartime. By the time it was announced in 1946, enough had been made to obliterate a city: this was plutonium, the explosive of the Nagasaki atom bomb. The ensuing Cold War race to make new elements was thus much more than a matter of scientific priority.
To make new elements, extra nuclear particles – protons and neutrons – have to be crammed into an already replete nucleus. The sequential numbering of the elements, starting from hydrogen (element 1), is more than just a ranking: this so-called atomic number indicates how many protons there are in the nuclei of the element’s atoms. Differences in proton count are precisely what distinguish one element from another. All elements bar hydrogen also contain neutrons in their nuclei, which help to bind the protons together. There’s no unique number of neutrons for a given element: different neutron totals correspond to different isotopes of the element, which are all but chemically indistinguishable. If a nucleus has too few or too many neutrons, it is prone to radioactive decay, as is the case, for example, for carbon-14 (six protons, eight neutrons), which provides the basis for radiocarbon dating.
By element 92 (uranium), the nuclei are so swollen with particles that no isotope can forestall decay. All the same, that process can be very slow: the most common isotope of uranium, uranium-238, has a half-life of about 4.5 billion years, so there’s plenty of it still lying around as uranium ore. Making nuclei more massive than uranium’s involves firing elementary particles at heavy atoms in the hope that some will temporarily stick. That was how Emilio Segrè and Edwin McMillan first made neptunium at Berkeley in 1939, by firing neutrons into uranium. (In the nucleus a neutron can split into a proton, raising the atomic number by 1, and an electron, which is spat out.) McMillan didn’t realise what he’d done until the following year, when the chemist Philip Abelson helped him to separate the new element from the debris.
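Those half-lives obey simple exponential arithmetic: the fraction of nuclei surviving after a time t is 0.5 raised to the power of t divided by the half-life. A minimal sketch in Python, using the uranium-238 and carbon-14 figures above:

    from math import log

    # N(t)/N0 = 0.5 ** (t / half_life): the surviving fraction after time t
    def fraction_remaining(t, half_life):
        return 0.5 ** (t / half_life)

    # Uranium-238: half-life ~4.5 billion years, about the age of the Earth,
    # so roughly half the planet's primordial U-238 is still lying around
    print(fraction_remaining(4.5e9, 4.5e9))   # ~0.5

    # Radiocarbon dating inverts the same formula: given the fraction of
    # carbon-14 left in a sample, solve for its age (half-life ~5,730 years)
    def age_from_fraction(frac, half_life=5730.0):
        return -half_life * log(frac) / log(2)

    print(age_from_fraction(0.25))   # two half-lives: ~11,460 years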
During the Second World War, both Allied and German physicists realised that an atomic bomb could be made from artificial elements 93 or 94, created by neutron bombardment of uranium inside a nuclear reactor. Only the Americans managed it, of course. The Soviet efforts in this direction began at the same time, thanks largely to the work of Georgy Flerov. In 1957 he was appointed head of the Laboratory of Nuclear Reactions, part of the Joint Institute for Nuclear Research in Dubna, north of Moscow. Dubna has been at the forefront of element-making ever since; in 1967 the lab claimed to have made element 105, now called dubnium.
That claim exemplifies the ill-tempered history of artificial elements. It was disputed by the rival team at Berkeley, who made 105 in the same year and argued furiously over naming rights. The Soviets wanted, magnanimously but awkwardly, to call it nielsbohrium, after Danish physicist Niels Bohr. The Americans preferred hahnium, after the German nuclear chemist Otto Hahn. Both dug in their heels until the International Union of Pure and Applied Chemistry (IUPAC), the authority on chemical nomenclature, stepped in to resolve the mess in the 1990s. Finally the Russian priority was acknowledged in the name, which after all was a small riposte to the earlier American triumphalism of americium (element 95), berkelium (element 97) and californium (98).
These ‘superheavy’ elements, with atomic numbers reaching into triple figures, are generally made now not by piecemeal addition to uranium but by trying to merge two smaller but still substantial nuclei. One – typically zinc or nickel – is converted into electrically charged ions by stripping away electrons, then accelerated in an electric field to immense energy before crashing into a target made of an element such as lead. This is the method used by the laboratory in Darmstadt that, since the 1980s, has outpaced both the Americans and the Russians in synthesizing new elements. Called the Institute for Heavy Ion Research (GSI), it has claimed priority for all the elements from 107 to 112, and their names reflect this: element 108 is hassium, after the state of Hesse, and element 110 is darmstadtium. But the crowing is a little less strident now: many artificial elements have instead been named after scientists who pioneered elemental and nuclear studies: bohrium (after Niels Bohr), mendelevium (after the periodic table’s discoverer Dmitri Mendeleyev), meitnerium (after Lise Meitner) and rutherfordium (Ernest Rutherford). In 2010 IUPAC approved the GSI team’s proposal for element 112, copernicium, even though Copernicus is not known ever to have set foot in an (al)chemical lab.
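The bookkeeping of these fusion reactions is at least simple: the atomic numbers of the two nuclei just add. A few lines of Python make the point, using the nickel-on-lead route by which GSI made darmstadtium and the zinc-on-bismuth route described below:

    # Proton counts simply add in a fusion reaction
    Z = {"Ni": 28, "Zn": 30, "Pb": 82, "Bi": 83}
    print(Z["Ni"] + Z["Pb"])   # 110: darmstadtium, made at GSI
    print(Z["Zn"] + Z["Bi"])   # 113: the element RIKEN has been chasing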
If, then, we already have elements 114 and 116, why the fuss over 113? Although the elements get harder to make as they get bigger, the progression isn’t necessarily smooth: some combinations of protons and neutrons are (a little) easier to assemble than others. Efforts to make 113 have been underway at least since 2003, when a group at the Nishina Center for Accelerator-based Science in Saitama, near Tokyo, began firing zinc ions at bismuth. The Japanese centre, run by the governmental research organization RIKEN, was a relative newcomer to element-making, but it claimed success just a year later. It’s precisely because they are unstable that these new elements can be detected with such sensitivity: the radioactive decay of a single atom sends out particles – generally an alpha particle – that can be spotted by detectors. Each atom initiates a whole chain of decays into successive elements, and the energies and the release times of the radioactive particles are characteristic ‘fingerprints’ that allow the decay chain – and the elements within it – to be identified.
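To give a flavour of how a decay chain acts as a fingerprint, here is a deliberately toy Python sketch. The energies, half-lives and tolerances below are invented for illustration, and real analyses also correlate the positions and timing statistics of events in the detector:

    # Toy decay-chain 'fingerprinting' (all numbers are made up)
    KNOWN_CHAINS = {
        "element-113 candidate": [
            # (alpha energy in MeV, rough half-life in seconds) per step
            (11.7, 0.002), (10.3, 0.1), (9.8, 1.0),
        ],
    }

    def matches(observed, chain, e_tol=0.1, t_factor=10.0):
        """Crudely check observed (energy, delay) pairs against a chain."""
        if len(observed) != len(chain):
            return False
        for (e_obs, t_obs), (e_ref, t_ref) in zip(observed, chain):
            if abs(e_obs - e_ref) > e_tol:          # wrong alpha energy
                return False
            if not (t_ref / t_factor < t_obs < t_ref * t_factor):
                return False                        # implausible decay time
        return True

    events = [(11.68, 0.003), (10.25, 0.08), (9.85, 1.4)]
    for name, chain in KNOWN_CHAINS.items():
        if matches(events, chain):
            print("consistent with", name)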
At least, that’s the theory. In practice the decay events must be spotted amidst a welter of nuclear break-ups from other radioactive elements made by the ion collisions. And with so many possible isotopes of these superheavy elements, the decay properties of which are often poorly known, there’s lots of scope for phantoms and false trails – not to mention outright fraud (Bulgarian nuclear scientist Victor Ninov, who worked at Berkeley and GSI, was found guilty of fabricating evidence for the claimed discovery of element 118 at Berkeley in 2001). When you consider the figures, some scepticism is understandable: the Japanese team estimated that only 3–6 out of every 100 quintillion (10^20) zinc ions would produce an atom of 113.
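It’s worth doing the arithmetic on what such odds mean in practice. The beam intensity below is an assumed, illustrative figure, not RIKEN’s actual number:

    # How long must you fire zinc ions to deliver ~1e20 of them?
    ions_needed = 1e20       # ~100 quintillion ions per atom of 113
    beam_rate = 2.5e12       # ions per second - an assumed figure
    seconds = ions_needed / beam_rate
    years = seconds / (3600 * 24 * 365)
    print(f"{years:.1f} years of continuous beam")   # on the order of a year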
Last year, IUPAC representatives decided the Japanese results weren’t conclusive. But neither were they persuaded by subsequent claims from scientists at Dubna and Berkeley, who have begun collaborating after decades of bitter rivalry. On 26 September, however, the RIKEN team released new data that make a stronger case. Team leader Kosuke Morita says he is “really confident” they now have element 113 pinned down. Again there is only a single decay chain to adduce – starting from a single atom of 113 – but some experts now find the case convincing. If so, it looks like the name game will get solipsistic again: rikenium and japonium are in the offing.
Given how hard it is to make this stuff, why bother? Plutonium isn’t the only artificial element to find a use: minute amounts of americium, for example, are used in some smoke detectors. Yet as the superheavies get ever heavier and less stable, typically decaying in a fraction of a second, it’s harder to imagine how they could be of technological value. But according to calculations, some isotopes of element 114 and nearby elements should be especially stable, with half-lives of perhaps several days, years, even millennia. If that’s true, these superheavies could be gradually accumulated atom by atom. Other estimates, though, say this ‘island of stability’ won’t appear until element 126; still others suspect it may not really exist at all.
There is another, more fundamental motivation for making new elements: they test to destruction the current theories of nuclear physics. It’s still not fully understood what the properties of these massive nuclei are, although they are expected to do weird things, such as take on highly deformed, non-spherical shapes.
Artificial elements also pose a challenge to the periodic table itself, chemistry’s organizing scheme. It’s periodic because, as Mendeleyev and others realised, similar chemical properties keep reappearing as the elements’ atomic numbers increase: the halogens chlorine (element 17), bromine (35) and iodine (53) all form the same kinds of chemical compounds, for example. That’s because atoms’ electrons – negatively charged particles that govern chemical behaviour – are arranged in successive shells, and the arrangements for elements in the same column of the periodic table are analogous: all the halogens are one electron short of a filled outermost shell.
But a very massive nucleus starts to undermine this tidy progression of electron shells. The electrons closest to the nucleus feel the very strong electric field of that mass of protons, which makes them very energetic – they circulate around the nucleus at speeds approaching the speed of light. Then they feel the effects of special relativity: as Einstein predicted, particles moving that fast gain mass. This alters the electrons’ energies, with knock-on effects in the outer shells, so that the outermost electrons that determine the atom’s chemical behaviour don’t follow the periodic sequence. The periodic table then loses its rhythm: such an element deviates from the properties of those with which it shares a column – it might form a different number of chemical bonds, say. Some anomalous properties of natural heavy elements are caused by these ‘relativistic’ effects. They alter the electron energies in gold so that it absorbs blue light, accounting for the yellow tint of the light it reflects. And they weaken the chemical bonds between mercury atoms, giving the metal its low melting point.
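The size of the effect is easy to estimate. For a hydrogen-like atom the innermost electron moves at roughly Z/137 of the speed of light, Z being the atomic number, so its relativistic mass factor follows directly. This is a crude one-electron estimate, but it shows why gold and the superheavies feel relativity while hydrogen doesn’t:

    from math import sqrt

    ALPHA = 1 / 137.036   # fine-structure constant

    def gamma_1s(Z):
        beta = Z * ALPHA              # rough v/c of the innermost electron
        return 1 / sqrt(1 - beta**2)  # relativistic mass increase factor

    for Z, name in [(1, "hydrogen"), (79, "gold"), (112, "copernicium")]:
        print(f"{name}: v/c ~ {Z * ALPHA:.2f}, mass up by ~{gamma_1s(Z):.2f}x")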
Relativistic deviancy is expected for at least some superheavies. To look for it, researchers have to accomplish some extraordinarily adroit chemistry: figuring out, from just a handful of atoms, each surviving for perhaps seconds to minutes, how the element reacts with others. This could, for example, mean examining whether a particular chemical compound is unusually volatile or insoluble. The teams at GSI, Dubna and Berkeley have perfected methods of highly sensitive, quick-fire chemical analysis to separate, purify and detect their precious few exotic atoms. That has enabled them to establish that rutherfordium (element 104) and dubnium (105) buck the trends of the periodic table, whereas seaborgium (106) does not.
As they enter the artificial depths of the periodic table, none of these researchers knows what they will find. The Dubna group claims to have been making element 115 since 2003, but IUPAC has not yet validated the discovery. It is on firmer ground with 117 and 118, which are yet to be named, and both GSI and the RIKEN team are now hunting elements 119 and 120.
Is there any limit to it? Richard Feynman once made a back-of-the-envelope calculation suggesting that nuclei could no longer hold onto their electrons beyond an atomic number of 137. More detailed studies, however, show that not to be true, and some nuclear scientists are confident there is no theoretical limit on nuclear size. Perhaps the question is whether we can think up enough names for them all.
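For the curious, the origin of Feynman’s 137 can be sketched in a few lines. In the Dirac model of an electron bound to a point nucleus of charge Z, the ground-state energy is proportional to the square root of 1 − (Z/137)², which turns imaginary – that is, nonsensical – once Z exceeds about 137; treating the nucleus as an extended object, as the more detailed studies do, removes the pathology:

    import cmath

    ALPHA = 1 / 137.036

    # Dirac ground-state energy for a point nucleus, in units of m*c^2
    def ground_state_factor(Z):
        return cmath.sqrt(1 - (Z * ALPHA) ** 2)

    for Z in (92, 137, 138):
        print(Z, ground_state_factor(Z))
    # Z = 138 gives a complex number: the point-nucleus model has broken down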
Tuesday, November 06, 2012
Who's bored af?
Here’s my latest piece for BBC Future. This version contains rather more rude words than the one eventually published – the BBC is perhaps surprisingly decorous in this respect (or maybe they figure that their science readers aren’t used to seeing the sort of language that arty types throw around all the time).
My editor Simon Frantz pointed out this other example of how Twitter is being used for linguistic/demographic analysis, in this case to map the distribution of languages in London. I love the bit about the unexpected prevalence of the Tagalog language of the Philippines – because it turns out to contain constructions such as “lolololol” and “hahahahaha”. I hope that in Tagalog these convey thoughts profounder than those of teenage tweeters.
_____________________________________________________________________
This piece contains strong language from the beginning, as they say on the BBC. But only in the name of science – for a new study of how slang expressions spread on Twitter professes to offer insights into a more general question in linguistics: how innovation in language use occurs.
You might, like me, have been entirely innocent of what ‘af’ denotes in the Twittersphere, in which case the phrase “I’m bored af” would simply baffle you. It doesn’t, of course, take much thought to realise that it’s simply an abbreviation for “as fuck”. What’s less obvious is why this pithy abbreviation should, as computer scientist Jacob Eisenstein of the Georgia Institute of Technology in Atlanta and his coworkers Brendan O’Connor, Noah Smith and Eric Xing of Carnegie Mellon University in Pittsburgh report in an as-yet-unpublished preprint, have jumped from its origin in southern California to a cluster of cities around Atlanta before spreading more widely across the east and west US coasts.
Other neologisms have different life stories. The spelling of bro – slang for brother (a male friend or peer) – as bruh began in cities of the southeastern US (where it reflects the local pronunciation) before finally jumping to southern California. The emoticon “-__-” (denoting mild discontent) began in New York and Florida before colonizing both coasts and gradually reaching Arizona and Texas.
Who cares? Well, the question of how language changes and evolves has occupied linguistic anthropologists for several decades. What determines whether an innovation will propagate throughout a culture, remain just a local variant, or be stillborn? Such questions decide the grain and texture of all our languages – why we might tweet “I’m bored af” rather than “I’m bored, forsooth”.
There are plenty of ideas about how this happens. One suggestion is that innovations spread by simple diffusion from person to person, like a spreading ink blot. Another is that bigger population centres exert a stronger attraction on neologisms, so that they go first to large cities by a kind of gravitational pull. Or maybe culture and demography matter more than geographical proximity: words might spread initially within some minority groups while remaining invisible to the majority.
It’s now possible to devise rather sophisticated computer models of interacting ‘agents’ to examine these processes. They tell us little, however, unless there are real data to compare them against. Whereas once such data were extremely difficult to obtain, social media now provide an embarrassment of riches. Eisenstein and colleagues gathered around 40 million messages from around 400,000 individuals between June 2009 and May 2011 from Twitter’s public feed, which carries about 10 percent of all public posts; each message could be tied to a particular geographical location in the USA thanks to the smartphone metadata optionally included with it.
The researchers then assigned these to the respective Metropolitan Statistical Areas (MSAs): urban centres that typically represent a single city. For each MSA, demographic data on ethnicity are available, and these – with some effort to correct for the fact that Twitter users are not necessarily representative of an area’s overall population – allow a rough estimate of the ethnic makeup of the messagers.
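One can imagine the kind of preprocessing this involves. In the sketch below, the city names, coordinates and nearest-centroid rule are all my illustrative assumptions, not the team’s actual procedure – it simply snaps a geotagged tweet to its nearest MSA:

    from math import radians, sin, cos, asin, sqrt

    # Hypothetical MSA centroids as (latitude, longitude)
    MSA_CENTROIDS = {
        "Atlanta": (33.75, -84.39),
        "Los Angeles": (34.05, -118.24),
        "New York": (40.71, -74.01),
    }

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points, in kilometres."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        h = (sin((lat2 - lat1) / 2) ** 2
             + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371 * asin(sqrt(h))

    def assign_msa(lat, lon, max_km=80):
        """Return the nearest MSA within max_km, else None (rural tweet)."""
        name, dist = min(
            ((n, haversine_km(lat, lon, *c)) for n, c in MSA_CENTROIDS.items()),
            key=lambda pair: pair[1],
        )
        return name if dist <= max_km else None

    print(assign_msa(33.9, -84.5))   # -> 'Atlanta'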
Eisenstein and colleagues want to work out how these urban centres influence each other – to tease out the network across which linguistic innovation spreads. This is a challenging statistical problem, since they must rule out correspondences in word use between locations that could have arisen just by chance. There is, it must be said, a slightly surreal aspect to the application of complex statistical methods to the use of the shorthand ctfu (“cracking the fuck up”) – but after all, expletives and profanity have always offered some of the richest and most inventive examples of language evolution.
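The idea behind inferring directed influence can be caricatured in a few lines: if one city’s first use of new words consistently precedes another’s, that asymmetry hints at a directed link. This toy version – invented data, and nothing like the paper’s actual statistical model – just counts who led whom:

    from collections import defaultdict

    # first_use[word][city] = week of the city's first recorded use (made up)
    first_use = {
        "af":   {"LA": 3, "Atlanta": 7, "NYC": 12},
        "bruh": {"Atlanta": 2, "NYC": 9, "LA": 15},
        "ctfu": {"NYC": 1, "Atlanta": 4, "LA": 11},
    }

    precedes = defaultdict(int)
    for word, cities in first_use.items():
        for a, ta in cities.items():
            for b, tb in cities.items():
                if ta < tb:                 # city a used the word first
                    precedes[(a, b)] += 1

    for (a, b), n in sorted(precedes.items(), key=lambda kv: -kv[1]):
        print(f"{a} led {b} on {n} of {len(first_use)} words")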
The result is a map of the USA showing the influence networks of many of the major urban centres: not just how they are linked, but which way the influence flows. What, then, are the characteristics that make an MSA likely to spawn successful neologisms? Eisenstein and colleagues have previously found that Twitter has a higher rate of adoption among African Americans than among other ethnic groups, so it perhaps isn’t surprising that the innovation centres they now identify, as well as being highly populated, have a higher proportion of African Americans, and that similarity of racial demographics makes two urban centres more likely to be linked in the influence network. There is a long history of the adoption of African American slang (cool, dig, rip off) in mainstream US culture, so these findings too accord with what we’d expect.
These are still early days, and the researchers – who hope to present their preliminary findings at a workshop on Social Network and Social Media Analysis in December organized by the Neural Information Processing Systems Foundation – anticipate that they will eventually be able to identify more nuances of influence in the data. The real point at this stage is the method. Twitter and other social media offer records of language mutating in real time and space: an immense and novel resource that, while no doubt subject to its own unique quirks, can offer linguists the opportunity to explore how our words and phrases arise from acts of tacit cultural negotiation.
Paper: J. Eisenstein et al., preprint at http://www.arxiv.org/abs/1210.5268