I have a review of James Gleick's new book in the Observer today. Here it is. He does an enviable job, on the whole – this is better than Chaos.
________________________________________________________________
The Information: A History, a Theory, a Flood
James Gleick
Fourth Estate, 2011
ISBN 978-0-00-722573-6
Too much information: the complaint du jour, but also toujours. Alexander Pope quipped that the printing press, “a scourge for the sins of the learned”, would lead to “a deluge of Authors [that] covered the land”. Robert Burton, the Oxford anatomist of melancholy, confessed in 1621 that he was drowning in books, pamphlets, news and opinions. All the twittering and tweeting today, the blogs and wikis and the apparent determination to archive even the most ephemeral and trivial thought have, as James Gleick observes in this magisterial survey, something of the Borgesian about them. Nothing is forgotten; the world imprints itself on the informatosphere at a scale approaching 1:1, each moment of reality creating an indelible replica.
But do we gain from it, or was T. S. Eliot right to say that “all our knowledge brings us nearer to our ignorance”? Gleick is refreshingly upbeat. In the face of the information flood that David Foster Wallace called Total Noise, he says, “we veer from elation to dismay and back”. But he is confident that we can navigate it, challenging the view of techno-philosopher Jean-Pierre Dupuy that “ours is a world about which we pretend to have more and more information but which seems to us increasingly devoid of meaning”. Yet this relationship between information and meaning is the crux of the matter, and it is one that Gleick juggles but does not quite get to grips with. I’ll come back to that.
This is not, however, a book that merely charts the rising tide of information, from the invention of writing to the age of Google. To grasp what information truly means – to explain why it is shaping up as a unifying principle of science – he has to embrace linguistics, logic, telecommunications, codes, computing, mathematics, philosophy, cosmology, quantum theory and genetics. He must call as witnesses not only Charles Babbage, Alan Turing and Kurt Gödel, but also Borges, Poe and Lewis Carroll. There are few writers who could accomplish this with such panache and authority. Gleick, whose Chaos in 1987 helped to kick-start the era of modern popular science and who has also written acclaimed biographies of Richard Feynman and Isaac Newton, is one.
At the heart of the story is Claude Shannon, whose eclectic interests defy categorization today and were positively bizarre in the mid-twentieth century. Having written a visionary but ignored doctoral thesis on genetics, Shannon wound up in the labs of the Bell Telephone Company, where electrical logic circuitry was being invented. There he worked (like Turing, whom he met in 1943) on code-breaking during the Second World War. And in 1948 he published in Bell’s obscure house journal a theory of how to measure information – not just in a phone-line signal but in a random number, a book, a genome. Shannon’s information theory looms over everything that followed.
Shannon’s real point was that information is a physical entity, like energy or matter. The implications of this are profound. For one thing, it means that manipulating information in a computer has a minimum energy cost set by the laws of physics. This is what rescues the second law of thermodynamics (entropy or disorder always increases) from the hypothetical ‘demon’ invoked by James Clerk Maxwell in the nineteenth century to undermine it. By observing the behaviour of individual molecules, Maxwell’s demon seemed able to engineer a ‘forbidden’ decrease in entropy. But that doesn’t undo the sacrosanct second law, since processing the necessary information (more precisely, having to discard some of it – forgetting is the hard part) incurs a compensating entropic toll. In effect the demon instead turns information into energy, something demonstrated last year by a group of Japanese physicists – sadly too late for Gleick.
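That minimum cost has a definite value, given by Landauer's principle: erasing one bit of information must dissipate at least kT ln 2 of heat. A back-of-envelope sketch in Python (my illustration, not the book's; room temperature is an assumption):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # assumed room temperature, K

# Landauer's bound: minimum heat dissipated in erasing one bit
e_bit = k_B * T * math.log(2)
print(f"Erasing one bit at {T:.0f} K costs at least {e_bit:.2e} J")

# For scale: a gigabyte (8e9 bits) erased at the Landauer limit
print(f"Erasing a gigabyte: {8e9 * e_bit:.2e} J")
```

The numbers are tiny – around 3 × 10⁻²¹ joules per bit – which is why real computers, dissipating many orders of magnitude more than this, sit nowhere near the fundamental floor.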
In quantum physics the role of information goes even deeper: at the level of fundamental particles, every event can be considered a transaction in information, and our familiar classical world emerges from the quantum by the process of erasing information. In quantum terms, Gleick says, “the universe is computing its own destiny.” By this point we are a long way from cuneiform and Morse code, though he makes the path commendably clear.
Moreover, Gleick does so with tremendous verve, which is mostly exhilarating, sometimes exhausting and occasionally coy. He is bracingly ready to use technical terms without definition – nonlinear, thermodynamic equilibrium – rightly refusing any infantilizing hand-holding. What impresses most is how he delves beneath the surface narrative to pull out the conceptual core. Written language, he explains, did not simply permit us to make thoughts permanent – it changed thinking itself, enabling abstraction and logical reasoning. Language is a negotiation whose currency is information. A child learning to read is not simply turning letters into words but is learning how to exploit (often recklessly) the redundancies in the system. She reads ‘this’ as ‘that’ not because she confuses the phonemes but because she knows that only a few of them may follow ‘th’, and it’s less effort to guess. Read the whole word, we tell her, but we don’t do it ourselves. That’s why we fail to spot typos: we’ve got the message already. Language elaborates to no informational purpose; the ‘u’ after ‘q’ could be ditched wholesale. Text messaging now lays bare this redundancy: we dnt nd hlf of wht we wrt.
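That redundancy can be made tangible with a general-purpose compressor, which squeezes English prose hard precisely because so much of it is predictable. A quick sketch (my illustration, not Gleick's) using Python's standard zlib:

```python
import os
import zlib

english = (b"Read the whole word, we tell her, but we do not do it ourselves. "
           b"That is why we fail to spot typos: we have got the message already. "
           b"Language elaborates to no informational purpose; the u after q "
           b"could be ditched wholesale. We do not need half of what we write.")

noise = os.urandom(len(english))  # random bytes: no redundancy to exploit

for label, data in [("English prose", english), ("random bytes", noise)]:
    ratio = len(zlib.compress(data, 9)) / len(data)
    print(f"{label}: compresses to {ratio:.0%} of its original size")
```

The prose shrinks substantially; the random bytes, having no structure to exploit, do not compress at all.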
Shannon’s take on language is disconcerting. From the outset he was determined to divorce information from meaning, making it equivalent to something like surprise or unpredictability. That’s why a random string of letters is more information-rich, in Shannon’s sense, than a coherent sentence. There is a definite value in his measure, not just in computing but in linguistics. Yet to broach information in the colloquial sense, somewhere meaning must be admitted back into all the statistics and correlations.
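Shannon's measure can be stated compactly: for a source emitting symbols with probabilities p_i, the entropy is H = −Σ p_i log₂ p_i bits per symbol. A minimal sketch (mine, not from the book) estimating per-character entropy from single-letter frequencies, which comes out higher for a random string than for an English sentence:

```python
from collections import Counter
from math import log2
import random
import string

def entropy_per_char(text: str) -> float:
    """Shannon entropy in bits per character, from letter frequencies."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * log2(c / n) for c in counts.values())

sentence = "the quick brown fox jumps over the lazy dog and sleeps"
rng = random.Random(42)
scramble = "".join(rng.choice(string.ascii_lowercase) for _ in sentence)

print(f"English sentence: {entropy_per_char(sentence):.2f} bits/char")
print(f"Random letters:   {entropy_per_char(scramble):.2f} bits/char")
```

Single-letter frequencies understate English's redundancy, incidentally: Shannon's own guessing-game experiments put the true figure nearer one bit per character once longer-range structure is counted.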
Gleick acknowledges the tension between information as Shannon’s permutation of bits and information as agent of meaning, but a reconciliation eludes him. When he explains the gene with reference to a Beethoven sonata, he says that the music resides neither in the acoustic waves nor in the notations on paper: ‘the music is the information’. But where and what is that information? Shannon might say, and Gleick implies, that it is in the pattern of notes that Beethoven conceived. But that’s wrong. The notes become music only in the mind of a listener primed with the cognitive, statistical and cultural apparatus to weave them into coherent and emotive forms. This means there is no bounded information set that is the music – it is different for every listener (and every performance), sometimes subtly, sometimes profoundly. The same for literature.
Lest you imagine that this applies only to information impinging on human cognition, it is equally true of the gene. Gleick too readily accepts the standard trope that genes – the abstract symbolic sequence – contain the information needed to build an organism. That information is highly incomplete. Genes don’t need to supply it all, because they act in a molecular milieu that fills in the gaps. It’s not that the music, or the gene, needs the right context to deliver its message – without that context, there is no message, no music, no gene. An information theory that considers just the signal and neglects the receiver is limited, even misleading.
It is the only serious complaint about what is otherwise a deeply impressive and rather beautiful book.
Sunday, April 24, 2011
Tuesday, April 19, 2011
Universal blues
I have written a news story and a leader for Nature on a new paper examining the notion that there are universal grammatical principles in language. Here they are, in that order. But I must say that, much as the results reported by Dunn et al. chime with my instinctive resistance to universal theories of anything, the comments I’ve received on the paper make me a little sceptical that it does what it claims. Time will tell, I suppose.
__________________________________________________________________________
Linguists debate whether languages share universal grammatical features.
Languages evolve in their own idiosyncratic fashion, rather than being governed by universal rules. That’s the conclusion of a new study which compares the grammar of several hundred languages in the light of their evolutionary trees.
Psychologist Russell Gray of the University of Auckland in New Zealand and his coworkers examine the relationships between traits such as the ordering of verbs and nouns in four families representing more than 2,000 languages, and find no sign of any persistent, universal guiding principles [1].
It’s already proving to be a controversial claim. “There is nothing in the paper that brings into question the views that they are arguing against”, says linguist Matthew Dryer of the State University of New York at Buffalo.
There are thought to be around 7,000 languages in the world, which show tremendous diversity in structure. Some have complex ways of making composite words (such as Finnish), others have simple, short and invariant words (such as Mandarin Chinese). Some put verbs first in a sentence, others in the middle and others at the end.
But many linguists suspect that some universal logic lies behind this bewildering variety – common cognitive factors that underpin grammatical structures. Two of the most prominent ‘universalist’ theories of language have been proposed by the American linguists Noam Chomsky and Joseph Greenberg.
Chomsky tried to account for the astonishing rapidity with which children assimilate complicated and subtle grammatical rules by supposing that we are all born with an innate capacity for language, presumably housed in brain modules specialized for language. He suggested that this makes children able to generalize the grammatical principles of their native tongue from a small set of ‘generative rules’.
Chomsky supposed that languages change and evolve when children reset the parameters of these rules. A single change should induce switches in several related traits in the language.
Greenberg took a more empirical approach, enumerating many observed shared traits between languages. Many of these concerned word order. For example, a conditional clause normally precedes its conclusion: “if he’s right, he’ll be famous.” Greenberg argued that these universals reflect fundamental biases, probably for cognitive reasons. “The Greenbergian word order universals have the strongest claim to empirical validity of any universalist claim about language”, says Gray’s coauthor Michael Dunn of the Max Planck Institute for Psycholinguistics at Nijmegen.
Both of these ideas have implications for the family tree of language evolution. In Chomsky’s case, as languages evolve, certain features should co-vary because they are products of the same underlying parameter. Greenberg’s idea also implies co-dependencies between certain grammatical features of a language but not others. For example, the word order for verb-subject pairs shouldn’t depend on that for object-verb pairs.
To test these predictions, Gray and colleagues used the methods of phylogenetic analysis developed for evolutionary biology to reconstruct four family trees representative of more than 2,000 languages: Austronesian, Indo-European, Bantu and Uto-Aztecan. For each family they looked at eight word-order features and used statistical methods to calculate the chances that each pair of features had evolved independently or in a correlated way. This allowed them to deduce the webs of co-dependence among the features and compare them to what the theories of Chomsky and Greenberg predict.
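The authors used Bayesian phylogenetic machinery for this; what follows is only a toy sketch of the underlying idea (the tree shape, rates and coupling mechanism are my illustrative assumptions, not the paper's): evolve two binary word-order traits down a branching tree, either independently or with changes in one trait dragging the other along, and count how often changes co-occur on the same branch.

```python
import random

rng = random.Random(1)

def joint_change_rate(depth, p_flip=0.1, coupling=0.0):
    """Walk a full binary tree; on each branch each trait may flip state.
    With coupling > 0, a flip in trait A drags trait B along with extra
    probability -- a crude stand-in for correlated evolution."""
    both = branches = 0

    def walk(a, b, d):
        nonlocal both, branches
        if d == 0:
            return
        for _ in range(2):                     # two daughter lineages
            flip_a = rng.random() < p_flip
            p_b = min(p_flip + (coupling if flip_a else 0.0), 1.0)
            flip_b = rng.random() < p_b
            branches += 1
            both += flip_a and flip_b
            walk(a ^ flip_a, b ^ flip_b, d - 1)

    walk(0, 0, depth)
    return both / branches

for label, c in [("independent", 0.0), ("correlated", 0.5)]:
    print(f"{label}: joint-change rate = {joint_change_rate(12, coupling=c):.3f}")
```

In the real analysis the trees are inferred from the languages themselves and the transition rates are estimated rather than assumed; but the contrast between these two regimes is the kind of signal Gray's team was looking for.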
They found that neither of these two models matched the evidence. Not only do the co-dependencies differ from those expected from Greenberg’s word-order ‘universals’, but they are different for each family. In other words, the deep grammatical structure of each family is different from that of each of the others: each family has evolved its own rules, so there is no reason to suppose that these are governed by universal cognitive factors.
What’s more, even when a particular co-dependency of traits was shared by two families, the researchers could show that it came about in different ways for each – that the commonality may be coincidental. They conclude that the languages – at least in their word-order grammar – have been shaped in culture-specific ways and not by universals.
Other experts express some scepticism about the new results, albeit for rather different reasons. Martin Haspelmath at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, says he agrees with the conclusions but that “for specialists they are nothing new”. “It’s long been known that grammatical properties and dependencies are lineage-specific”, he says.
Meanwhile, Dryer, who has previously presented evidence that supports Greenberg’s position, is not persuaded that the results make a convincing case. “There are over a hundred language families that the authors ignore but which provide strong support for the views they are arguing against”, he says. There is no reason to expect a consistent pattern of word-order relationships within families, he adds, regardless of whether they are shaped by universal constraints.
Haspelmath feels it may be more valuable to look for what languages have in common than for how they (inevitably) differ. Even if cultural evolution is the primary factor in shaping them, he says, “it would be very hard to deny that cognitive biases play no role at all.”
“Comparative linguists have focused on the universals and cognitive explanations because they wanted to explain something”, he adds. “Saying that cultural evolution is at play basically means that we can’t explain why languages are the way they are – which is largely true, but it’s not the whole truth.”
1. Dunn, M., Greenhill, S. J., Levinson, S. C. & Gray, R. D. Nature doi:10.1038/nature09923 (2011).
*********************
A search for universals has characterized the scientific enterprise at least since Aristotle. In some ways, this quest for common principles underlying the diversity of the universe defines science: without it there is no order and pattern, but merely as many explanations as there are things in the world. Newton’s laws of motion, the oxygen theory of combustion and Darwinian evolution each united a host of different phenomena in a single explicatory framework.
One view takes this impulse for unification to its extreme: to find a Theory of Everything that offers a single generative equation for all we see. It is becoming ever less clear, however, that such a theory – if it exists – can be considered a simplification, given the proliferation of dimensions and universes it might entail. Nonetheless, unification of sorts remains a major goal.
This tendency in the natural sciences has long been evident in the social sciences too. Darwinism seems to offer justification: if all humans share common origins, it seems reasonable to suppose that cultural diversity must also be traceable to more constrained origins. Just as the bewildering variety of courtship rituals might all be considered forms of sexual selection, so perhaps the world’s languages, music, social and religious customs and even history could be governed by universal features. Filtering out what is contingent and unique from what is shared in common might enable us to understand how complex cultural behaviours arose and what ultimately guides them in evolutionary or cognitive terms.
That, at least, is the hope. But a comparative study of linguistic traits by Dunn et al. (online publication doi:10.1038/nature09923) supplies a sharp reality check on efforts to find universality in the global spectrum of languages. The most famous of these was initiated by Noam Chomsky, who postulated that humans are born with an innate language-acquisition capacity – a brain module or modules specialized for language – that dictates a universal grammar. Just a few generative rules are then sufficient to unfold the entire fundamental structure of a language, which is why children can learn it so quickly. Languages would diversify through changes to the ‘parameter settings’ of the generative rules.
In contrast, Joseph Greenberg took a more empirical approach to universality, identifying a long list of traits (particularly in word order) shared by many languages, which are considered to represent biases that result from cognitive constraints. Chomsky’s and Greenberg’s are not by any means the only theories on the table for how languages evolve, but they make the strongest predictions about universals. Dunn et al. have put them to the test by using phylogenetic methods to examine the four family trees that between them represent over 2,000 languages. A generative grammar should show patterns of language change that are independent of the family tree or the pathway tracked through it, while Greenbergian universality predicts strong co-dependencies between particular types of word-order relations (and not others). Neither of these patterns is borne out by the analysis, suggesting that the structures of the languages are lineage-specific and not governed by universals.
This doesn’t mean that cognitive constraints are irrelevant, nor that there are no other universals dictated by communication efficiency. It’s surely inevitable that cognition sets limits on, say, word length or the total number of phonemes. But such ‘universals’ seem likely to be relatively trivial features of languages, just as may be the case for putative universals in music and other aspects of culture. We should perhaps learn the lesson of Darwinism: a ‘universal’ mechanism of adaptation says little of interest, in itself, about how a particular feature got to be the way it is, or how it works. This truth has dawned on physicists too: universal equations are all very well, but particular solutions are what the world actually consists of, and those particulars are generally the result of contingent history.
Newton's Rainbow
Here’s the pre-edited text of my review for Nature of a new play about Isaac Newton, which I saw recently at the Royal Society and enjoyed more than I thought I might. But you’ll only catch it now if you’re in Toronto or Boston, I believe.
__________________________________________________________
Let Newton Be!
A play by Craig Baxter, directed by Patrick Morris and produced by the Menagerie Theatre Company
Touring until 30 April
Isaac Newton perplexes and fascinates not just because he was a transitional figure in the history of science but because he was a very odd man. The difficulty has been in distinguishing those two things. The temptation to portray him as a man torn between science and religion, or flitting from mathematical physics to superstitious alchemy, is the modern legacy of a tradition of positivistic science history that today’s historians are still working to dispel. In his passion for none of these things was Newton particularly unusual in his time. What made him odd was not so much what he believed but how he lived: isolated from intimate relationships, sensitive to every slight, at the same time vain and yet so indifferent to adulation that he could barely be persuaded to write the Principia.
All that, quite apart from his towering status in science, naturally makes him an attractive figure for biographers. Among those who have grappled with his story are the leading science historians Richard Westfall (whose 1980 biography is still the standard reference) and A. Rupert Hall, and the science writer James Gleick. It also supplies fertile soil for more inventive explorations of his life, of which Let Newton Be! is one. This new play by Craig Baxter was commissioned by the Faraday Institute for Science and Religion at Cambridge University, and has benefited from the input of, among others, Rob Iliffe, the head of the Newton Project to place all of Newton's writings online, and the astrophysicist John Barrow.
To conjure up this mercurial man, Baxter elected to use almost entirely Newton’s own words, or those of some of his contemporaries, such as his rival and critic Gottfried Leibniz. Moreover, Newton – the only character in the piece, apart from brief appearances by the likes of Leibniz and Edmond Halley – is played here simultaneously by three actors, one of them a woman. It sounds like a gimmick, but isn’t: the device allows us to see different facets of the man, though happily not as reductively conflicting voices.
The play’s structure is largely chronological. We see Newton as a boy in the family home at Woolsthorpe, an undergraduate at Trinity College Cambridge, then as Lucasian professor of mathematics (appointed in 1669 at the age of 27). We see him take his reflecting telescope to the Royal Society and, stung by what he perceived as the antagonism of the London virtuosi, retreat into religious exegesis, until Halley cajoles him to write down his proof of elliptical planetary orbits – a treatise that expands into the Principia. Feted and now somewhat pompous, he becomes Warden of the Royal Mint and President of the Royal Society.
The original material is well used. There is a reconstruction of Newton’s famous prism experiment (or roughly so – his experimentum crucis of around 1666, when he reconstituted white light from the spectrum, is notoriously difficult to reproduce). But we also get Newton’s sometimes surreal, obsessive lists of sins committed (“I lied about a Louse”), and the only time we are lectured to is in one of the real lectures on optics that Newton was obliged to provide in the Lucasian chair, at which he proves to be hilariously inept.
In such ways, the play delivers an impressive quantity of Newton’s thought. In particular, it sets out to emphasize just how much of his work was religious – as Iliffe confirmed in a post-performance panel discussion, Newton considered this his central mission, with the seminal scientific works on light, motion and gravity being almost tossed off before breakfast. The natural theology that motivated much of the science – the idea that by exploring the natural world we deepen our appreciation of God’s wisdom and power – was the conventional position of most seventeenth-century scientists, most notably Robert Boyle, and was their defence against accusations of materialistic atheism. Newton was anything but a materialist: that his gravity was an occult force acting at a distance was precisely what Leibniz considered wrong with it, while for Newton this force was actively God’s doing.
But I’m not sure how much would be comprehensible to anyone coming new to Newton. It is characteristic of the play’s intelligence that we don’t get any nonsense with falling apples, but neither are we really told what distinguished Newton’s ideas on gravity from the many that went before (especially Descartes’ vortices and the belief that it is a form of magnetism, both of which ideas Newton shared at some point). His work on the additive colour mixing of light is beautifully illustrated but not actually alluded to; likewise his laws of motion. Moreover, the play lacks a real narrative – there is no tension, nothing to be resolved, for in the end it is a biography, however inventively told. But that was, after all, its brief, and it is probably a more enjoyable hour and a half with Newton than anyone ever had in his lifetime.
Tuesday, April 12, 2011
The Naked Oceans
The Naked Scientists is (are?) running a series of podcasts called The Naked Oceans. The latest one has an interview with me about Ernst Haeckel and his images of radiolarians.
Monday, April 11, 2011
Chaos promotes prejudice
Here’s my latest news story for Nature, pre-editing.
_______________________________________________________________
A disorderly environment makes people more inclined to put others in boxes.
Messy surroundings make us more apt to stereotype people, according to a new study by a pair of social scientists in the Netherlands.
Diederik Stapel and Siegwart Lindenberg of Tilburg University asked subjects to complete questionnaires that probed their judgements about certain social groups while in everyday environments (a street and a railway station) that were either messy or clean and orderly. They found small but significant and systematic differences in the responses: there was more stereotyping in the former cases than the latter.
The researchers say that social discrimination could therefore be counteracted by diagnosing and removing signs of disorder and decay in public environments. They report their findings in Science today [1].
Psychologist David Schneider of Rice University in Houston, Texas, a specialist in stereotyping, calls this “an excellent piece of work which speaks not only to a possibly important environmental cause, but also supports a major potential theoretical explanation for some forms of prejudice.”
The influence of environment on behaviour has long been suspected by social scientists and criminologists. The ‘broken windows’ hypothesis of James Q. Wilson and George Kelling supposes that people are more likely to commit criminal and anti-social acts when they see evidence of others having done so – for example, in public places with signs of decay and neglect.
This idea motivated the famous zero-tolerance policy on graffiti on the New York subway in the late 1980s (on which Kelling acted as a consultant), which is credited with a role in improving the safety of the network. Lindenberg and his coworkers conducted experiments in Dutch urban settings in 2008 that supported an influence of the surroundings on people’s readiness to act unlawfully or antisocially [2].
But could evidence of social decay, even at the mild level of littering, also affect our unconscious discriminatory attitudes towards other people? To test that possibility, Stapel and Lindenberg devised a variety of disorderly environments in which to test these attitudes.
In their questionnaires, participants were asked for example to rate Muslims, homosexuals and Dutch people according to various positive, negative and unrelated stereotypes. For example, the respective stereotypes for homosexuals were (creative, sweet), (strange, feminine) and (impatient, intelligent).
In one experiment, passers-by in the busy Utrecht railway station were asked to participate by coming to sit in a row of chairs, for the reward of a candy bar or an apple. The researchers took advantage of a cleaners’ strike, which had left the station dirty and litter-strewn. They then returned to do the same testing after the strike was over and the station was clean.
As well as probing these responses, the experiment examined unconscious negative responses to race. All the participants were white, while one place at the end of the row of chairs was already taken by a black or white Dutch person. In the messy station, people sat on average further from the black person than the white one, while in the clean station there was no statistical difference in these distances.
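For readers wondering what ‘no statistical difference’ amounts to in practice: the standard check is a two-sample comparison of mean seating distances across conditions. A sketch with simulated data (the numbers below are invented for illustration; the paper's actual values and tests may differ):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical distances (in seat widths) from the black confederate,
# one sample per station condition -- illustrative only
messy = rng.normal(loc=3.0, scale=1.0, size=40)
clean = rng.normal(loc=2.5, scale=1.0, size=40)

t, p = stats.ttest_ind(messy, clean)
print(f"t = {t:.2f}, p = {p:.3f}")   # a small p suggests the means really differ
```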
In another experiment, the researchers aimed to eliminate differences in cleanliness of the environments while preserving the disorder. The participants were approached on a street in an affluent Dutch city. But in one case the street had been made more disorderly by the removal of a few paving slabs and the addition of a badly parked car and an ‘abandoned’ bicycle. Again, disorder boosted stereotyping.
Stapel and Lindenberg suspect that stereotyping may be an attempt to compensate for mess: it could be, they say, “a way to cope with chaos, a mental cleaning device” that partitions other people neatly into predefined categories.
In support of that idea, they showed participants pictures of disorderly and orderly situations, such as a bookcase with dishevelled and regularly stacked books, before asking them to complete both the stereotyping survey and another one that probed their perceived need for structure, including questions such as “I do not like situations that are uncertain”. Both stereotyping and the need for structure were higher in people viewing the disorderly pictures.
Sociologist Robert Sampson of Harvard University says that the study is “clever and well done”, but is cautious about how to interpret the results. “Disorder is not necessarily chaotic”, he says, “and is subject to different social meanings in ongoing or non-manipulated environments. There are considerable subjective variations within the same residential environment on how disorder is rated – the social context matters.”
Therefore, Sampson says, “once we get out of the lab or temporarily induced settings and consider the everyday contexts in which people live and interact, we cannot simply assume that interventions to clean up disorder will have invariant effects.”
Schneider agrees that the implications of the work for public policy are not yet clear. “One question we’d need to answer is how long these kinds of effects last”, he says. “There is a possibility that people may quickly adapt to disorder. So I would be very wary of concluding that people who live in unclean and disordered areas are more prejudiced because of that.” Stapel acknowledges this: “people who constantly live in disorder get used to it and will not show the effects we find. Disorder in our definition is something that is unexpected.”
References
1. D. A. Stapel & S. Lindenberg, Science 332, 251-253 (2011).
2. K. Keizer, S. Lindenberg & L. Steg, Science 322, 1681 (2008).
Tuesday, April 05, 2011
Fattening up Schrödinger's cats
Here’s my latest story for Nature News.
__________________________________________________________
Huge molecules can show the wave-particle duality of quantum theory.
Researchers in Austria have made what they call the “fattest Schrödinger cats realized to date”. They have demonstrated quantum superpositions – objects in two or more states simultaneously – of molecules with up to 430 atoms each, several times larger than those used in previous experiments of this sort [1].
In the famous thought experiment conceived by Erwin Schrödinger in 1935 to illustrate the apparent paradoxes of quantum theory, a cat will be poisoned or not depending on the state of an atom, governed by quantum rules. Because the recently developed quantum theory insisted that these rules allowed for superpositions, it seemed that Schrödinger’s cat could itself be placed in a superposition of ‘live’ and ‘dead’ states.
The paradox highlights the question of how the rules of the quantum world – where objects like atoms can be in several positions at once – give way to the ‘classical’ mechanics that governs the macroscopic world of our everyday experience, in which things must be one way or the other but not both at the same time. This is called the quantum-to-classical transition.
It is now generally thought that the ‘quantumness’ is lost in a process called decoherence, where disturbances from the surrounding environment make the quantum wavefunction describing many-state superpositions appear to collapse [note to subs: we have to keep this ‘appear to’. The precise relationship between decoherence and wavefunction collapse is complicated and too tricky to get into fully here] into a well-defined and unique classical state. This decoherence tends to become more pronounced as objects get bigger and the opportunities for interacting with the environment multiply.
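The gist of decoherence can be seen in the simplest possible case, a two-state superposition described by a density matrix: the environment leaves the populations (the diagonal entries) untouched while exponentially suppressing the coherences (the off-diagonal entries). A toy numpy sketch, with an arbitrary decay rate of my own choosing:

```python
import numpy as np

rho0 = np.full((2, 2), 0.5)   # equal superposition (|0> + |1>)/sqrt(2)
gamma = 1.0                   # decoherence rate, arbitrary units (assumed)

for t in [0.0, 1.0, 5.0]:
    rho = rho0.copy()
    rho[0, 1] *= np.exp(-gamma * t)   # coherences decay...
    rho[1, 0] *= np.exp(-gamma * t)
    print(f"t={t}: populations={np.diag(rho)}, coherence={rho[0, 1]:.3f}")
# ...leaving a classical 50/50 mixture: the superposition appears to collapse
```

This pure-dephasing model is of course a cartoon of the full story, but it captures why bigger, more environmentally entangled objects lose their quantumness faster.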
There is still no consensus on how Schrödinger’s thought experiment would play out if the cat-and-atom system could be perfectly protected from decoherence. Some physicists are happy to believe that in that case the cat could indeed be in a live-dead superposition. But we couldn’t see it directly, because the act of looking would destroy the superposition.
One manifestation of quantum superpositions is the interference that can occur between quantum particles passing through two or more narrow slits. In the classical world the particles just pass through with their trajectories unchanged, like footballs rolling through a doorway.
But quantum particles can behave like waves, which interfere with one another as they pass through the slits, either enhancing or cancelling to produce a series of bright and dark bands. This interference of quantum particles, first seen for electrons in 1927, is effectively the result of each particle passing through more than one slit: a quantum superposition.
At some point as the experiment is scaled up in size, quantum behaviour (interference) should give way to classical behaviour (no interference). But how big can the particles be before that happens?
In 1999 a team at the University of Vienna in Austria demonstrated interference in a many-slit experiment using beams of 60-atom carbon molecules (C60) shaped like hollow spheres [2]. Now Markus Arndt, one of the researchers in that experiment, and his colleagues in Austria, Germany and Switzerland have shown much the same effect for considerably larger molecules tailor-made for the purpose, up to 6 nanometres (millionths of a millimetre) across and composed of up to 430 atoms. These are bigger than some small protein molecules in the body, such as insulin.
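The scale of the challenge is set by the de Broglie wavelength λ = h/mv, which shrinks as the molecules get heavier. A rough back-of-envelope in Python (the mass and beam velocity here are my assumptions, not the experiment's quoted values):

```python
h = 6.62607015e-34     # Planck constant, J s
amu = 1.66053907e-27   # atomic mass unit, kg

mass = 6000 * amu      # ballpark for a ~430-atom organic molecule (assumed)
v = 100.0              # assumed beam velocity, m/s

wavelength = h / (mass * v)
print(f"de Broglie wavelength: {wavelength * 1e12:.2f} pm")
```

The answer comes out at under a picometre – thousands of times smaller than the 6-nanometre molecule itself – which is why such finely made gratings are needed to see any wave behaviour at all.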
In their experiment, the beams of molecules are passed through three sets of slits. The first of them, made from a slice of the hard material silicon nitride patterned with a grating of 90-nm-wide slits, prepares the molecular beam in a coherent state, in which the matter waves are all in step. The second, a ‘virtual grating’ made from laser light formed by mirrors into a standing wave of light and dark, causes the interference pattern. The third grating, also of silicon nitride, acts as a mask to admit parts of the interference pattern to an instrument called a mass spectrometer, which counts the number of molecules that pass through.
The researchers report in Nature Communications that this number rises and falls periodically as the outgoing beam is scanned from left to right, showing that interference, and therefore superposition, is present.
Although this might not sound like a Schrödinger cat experiment, it probes the same quantum effects. It is essentially like firing the cats themselves at the interference grating, rather than making a single cat’s fate contingent on an atomic-scale event.
Quantum physicist Martin Plenio of the University of Ulm in Germany calls the study part of an important line of research. “We have perhaps not gained deep new insights into the nature of quantum superposition from this specific experiment”, he admits, “but there is hope that with increasing refinement of the experimental technique we will eventually discover something new.”
Arndt says that such experiments might eventually enable tests of fundamental aspects of quantum theory, such as how wavefunctions are collapsed by observation. “Predictions such as that gravity might induce wavefunction collapse beyond a certain mass limit should become testable at significantly higher masses in far-future experiments”, he says.
Can living organisms – perhaps not cats, but maybe microscopic ones such as bacteria – be placed in superpositions? That has been proposed for viruses [3], the smallest of which are just a few nanometres across – although there is no consensus about whether viruses should be considered truly alive. “Tailored molecules are much easier to handle in such experiments than viruses”, says Arndt. But he adds that if various technical issues can be addressed, “I don’t see why it should not work.”
References
1. Gerlich, S. et al. Nat. Commun. doi:10.1038/ncomms1263 (2011).
2. Arndt, M. et al., Nature 401, 680-682 (1999).
3. Romero-Isart, O., Juan, M. L., Quidant, R. & Cirac, J. I. New J. Phys. 12, 033105 (2010).