Here’s another take on the recent paper on modelling the evolution of colour terms – this time published in Prospect.
___________________________________________
Languages are extremely diverse, but not arbitrary. Behind the bewildering diversity and the apparently contradictory ways in which different tongues elect to conceptualise the world, we can sometimes discern order and regularity. Many linguists have assumed that this reflects a hard-wired linguistic aptitude of the human brain. Some recent studies propose, however, that language ‘universals’ aren’t simply prescribed by genes but arise from the interaction between the biology of human perception and the bustle, exchange and negotiation of human culture.
Language has a perfectly logical job to do—to convey information—and yet is seemingly riddled with irrationality. Why all those irregular verbs, those random genders, those silent vowels and ambiguous homophones? You’d think languages would evolve towards some optimal model of clarity and concision, but instead they accumulate quirks that hinder learning, not only for foreigners but also for native speakers.
Traditionally, linguists have tended to explain the peculiarities of language through the history of the people who speak it. That’s often fascinating, but does not yield general principles about how languages have developed in the past—or how they will develop in future. As languages evolve and diverge, what guides their form?
Linguists have long suspected that language is like a game, in which individuals in a group or culture vie to impose their way of speaking. We adopt words and phrases we hear from others, and by using them, help them to propagate. Through face-to-face encounters, language evolves to reconcile our conflicting impulses as speakers or listeners. When speaking, we want to say our bit with minimal effort: we want language to be simple. As listeners, we want the speaker to make the meaning clear: we want language to be informative. In other words, speakers try to shift the effort onto listeners, and vice versa.
All this makes language what scientists call a complex system, meaning that it involves many agents interacting with each other via fairly well-defined rules. From these interactions there typically emerges an organised, global mode of behaviour that could not be deduced from local rules alone. Complex social systems have in recent years become widely studied by computer modelling: you define a population of agents, set the rules of engagement, and let the system run. Here the methods and concepts of the hard sciences—not so different to those used to model the behaviour of fundamental particles or molecules—are being imported into the traditionally empirical or narrative-dominated subjects of the social sciences. This approach has notched up successes in areas ranging from traffic flow to analysis of economic markets. No one pretends that a cultural artefact like language will ever be as tightly rule-bound or predictive as physics or chemistry, yet a complex-systems view might prove key to understanding how it evolves.
A significant success was recently claimed by an Italian group led by physicist Vittorio Loreto of the University of Rome La Sapienza. They looked at the paradigmatic example among linguists of how language segments and labels the objective world: the naming of colours.
When early anthropologists began to study non-Western languages in the nineteenth century, particularly those of pre-literate “savages”, they discovered that the familiar European colour terms of red, yellow, blue, green and so on are not as obvious and natural as they seem. Some indigenous people have far fewer colour terms. Many get by with perhaps three or four, so that for example “red” could refer to anything from green to orange, while blue, purple and black are all lumped together as types of black.
Inevitably, this was at first considered sheer backwardness. Researchers even concluded that such people were at an earlier stage of evolution, with defective colour vision that left them unable to tell the difference between, say, black and blue. Once they started testing natives using colour charts, however, they found them perfectly capable of distinguishing blue from black—they just saw no need to assign them different colour words. Uncomfortably for Western supremacists, we are in the same boat when it comes to blue, for Russians find it odd that an Englishman uses the same basic term for light blue (Russian goluboy) and dark blue (siniy).
Then in the 1860s the German philologist Lazarus Geiger proposed that the subdivision of colour always follows the same hierarchy. The simplest colour lexicons (such as that of the Dugum Dani language of New Guinea) distinguish only black/dark and white/light. The next colour to be given a separate word is always centred on the red part of the visible spectrum. Then, according to Geiger, comes yellow, then green, then blue. Geiger’s colour hierarchy was forgotten until it was restated in almost the same form in 1969 by the US anthropologists Brent Berlin and Paul Kay, when it was hailed as one of the most significant discoveries in modern linguistics. Here was an apparently universal regularity underlying the way language is used to describe the world.
Berlin and Kay’s hypothesis has since fallen in and out of favour, and certainly there are exceptions to the scheme they proposed. But the fundamental colour hierarchy, at least in terms of the ordering black/white, red, yellow/green (either may come first) and blue, remains generally accepted. The problem is that no one could explain it.
Why, for example, do the blue of sky and sea, or the green of foliage, not register as distinct before the far less common red? It’s true that our visual system has evolved to be particularly sensitive to yellow (that’s why it appears so bright), probably because this enabled our pre-human ancestors to spot ripe fruit among foliage. But we have no trouble distinguishing purple, blue and green in the spectrum.
There are several schools of thought about how colours get named. “Nativists”, who include Berlin and Kay and Steven Pinker, argue that the concepts to which we attach words are innately determined by how we perceive the world. As Pinker has put it, “the way we see colours determines how we learn words for them, not vice versa”. In this view, often associated with Noam Chomsky, our perceptual apparatus has evolved to ensure that we make “sensible”—that is, useful—choices of what to label with distinct words: we are hard-wired for particular forms of language. “Empiricists”, in contrast, argue that we don’t need this innate programming, but just the capacity to learn the conventional (but arbitrary) labels for things we can perceive.
In both cases, the categories themselves are deemed “obvious”: language just labels them. But the conclusions of Loreto and colleagues fit with a third possibility: the “culturist” view, which says that shared communication is needed to help organise category formation, so that categories and language co-evolve in an interaction between biological predisposition and culture. In other words, the starting point for colour terms is not some inevitably distinct block of the spectrum that we might decide to call ‘red’, ‘rouge’ and so on – but neither do we just divide up the spectrum any old how, because the human eye has different sensitivity to different parts of it. Given this, we have to arrive at some consensus not just on which label to use, but on what it labels.
The Italian team devised a computer model of language evolution in which new words arise through the game played by pairs of ‘agents’, a speaker and a listener. The speaker uses words to refer to objects in a scene, and if she uses a word that is new to the listener (for a new colour, say), there’s a chance that the listener will figure out what the word refers to and adopt it. Alternatively, the listener might already have a word for that colour, but choose to replace it with the speaker’s word anyway. The language of this population of agents emerges and evolves from many such exchanges.
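To make those rules of engagement concrete, here is a minimal sketch, in Python, of the basic ‘naming game’ on which models of this kind build. It illustrates the speaker-listener dynamic just described, but it is not the Italian team’s actual code: the population size, the invented words and the all-or-nothing success rule are all simplifying assumptions.

```python
import random
import string

# A minimal naming game: a population of agents converges on a shared
# word for a single object through repeated speaker-listener rounds.
# Illustrative toy only, not the model of Loreto and colleagues.

N_AGENTS = 50        # population size (arbitrary)
N_ROUNDS = 20_000    # pairwise encounters to simulate

agents = [set() for _ in range(N_AGENTS)]   # each agent's word inventory

def invent_word():
    """Coin a random five-letter word."""
    return "".join(random.choices(string.ascii_lowercase, k=5))

for _ in range(N_ROUNDS):
    speaker, listener = random.sample(range(N_AGENTS), 2)

    # The speaker utters a word she knows, inventing one if necessary.
    if not agents[speaker]:
        agents[speaker].add(invent_word())
    word = random.choice(sorted(agents[speaker]))

    if word in agents[listener]:
        # Success: both parties drop all rival words and keep the winner.
        agents[speaker] = {word}
        agents[listener] = {word}
    else:
        # Failure: the listener records the new word as a candidate.
        agents[listener].add(word)

# After enough rounds the population typically agrees on a single word.
print("words still in circulation:", len(set().union(*agents)))
```

In the colour version the single object becomes a continuum of hues, and whether a listener can even pick out the stimulus the speaker means depends on the perceptual bias described next.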
For colour, our visual physiology biases this process, picking out some parts of the spectrum as more worthy of a distinct colour term than others. The crucial factor is how well we can discriminate between very similar colours – we do that most poorly in the red, yellowish-green and purple-violet regions. So we can’t distinguish two closely related reds as easily as we can two blues, say.
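That discrimination limit is usually expressed as a ‘just-noticeable difference’ (JND): the minimum separation at which two hues register as different at all. The sketch below uses an invented JND curve purely for illustration; the peak positions and widths are assumptions standing in for the measured human data a real model would use.

```python
import math

# An illustrative 'just-noticeable difference' (JND) curve over a
# normalised hue axis [0, 1]. The peak positions and widths are
# invented for illustration; real models use measured human data.

POOR_SPOTS = (0.05, 0.45, 0.90)   # hypothetical red, yellow-green, violet

def jnd(hue):
    """Minimum separation needed to tell two hues apart at this point."""
    bumps = sum(0.04 * math.exp(-((hue - p) / 0.08) ** 2) for p in POOR_SPOTS)
    return 0.01 + bumps

def discriminable(h1, h2):
    """True if an agent can tell the two stimuli apart."""
    return abs(h1 - h2) > jnd((h1 + h2) / 2)

# Two pairs with the same separation: near a poor-discrimination bump
# the pair blurs into one stimulus; elsewhere it is easily told apart.
print(discriminable(0.03, 0.07))   # False: inside the 'red-like' bump
print(discriminable(0.63, 0.67))   # True: acuity is finer here
```

In a colour-naming game built on such a curve, two stimuli closer together than the local JND simply cannot be distinguished in a round of the game, and that is the channel through which physiology feeds into the cultural negotiation.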
When the researchers included this bias in the colour-naming game, they found that colour terms emerged over time in their population of agents in much the same order proposed by Berlin and Kay: first red, then violet, yellow, green, blue and orange. Violet doesn’t quite fit, but Loreto and colleagues think this is just an artefact of the way reddish hues crop up at both ends of the spectrum. Importantly, they don’t get the correct sequence unless they incorporate the colour sensitivity of actual human vision, but neither could the sequence be predicted from that alone, without the inter-agent negotiations that generate a consensus on colour words. You need both biology and culture to get it right.
The use of agent-based models to explore language evolution has been pioneered by Luc Steels of the Free University of Brussels, who is motivated by artificial intelligence: he wants to know how best to design robots so that they might develop a shared language. Steels and his coworkers have also favoured the acquisition of colour terms as their test case, and have previously argued in favour of the “cultural” picture that Loreto’s team now supports. The computer modelling of Steels’ group deserves much of the credit for beginning to shift the prevailing explanation of language acquisition, and of near-universal patterns like Berlin and Kay’s colour hierarchy, away from inherent genetic factors and towards culture and environment.
Steels and his colleagues Joris Bleys and Joachim de Beule, for example, have presented an agent-based model of language negotiation, similar to that used by Loreto’s team, which purports to explain how a colour-language system can change from one based mostly on differences in brightness, using words like ‘dark’, ‘light’ and ‘shiny’, to one that makes distinctions of hue. (There are more ways to think about colour than Berlin and Kay’s rainbow-slicing.) The brightness system was used in Old English between around 600 and 1150, while Middle English (1150–1500) used hue-related words. A similar switch happened at around the same time in other European languages, coinciding with the development of textile dyeing. This technology altered the constraints on what needed to be communicated: people now had to talk about a wider range of colours of similar brightness but different hue. Steels and colleagues showed that this sort of environmental pressure could tip the balance from a brightness-based colour terminology to a hue-based one. Again, it is one thing to tell that story, another to show that it really works in (a model of) the complex give and take of daily discourse. It increasingly seems, then, that language is determined not simply by ‘how we are’, but by how it is used: by what we need to say.