Here's my latest piece for "Under the Radar" at BBC Future.
____________________________________________________
Listen, I’m going to be straight with you. Well, that’s what I’d intended, but already language has got in the way – you’re not “listening” at all, and “straight” has so many meanings that you should be unsure what is going to follow. All the same, I doubt if any of you thought this meant I was going to stand to attention or be rigorously heterosexual. Language is ambiguous – and yet we cope with it.
But surely that’s a bit of a design flaw, right? We use language to communicate, so shouldn’t it be geared towards making that communication as clear and precise as possible, so that we don’t have to figure out the meaning from the context, or forever ask “Say that again?” Imagine a computer language that worked like a natural language – would the silicon chips have a hope of catching our drift?
Yet the ambiguity of language isn’t a problem foisted on it by the corrupting contingencies of history and use, according to complex-systems scientists Ricard Solé and Luís Seoane of the Pompeu Fabra University in Barcelona, Spain. They say that it is an essential part of how language works: if real languages were too precise and well defined, so that every word referred to one thing only, they would be almost unusable, and we’d struggle to communicate ideas of any complexity.
That linguistic ambiguity has genuine value isn’t a new idea. Cognitive scientists Ted Gibson and Steven Piantadosi of the Massachusetts Institute of Technology have previously pointed out that a benefit of ambiguity is that it enables economies of language: things that are obvious from the context don’t have to be pedantically belaboured in what is said. What’s more, they argued, words that are easy to say and interpret can be “reused”, so that more complex ones aren’t required.
Now Solé and Seoane show that another role of ambiguity is revealed by the way we associate words together. Words evoke other words, as any exercise in free association will show you. The ways in which they do so are often fairly obvious – for example, through similarity (synonymy) or opposition (antonymy). “High” might make you think “low”, or “sky”, say. Or it might make you think “drugs”, or “royal”, which are semantic links to related concepts.
Solé and Seoane look at the intersecting networks formed from these semantic links between words. There are various ways to plot these out – either by searching laboriously through dictionaries for associations, or by asking people to free-associate. There are already several data sets of semantic networks freely available, such as WordNet, which uses fairly well-defined rules to determine the links. It’s possible to find paths through the network from any word to any other, and in general there will be more than one connecting route. Take the case of the words “volcano” and “pain”: on WordNet they can be linked via “pain-ease-relax-vacation-Hawaii-volcano” or “pain-soothe-calm-relax-Hawaii-volcano”.
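If you want to play with the idea yourself, here’s a toy sketch in Python (using the networkx library – purely an illustration, not the researchers’ code) that encodes those two WordNet routes as a little graph and lists the ways of getting from “pain” to “volcano”:

```python
# A toy word-association graph built from the two routes quoted above.
# Requires networkx (pip install networkx).
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    # route 1: pain-ease-relax-vacation-Hawaii-volcano
    ("pain", "ease"), ("ease", "relax"), ("relax", "vacation"),
    ("vacation", "Hawaii"), ("Hawaii", "volcano"),
    # route 2: pain-soothe-calm-relax-Hawaii-volcano
    ("pain", "soothe"), ("soothe", "calm"), ("calm", "relax"),
    ("relax", "Hawaii"),
])

# Every way of getting from "pain" to "volcano" in at most five hops:
for path in nx.all_simple_paths(G, "pain", "volcano", cutoff=5):
    print(" - ".join(path))
```

Both quoted routes show up, along with an even shorter one that cuts straight from “relax” to “Hawaii” – exactly the redundancy of connection the researchers are interested in.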
A previous study found that WordNet’s network has the mathematical property of being “scale-free”. This means that there is no real average number of links per word: some words have lots of links, most have hardly any, and there is everything in between. There’s a simple mathematical relationship between the probability P(k) of a word having k connections and the value of k itself: P(k) falls off in proportion to k raised to a negative power – here the exponent is roughly 3, so P(k) varies as 1/k³. This is called a power law.
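To see what such a distribution looks like, here is a hedged illustration using the standard Barabási–Albert growth model – a generic recipe that happens to produce the same kind of 1/k³ power law reported for WordNet. It’s a stand-in, not the WordNet data itself:

```python
# Generic scale-free network via the Barabási-Albert model (not the
# WordNet data). Its degree distribution follows P(k) ~ 1/k**3.
from collections import Counter
import networkx as nx

G = nx.barabasi_albert_graph(n=100_000, m=3, seed=42)
N = G.number_of_nodes()
counts = Counter(d for _, d in G.degree())

# P(k) falls off steeply - very roughly a thousandfold for every
# tenfold increase in k, as a 1/k**3 law implies.
for k in (3, 10, 30):
    print(f"P({k}) ~ {counts.get(k, 0) / N:.5f}")

print("largest hub degree:", max(counts))  # a few nodes hog most links
```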
A network in which the links are apportioned this way has a special feature: it is a “small world”. This means that it’s just about always possible to find shortcuts that will take you from one node of the network (one word) to any other in just a small number of hops. It’s the highly connected, common words that provide these shortcuts. Some social networks seem to have this character too, which is why we speak of the famous “six degrees of separation”: we can be linked to just about anyone on the planet through just six or so acquaintances.
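The shortcut effect is easy to demonstrate with the classic Watts–Strogatz model – again a generic small-world network, not the semantic data. Rewiring just a tenth of a ring lattice’s links into random long-range shortcuts collapses the typical distance between nodes:

```python
# Small-world demo: a plain ring lattice versus the same lattice with
# 10% of its edges rewired into random long-range shortcuts.
import networkx as nx

ring = nx.watts_strogatz_graph(n=1000, k=10, p=0.0, seed=1)             # no shortcuts
small = nx.connected_watts_strogatz_graph(n=1000, k=10, p=0.1, seed=1)  # 10% shortcuts

print(nx.average_shortest_path_length(ring))   # ~50 hops around the ring
print(nx.average_shortest_path_length(small))  # ~4-5 hops: a small world
```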
Solé and Seoane now find that the semantic network is a small world only when it includes words that have more than one meaning (what linguists call polysemy). Take away polysemy, the researchers say, and the route between any pair of words chosen at random becomes considerably longer. By having several meanings, polysemic words can connect clusters of concepts that might otherwise remain quite distinct (just as “right” joins words about spatial relations to words about justice). Again, much the same is true of our social networks, which seem “small” because we each have several distinct roles or personas – as professionals, parents, members of a sports team, and so on – so that we act as links between quite different social groups, and the web becomes easy to navigate.
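Here’s that bridging effect in miniature – a made-up toy network in which “right” is the only polysemous word joining a spatial cluster to a justice cluster:

```python
# Toy polysemy demo: "right" is the sole bridge between two clusters.
import networkx as nx

G = nx.Graph()
spatial = ["left", "up", "down", "direction"]
justice = ["law", "fair", "court", "entitlement"]
G.add_edges_from((a, b) for a in spatial for b in spatial if a < b)  # spatial clique
G.add_edges_from((a, b) for a in justice for b in justice if a < b)  # justice clique
G.add_edges_from(("right", w) for w in ("left", "direction", "law", "entitlement"))

print(nx.shortest_path(G, "up", "court"))  # routes through "right"
G.remove_node("right")                     # abolish the polysemous word...
print(nx.has_path(G, "up", "court"))       # ...and the clusters fall apart: False
```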
The small-world character of social networks helps to make them efficient at spreading and distributing information. For example, it makes them “searchable”: if we want advice on bee-keeping, we might well have a friend who has a bee-keeping friend, rather than having to start from scratch. By the same token, Solé and Seoane think that small-world semantic networks make language efficient at enabling communication, because words with multiple meanings make it easier to put our thoughts into words. “We browse through semantic categories as we build up conversations”, Seoane explains. Let’s say we’re talking about animals. “We can quickly retrieve animals from a given category (say reptiles) but the cluster will soon be exhausted”, he says. “Thanks to ambiguous animals that belong to many categories at a time, it is possible to radically switch from one category to another and resume the search in a cluster that has been less explored.”
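As for what “searchable” means in practice, here is a rough sketch (my illustration, not the paper’s): greedy routing on a small-world ring, where each step simply hands the message to whichever neighbour sits closest to the target – the bee-keeping friend of a friend – using no global map of the network at all:

```python
# Greedy local search on a small-world ring: each node knows only its
# own neighbours and their positions on the ring.
import random
import networkx as nx

n = 1000
G = nx.connected_watts_strogatz_graph(n, k=10, p=0.1, seed=2)

def ring_dist(a, b):
    return min(abs(a - b), n - abs(a - b))

def greedy_route(src, dst):
    path = [src]
    while path[-1] != dst:
        nxt = min(G[path[-1]], key=lambda nb: ring_dist(nb, dst))
        if ring_dist(nxt, dst) >= ring_dist(path[-1], dst):
            return None  # stuck: no neighbour gets us any closer
        path.append(nxt)
    return path

random.seed(0)
routes = [greedy_route(0, random.randrange(1, n)) for _ in range(20)]
print([len(r) - 1 for r in routes if r])  # hop counts: tens, not hundreds
```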
What’s more, the researchers argue that the level of ambiguity we have in language is at just the right level to make it easy to speak and be understood: it represents an ideal compromise between the needs of the speaker and the needs of the listener. If every single object and concept has its own unique word, then the language is completely unambiguous – but the vocabulary is huge. The listener doesn’t have to do any guessing about what the speaker is saying, but the speaker has to say a lot. (For example, “Come here” might have to be something like “I want you to come to where I am standing.”) At the other extreme, if the same word is used for everything, that makes it easy for the speaker, but the listener can’t tell if she is being told about the weather or a rampaging bear.
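The shape of that compromise can be caricatured with some back-of-envelope arithmetic. The linear costs below are entirely made up for illustration – this is not the model in the paper, whose analysis is subtler – but they capture the tug-of-war: with M meanings spread over V words, the speaker’s burden grows with the vocabulary while the listener’s grows with the leftover ambiguity, and the cheapest language sits strictly between the two extremes:

```python
# A hypothetical toy cost model (not the authors'): M meanings, V words.
M = 1024  # meanings the language must cover

def total_cost(V):
    speaker = V        # speaker maintains one distinct form per word
    listener = M / V   # listener sifts M/V candidate meanings per word
    return speaker + listener

for V in (1, 4, 32, 256, 1024):
    print(f"V={V:5d}  cost={total_cost(V):7.1f}")

best = min(range(1, M + 1), key=total_cost)
print("cheapest vocabulary size:", best)  # 32: between the extremes
```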
Either way, communication is hard. But Solé and Seoane argue that with the right amount of polysemy, and thus ambiguity, speaker and listener can strike a good trade-off. What’s more, this compromise also brings the advantage of “collapsing” semantic space into a denser net that allows us to make fertile connections between disparate concepts. We have arguably even turned this small-world nature of ambiguity into an art form – we call it poetry. Or as you might put it,
Words, after speech, reach
Into the silence. Only by the form, the pattern,
Can words or music reach
The stillness.
Reference: R. V. Solé & L. F. Seoane, preprint http://arxiv.org/abs/1402.4802 (2014).
Does this mean that some traffic violations help increase the traffic flow; or that some tax evasion helps the economy; or that too much maths spoils the theory?
It reminds me of the Arrhenius relationship in chemical kinetics, in that actual reactions very rarely, if ever, follow the relationship exactly; but the theory is maintained simply because it ‘fits’ neatly into other theories, such as Boltzmann statistics and the statistical interpretation of entropy.
Indeed, if ambiguity were abolished in the physical sciences, would there ever have been a Chemistry department?