Here’s a piece I wrote for BBC Future this week, before discovering that the blighters on their science news desk were covering the work already. So there will be something else from me on Under the Radar later this week…
______________________________________________________________
Attempts to measure and define intelligence are always controversial and open to interpretation. But none, perhaps, is quite as recondite as that now proposed by two mathematical physicists. They say that there’s a kind of rudimentary intelligence that comes from acting in a way that maximizes your future options.
Alex Wissner-Gross of Harvard University in Cambridge, Massachusetts, and Cameron Freer of the University of Hawaii at Manoa have figured out a ‘law’ that enables inanimate objects to behave this way, in effect allowing them to glimpse their own future. If they follow this law, they can show behaviour reminiscent of some of the things humans do: for example, cooperating or using ‘tools’ to conduct a task.
The researchers think that their mathematical principle might help to provide a “physics of intelligence”: an explanation of smart actions rooted in the laws of thermodynamics.
Central to their claim is the concept of entropy. Popularly described as a measure of disorder, entropy more properly describes the number of different equivalent states a system can adopt. Think of a box full of gas molecules. There are lots more ways that they can disperse uniformly throughout the available space than there are ways they can all congregate in one corner. The former situation has greater entropy.
In principle, either arrangement could arise purely from the random motions of the molecules. But there are so many more configurations of the uniformly spread gas that it is much more likely, and in practice we never see all the gas shift into one corner. This illustrates the second law of thermodynamics, which states that the total entropy of the universe always increases – simply because that’s more probable than the alternatives.
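To get a feel for those odds, here is a toy count (my illustration, not the paper’s), assuming just 100 molecules shared between the two halves of the box:

```python
# Toy counting exercise (not from the paper): how many ways can N identical
# gas molecules be split between the left and right halves of a box?
# The entropy of each split is ln(number of arrangements), in units of k_B.
import math

N = 100  # number of molecules -- an illustrative choice

for n_left in (0, 10, 25, 50):
    ways = math.comb(N, n_left)   # arrangements with n_left molecules on the left
    entropy = math.log(ways)      # entropy of this split, in units of k_B
    print(f"{n_left:3d} on the left: {ways:.3e} arrangements, S/k_B = {entropy:.1f}")
```

Even with only 100 molecules, the even split has roughly 10^29 times as many arrangements as the all-in-one-corner state; with a realistic 10^23 molecules the disparity is beyond astronomical. That overwhelming head-count is all the second law amounts to.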
Some scientists have generalized this idea to propose that all processes of change happen in a way that has the greatest rate of entropy production. Not only do things head for the highest-entropy state, but they do so along a route that produces entropy at the greatest rate. There’s no rigorous proof that all things must happen this way, but the hypothesis of maximum entropy production has been used to account for processes such as the appearance of life, and also to design artificial-intelligence strategies that allow computers to become adept at complex games such as Go.
Wissner-Gross and Freer wondered if this hint at a link between maximal entropy production and intelligence could be made more concrete. They hit on the idea that ‘true’ intelligence is not, as they put it, “just greedily maximizing instantaneous entropy production”, but involves foresight: looking for a path that maximizes entropy production between now and some distant time horizon. For example, a good computer algorithm for playing Go might seek a strategy that offers the player the greatest number of options at all points into the future, rather than playing itself into a corner.
But how would an inanimate particle find that strategy? The researchers show that it can be defined via a mathematical expression for what they call the ‘causal path entropy’: a measure of the entropy associated with all the possible paths the particle might take up to some future time horizon. How would a particle behave if governed by the law that it must, at every instant, maximize this causal path entropy – which means, in effect, planning ahead?
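In the paper this quantity takes roughly the following form (my paraphrase of their definition): the causal path entropy S_c of a macrostate X over a horizon τ is the Shannon entropy of the distribution of all microscopic paths x(t) that could unfold from it, and the associated ‘causal entropic force’ pushes the system up the gradient of that entropy, with a strength set by a temperature-like parameter T_c:

\[
S_c(\mathbf{X},\tau) = -\,k_B \int \Pr\big(\mathbf{x}(t)\mid \mathbf{x}(0)\big)\,\ln \Pr\big(\mathbf{x}(t)\mid \mathbf{x}(0)\big)\,\mathcal{D}\mathbf{x}(t),
\qquad
\mathbf{F}(\mathbf{X}_0,\tau) = T_c\,\nabla_{\mathbf{X}}\,S_c(\mathbf{X},\tau)\,\Big|_{\mathbf{X}=\mathbf{X}_0},
\]

where the path integral runs over every trajectory of duration τ consistent with starting from the present macrostate.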
Objects whose motions are guided solely by the conventional laws of motion are doomed to a blind, dumb presentism – they just go where the prevailing forces take them. Think again of those gas molecules in a box: each particle wanders aimlessly in a random walk, exploring the confining space without prejudice.
Yet when Wissner-Gross and Freer impose on such a meandering particle the demand that it move in a way that maximizes the causal path entropy, its behaviour is quite different: it tends to hover around the centre of the box, where it suffers the fewest constraints on its future motion. They then explored the consequences of their new law for a so-called ‘cart and pole’ – a pendulum attached to a mobile cart, which can be stabilized in an inverted, head-up position by moving the cart back and forth, like balancing a stick on your palm. Early hominids are thought to have mastered such a delicate balancing act when they learnt to stand upright – and it’s a trick also achieved by a cart and pole obeying the ‘maximum causal entropy’ law.
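A minimal sketch of that first experiment, under my own simplifying assumptions rather than the authors’ actual method: a particle in a one-dimensional lattice box scores each candidate move by a crude Monte Carlo proxy for causal path entropy (the Shannon entropy of where a bundle of sampled random futures ends up) and always takes the highest-scoring move.

```python
# A toy sketch (my assumptions, not the authors' algorithm): a particle in a
# 1-D lattice "box" picks each move by maximizing a crude proxy for causal
# path entropy: the Shannon entropy of the endpoints of K sampled random
# futures of horizon H. All parameter values are illustrative guesses.
import random
from collections import Counter
from math import log

L = 21        # lattice sites 0 .. L-1 (the box)
H = 20        # planning horizon, in steps
K = 100       # sampled futures per candidate move
STEPS = 500   # length of the run

def step(x, move):
    """Apply a move of -1, 0 or +1, with reflecting walls."""
    return min(max(x + move, 0), L - 1)

def future_entropy(x):
    """Entropy (nats) of where K random futures starting from x end up."""
    ends = Counter()
    for _ in range(K):
        y = x
        for _ in range(H):
            y = step(y, random.choice((-1, 0, 1)))
        ends[y] += 1
    return -sum((c / K) * log(c / K) for c in ends.values())

x = 2                        # start near a wall
visits = Counter()
for _ in range(STEPS):
    # take the move whose sampled futures keep the most options open
    x = step(x, max((-1, 0, 1), key=lambda m: future_entropy(step(x, m))))
    visits[x] += 1

print("time-averaged position:", sum(p * n for p, n in visits.items()) / STEPS)
print("most visited sites:", visits.most_common(5))
```

This greedy, sampled proxy is much cruder than the force defined above, but it reproduces the qualitative result: staying clear of the walls preserves the richest spread of futures, so the particle loiters near the middle.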
Weirder things become possible too. Wissner-Gross and Freer looked at a system composed of three disks in a box: a large one (I), a small one (II), and another small one (III) trapped inside a tube too narrow for I to enter. Suppose now that the movements of disk I are dictated by causal entropic ‘forcing’. In this case, the disk conspires to collide with II so that II can bounce into the tube and eject disk III. Liberating III means that the disks now have more ways to arrange themselves than when it was confined – they have more entropy. But to gain access to that entropy, disk I essentially uses II as a tool.
Similarly, two small disks governed by the causal entropic force showed a kind of social collaboration, working together to drag a large disk down into a space where they could ‘play’ with it, offering more possible states in total – another behaviour that looks strangely ‘intelligent’.
In these cases there is no real reason why the particles should be controlled by the causal entropic force – the researchers just imposed that property. But they suggest that, in a Darwinian evolving system, objects able in this way to ‘capture’ a greater slice of the future might gain an adaptive advantage, so that such a force might be naturally selected. Not only could this offer clues about the emergence of intelligent, adaptive behaviour in the living world, but the general principle might also be useful for designing artificially intelligent systems, and perhaps even for understanding problems in economics and cosmology.
Reference
A. D. Wissner-Gross & C. E. Freer, Physical Review Letters 110, 168702 (2013).
1 comment:
This is similar to MERW: https://en.wikipedia.org/wiki/Maximal_entropy_random_walk
Regarding its interpretation, let's look at the simplest situation: dynamics on the [0,1] interval.
Standard diffusion or chaos leads to a uniform stationary probability distribution, rho = 1.
Such entropy maximization instead leads to rho ~ sin^2 – indeed localized in the center – and this is the same as the ground-state probability distribution of quantum mechanics in this infinite potential well.
There is no magic in such entropy maximization – it just repairs the difference between standard classical diffusion and quantum behavior, leading for example to Anderson localization effects, such as preventing a semiconductor from being a conductor.
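The sin^2 claim is easy to check numerically in a discrete setting; here is a sketch assuming a chain of N sites, where the MERW stationary density is the squared dominant eigenvector of the chain’s adjacency matrix:

```python
# A quick numerical check (my sketch, not the commenter's code): for a maximal
# entropy random walk (MERW) on a chain of N sites, the stationary density is
# the squared dominant eigenvector of the adjacency matrix, which matches the
# sin^2 ground state of an infinite potential well.
import numpy as np

N = 50                                                         # number of sites (illustrative)
A = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)   # chain adjacency matrix

_, vecs = np.linalg.eigh(A)                          # eigenvalues in ascending order
psi = vecs[:, -1]                                    # dominant eigenvector
rho_merw = psi**2 / np.sum(psi**2)                   # MERW stationary distribution

x = np.arange(1, N + 1)
rho_well = np.sin(np.pi * x / (N + 1))**2            # infinite-well ground-state density
rho_well /= rho_well.sum()

print("max |rho_MERW - rho_sin^2| =", np.abs(rho_merw - rho_well).max())
```

The printed difference is at the level of numerical round-off: on the chain, the MERW density and the infinite-well ground-state density coincide.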