I recently protested at criticisms, published in the New Yorker by Gary Marcus and Ernie Davis, of a paper in Physical Review Letters that claimed to extract a kind of ‘intelligence’ from a simple rule governing the dynamics of particles. I’d found the work interesting, and had written an unpublished account of it.
Well, I may have spoken too hastily. The paper, by Alex Wissner-Gross of Harvard and Cameron Freer of the University of Hawaii, does seem to me to be interesting and quite soberly presented. But it seems that the main target of Gary and Ernie’s criticisms was the extra-curricular claims that Alex was making for the work, especially in a video presentation for his new start-up Entropica. I’ve now taken a look at this, and I do think it seems rather over the top.
One criticism Gary and Ernie make is that the physics of the paper is ‘made up’. The idea is that, if one imposes on the particle’s dynamics the constraint that it maximize the rate of entropy production over its entire future history – which means giving it an ability to look ahead – then one finds it doing all sorts of interesting things, such as cooperating with other particles or using them like ‘tools’. The objection is that real particles don’t obviously behave this way. But I’d maintain that there is a long and healthy tradition in physics of applying this sort of ‘what if’ thinking: what if the system were governed by this rule rather than that one? That’s interesting for exploring the range of possibilities that a system has access to. It’s a particularly common habit in cosmology and the outer reaches of fundamental physics such as string theory, but happens throughout physics – among other things, it’s a way of exploring what is essential and what is not for the phenomena you’re interested in. I can see that this might look a little odd to other scientists – what’s the point of inventing laws that might not be real? – but it’s a useful way of helping physicists develop intuitions.
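To give a feel for the rule, here is a toy sketch – emphatically not the algorithm of the paper, which works with forces in continuous phase space, and with all names and parameters my own invention. A walker on a line segment tries each possible move, estimates by random sampling how diverse its reachable futures would be from the resulting position, and takes the move that keeps its options most open:

```python
import random
from collections import Counter
from math import log

def path_entropy(pos, horizon, samples, lo=0, hi=20, rng=random):
    """Plug-in Shannon entropy (nats) of the endpoint distribution of
    `samples` random rollouts of length `horizon` from `pos`, with the
    walk clipped to the segment [lo, hi]."""
    ends = Counter()
    for _ in range(samples):
        x = pos
        for _ in range(horizon):
            x = min(hi, max(lo, x + rng.choice((-1, 0, 1))))
        ends[x] += 1
    return -sum(c / samples * log(c / samples) for c in ends.values())

def causal_entropic_step(pos, horizon=12, samples=300, lo=0, hi=20, rng=random):
    """Greedy 'entropic' move: try each action and keep the one whose
    successor state leaves the most diverse set of reachable futures."""
    best = max((-1, 0, 1),
               key=lambda d: path_entropy(min(hi, max(lo, pos + d)),
                                          horizon, samples, lo, hi, rng))
    return min(hi, max(lo, pos + best))

rng = random.Random(0)
x = 1                      # start pressed up against the left wall
for _ in range(30):
    x = causal_entropic_step(x, rng=rng)
print(x)                   # the walker tends to drift away from the wall
```

One would expect the walker to migrate toward the middle of the segment, where the set of reachable futures is largest – an echo, in crude discrete form, of the paper’s observation that this kind of forcing pushes a particle toward positions of maximum future freedom.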
Besides, this particular choice of ‘what if’ is well motivated. For one thing, we’re familiar with the idea that the trajectories of photons in quantum electrodynamics are determined by a kind of integration over all possible paths. What’s more, the principle of maximum entropy production – albeit in the moment, not in the future – has been invoked (e.g. by Jaynes) as a criterion for the behaviour of non-equilibrium systems. So this seems an interesting parameter space to explore, and I don’t agree with Ernie that the paper hardly seems to belong in a physics journal.
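For the record, the central quantity of the paper – as I read it, in my own paraphrase of the notation – is a ‘causal path entropy’ $S_c(X, \tau)$, the Shannon entropy of the distribution of possible paths of duration $\tau$ available to the system from a macrostate $X$, and the ‘causal entropic force’ is taken to be its gradient:

$$F(X_0, \tau) = T_c \, \nabla_X S_c(X, \tau)\big|_{X = X_0},$$

where $T_c$ is a constant (a ‘causal path temperature’) setting the strength of the forcing.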
Ernie says, I think rightly, that “To some extent, I think the difference between your viewpoint and Wissner-Gross' on the one side and Gary's and mine on the other reflects the difference in disciplines. Physicists may be taken with this theory as a parsimonious equation that gives rise to behavior that looks like an elementary approximation of intelligence. As a psychologist and AI researcher, we look for theories at the state of the art in terms of explanatory or computational power, and we care very little about parsimony, which neither useful psychological theories nor useful AI programs generally manifest to any marked degree.”
But then there’s the question of whether this toy system has anything to tell us about ‘real’ intelligence of the sort one sees in the living world. And even if it doesn’t, might the approach be useful in other ways, for example in artificial intelligence?
On both of these issues, the paper itself is modest and largely silent (as it should be), and that is why I felt Gary and Ernie were being harsh. But that Entropica video seems to want to make the analogies direct – comparing the cooperating particles to cooperating animals, say, and claiming that “Entropica is a powerful new kind of artificial intelligence that can reproduce complex human behaviors”. It says that Entropica can “earn money trading stocks, without being told to do so,” and shows it commanding a fleet of ships (though it’s not too clear what they are supposed to be doing). There is an awful lot of “just as… so…” talk here, and once you start showing real animals using tools and cooperating in a task, you’re starting to imply that this is the kind of thing your model explains.
Now, maybe Alex has more concrete results than he is disclosing. But I’m not convinced on the basis of what I’ve seen so far. For example, did Entropica actually “make money”, or just perform OK in some simple simulation of a stock market? Will the ‘tool use’ results really have applications for “agriculture”, and if so, what on earth would those be?
It’s not clear from this video whether Alex thinks his ‘causal entropic law’ tells us anything about actual human intelligence or animal behaviour, rather than producing behaviours that just look a bit like it. Gary and Ernie have interpreted some of his comments as making such claims, but I’m not so sure – it seems possible that he is just suggesting he has a simple framework that offers a different way of thinking about the issue, just as some simple biomechanical models can produce something that looks like bipedal walking. But I admit that a comment like “Our hypothesis is that causal entropic forces provide a useful—and remarkably simple—new biophysical model for explaining sophisticated intelligent behavior in human and nonhuman animals” could be interpreted either way, and Alex might need to be careful how he phrases things to avoid a misleading impression.
I don’t know that this is such a big deal. If I were an investor seeing the Entropica video, I’d be unimpressed by the lack of evidence to support grand claims, and indeed the lack of any indication of how Entropica works. It’s a long way from the hype that accompanies some big science projects. Yet I do now understand Gary and Ernie’s scepticism: it does rather look as if Alex is trying to jump much too far ahead too quickly. And perhaps that is part of a broader problem in science, which apparently can no longer be content simply to advertise its own merits but instead has to spawn a start-up right away. In the end, Entropica will of course stand or fall on its ability to address real problems. I’ll be curious to see if it does so. In the meantime, it remains no more and no less than an interesting bit of exploratory physics.