Thursday, May 09, 2013

Entropy strikes at the New Yorker

Well, here is a curious thing. On this blog I wrote recently about a paper in Physical Review Letters – my piece was originally written for BBC Future, but had to be dropped when the main BBC news team picked up on the same work.

Now psychologist Gary Marcus and computer scientist Ernest Davis have commented on the work in the New Yorker, criticizing it for making overblown and unsupported claims about AI and intelligence. And they cite my piece as evidence of media hype.

I’m flattered, of course, that my humble blog should be awarded such status, as I have always assumed that it is read solely by its 43 faithful followers. I’m even more flattered that Marcus and Davis generously call me ‘well respected’. And I generally enjoy this sort of piece, which punctures the habitual hype of scientific PR and the media’s parroting of it.

But I think they are utterly mistaken in their criticisms. They seem to have totally misunderstood what the paper is saying. They are apparently under the impression that the authors think they have discovered a new law which makes inanimate particles do amazing things in the real world. But “the physics is make-believe”, they complain – “inanimate objects simply do not behave in the way that the theory of causal entropic forces asserts”. So this ‘causal entropic force’ makes a particle stay in the middle of a box – but hey, real gas particles don’t do that, they move randomly! They can’t all go to the centre, because then the gas would condense spontaneously (and incidentally, the second law of thermodynamics would crumble)! So what makes this one particle so special?

Oh lord, where to begin? Wissner-Gross and Freer are not saying that this is something that real particles do, and that no one noticed before. They are saying that if one were to assume this kind of physics, what emerges are weirdly ‘intelligent-looking’ behaviours, which even seem to have something instrumental about them. A genuinely valid complaint would be not “But that’s not how things are!”, but rather, “What’s the point in invoking a law like this, if there’s no good reason to think it is ever manifested?” But that’s to totally miss the interest here, which is that a constraint that seems very dry and abstract (the capacity to integrate over all possible futures, so as to maximize the rate of entropy production over an entire trajectory) produces behaviour that has some very striking characteristics. The point is that one would not guess those outcomes by looking purely at the law that produces them – it is an emergence-like phenomenon. When Marcus and Davis say that “There is no evidence yet that causal entropic processes play a role in the dynamics of individual neurons or muscular motions”, they seem to be under the impression that the authors have claimed otherwise.
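To make the flavour of that emergence concrete, here is a deliberately crude sketch (my own illustration, not the authors’ algorithm): a one-dimensional walker in a box that always steps toward whichever neighbouring position leaves the most futures open, using a simple count of distinct reachable states as a hypothetical stand-in for the paper’s causal path entropy. Nothing in the rule mentions “the middle of the box”, yet the walker abandons the wall and parks itself well away from it.

```python
def reachable_count(x, horizon, lo=0, hi=20):
    """Number of distinct positions reachable from x within `horizon`
    unit steps, confined to the box [lo, hi] -- a crude stand-in for
    the path entropy of the walker's possible futures."""
    frontier, seen = {x}, {x}
    for _ in range(horizon):
        frontier = {min(hi, max(lo, p + d)) for p in frontier for d in (-1, 1)}
        seen |= frontier
    return len(seen)

def entropic_step(x, horizon=8, lo=0, hi=20):
    """Greedily move to whichever neighbour keeps the most futures open."""
    candidates = [c for c in (x - 1, x, x + 1) if lo <= c <= hi]
    return max(candidates, key=lambda c: reachable_count(c, horizon, lo, hi))

x = 2                      # start near the left wall
for _ in range(30):
    x = entropic_step(x)
print(x)                   # → 8: the walker has drifted away from the wall
```

With horizon = 8, every position at least eight sites from both walls is equally good, so the walker settles at the edge of that flat region; make the horizon half the box width and the unique optimum becomes the exact centre.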

They build another straw man in what they say about AI: “Wissner-Gross’s work promises to single-handedly smite problems that have stymied researchers for decades.” No, it really doesn’t. There’s nothing in the paper about AI, aside from some introductory remarks about how maximum-entropy methods have been used in some approaches.

Where I might have some sympathy with Marcus and Davis is in regard to a fairly loopy piece about the work on the scifi website io9 (“We come from the future”), which says “the theory offers novel prescriptions for how to build an AI — but it also explains how a world-dominating superintelligence might come about.” Here Wissner-Gross does expand on what he has in mind about AI. He is mostly reasonably reserved about that, implying only that their approach might suggest a new angle. But then we get into Terminator territory: “one of the key implications of Wissner-Gross’s paper is that this long-held assumption [that intelligent machines will decide to take over the world] may be completely backwards — that the process of trying to take over the world may actually be a more fundamental precursor to intelligence, and not vice versa.” Huh? Well, you see, Wissner-Gross talks about particles “trying to take control of the world”. By this, I would assume he means that the causal entropic force directs the particle’s behaviour along particular trajectories that may involve a tendency to arrange the immediate environment. But for the io9’ers, “the world” becomes our planet, and “take control” becomes “impose its remorseless robotic mind”. Now, I can’t tell how much of this came from a degree of injudiciousness in Wissner-Gross’s comments to the reporter, and how much was a post hoc arranging of quotes to fit a narrative. But it seems harsh to criticize a scientific paper on the basis of what a sensationalist news account says about it.

The basic problem here seems to be that Marcus and Davis assume that, when Wissner-Gross and Freer talk about “intelligence”, they must be talking about the same thing that psychologists see day to day in humans. So it’s “intelligent” always to maximize your future options, huh? Well, then, what about this? – “Apes prefer grapes to cucumbers. When given a choice between the two, they grab the grapes; they don’t wait in perpetuity in order to hold their options open. Similarly, when you get married, you are deliberately restricting the options available to you; but that does not mean that it is irrational or unintelligent to get married.” It’s a little bit like saying that bacteria don’t show rudimentary cognition in climbing up chemical gradients because, hey, we sometimes decide not to move towards smells that we really like. The authors were not claiming that all “intelligent” behaviour must be governed by the causal entropy principle, but simply that this remarkably simple rule can produce what look like intelligent behaviours.

“What Wissner-Gross has supplied is, at best, a set of mathematical tools, with no real results beyond a handful of toy problems”, they say. And yes, that is really all the paper claims to do. “Toy problems” is here meant to be dismissive – Marcus and Davis don’t seem to know that physicists talk about “toy models” all the time, meaning minimal, obviously too-simple ones that have illustrative, heuristic and suggestive value, rather than ones that are pointless and silly. “There is no reason to take it seriously as a contribution, let alone a revolution, in artificial intelligence”, they continue, “unless and until there is evidence that it is genuinely competitive with the state of the art in A.I. applications.” Can they really believe the authors think they have a way of doing AI that will beat the state of the art (but that they forgot to mention it in their paper)?

Sure, “it would be grand, indeed, to unify all of physics, intelligence, and A.I. in a single set of equations”, they jeer. To unify all of physics (let alone the rest of it)??! Come on chaps, now you’re really just making it up.

3 comments:

Ernest Davis said...

Philip --
First, have you seen Wissner-Gross' video?
http://www.entropica.com/
He claims there that the cart simulation indicates that there are promising applications of Entropica (his software) to upright walking, and that other simulations show applications to manufacturing, social cooperation, social networking, military deployment, game playing, and financial investment. So “single-handedly smite problems that have stymied researchers for decades” is hardly hyperbole.

Second, if all he wants to claim is that the constraint leads to intelligent behavior, then why bring in a connection to physics? He could have presented it as a paper in AI or cognitive science, claiming that this is a useful search heuristic. The claim that this principle, as physics, is relevant to intelligence in organisms is only meaningful if you believe that biophysics depends on causal entropic forces.

Third, in Wissner-Gross's simulations the "weirdly intelligent behavior" is built into the way he defines the option space. In the simulation with three disks that imitates an ape using a tool to get food out of a tube, he defines the option space as maximized when the disk labelled "ape" is in contact with the disk labelled "food". Causal entropic forces are defined as those that lead to a state of maximum options. Not surprisingly, therefore, if you apply the causal entropic forces, you attain what you have defined as the state of maximum options, namely that the ape disk is in contact with the food disk. If he had wanted to devise a simulation that imitated someone putting a lid on a jar to keep the contents safe, he could have defined the option space in such a way that it is maximized when the lid is on the jar. It's perfectly circular.

-- Ernie Davis

Gary Marcus said...

Dear Philip,

We are baffled by your post, and can't help but wonder whether you forgot to do your homework.

In particular, you write above that "The basic problem here seems to be that Marcus and Davis assume that when Wissner-Gross and Freer talk about “intelligence”, they must be talking about the same thing that psychologists see day to day in humans," as if W-G and Freer weren't making such claims.

But they were! Please take a few seconds to view this video (linked in our story, from a newly launched Wissner-Gross company that was spawned by the paper) at http://www.entropica.com, wherein you will see (and hear) this dramatic opening:

"Entropica is a powerful new kind of artificial intelligence that can reproduce complex human behaviors ... Entropica can walk upright, use tools, cooperate, play games, make useful social introductions, globally deploy a fleet, even earn money trading stocks, all without being told how to do so."

It is this hype that we were responding to, and the same hype that undermines your counterargument. Entropica's claims simply aren't modest.

-- Gary Marcus

Ernest Davis said...

I might add that, in an email to us which we quoted in our article, Wissner-Gross said that he believes causal entropic forces are involved in biophysics. He wrote:

Our hypothesis is that causal entropic forces provide a useful—and remarkably simple—new biophysical model for explaining sophisticated intelligent behavior in human and nonhuman animals.

So your statement "When Marcus and Davis say that 'There is no evidence yet that causal entropic processes play a role in the dynamics of individual neurons or muscular motions', they seem to be under the impression that the authors have claimed otherwise," does not correspond to Wissner-Gross's own understanding of his own work.