Sunday, August 07, 2022

The Spectator's review of The Book of Minds: a response

 

There is a review of The Book of Minds in The Spectator by philosopher Jane O’Grady. I have some thoughts about it.

First, it is always nice to have a review that engages with the book rather than just describes it. And O’Grady says some nice things about it. So I’m not unhappy with the review. 

But it does, I must say, seem to me a little odd, and occasionally wrong or misleading.

Odd primarily because it talks about so little of the book itself; it is more an exegesis of the reviewer’s own thoughts. The review focuses almost entirely on the question of definitions of mind and what these imply for putative “machine minds”. There is barely any mention of the substance of the book: the account of how to regard the human mind, the discussion of the minds of animals and other living things, thoughts on alien minds, and a chapter on free will. I suspect the reader of the review would struggle to get any real sense of what the book is about.

In terms of what the review does cover, there are some misrepresentations, both of what the book says and of current thinking in the relevant fields.

O’Grady says that in defining a mind thus – “For an entity to have a mind, there must be something it is like to be that entity” – I am reprising philosopher Thomas Nagel, essentially implying that I am using Nagel’s definition of mind. But I am not. Nagel did not define mind this way, and I never suggest he did. So the suggestion that I have somehow misunderstood Nagel in this respect is way off beam. 

Besides, I suggest my definition as a basis to work with and nothing more. I state explicitly that it is neither scientifically nor philosophically rigorous – because no definition of mind is. One can propose other definitions with equal justification. But the key point of the book is that thinking about a space of possible minds obviates any gatekeeping: we do not need to obsess or argue about whether something has a mind (by some definition) or not, although we can reasonably suppose that some things (us) do and some things (a screwdriver) don’t. Rather, we can ask about the qualities that seem to define mind: does this entity have some of them, and to what degree? We can find a place for machines and organisms of all sorts in this space, even if we decide that their degree of mindedness is infinitesimally small. In other words, we avoid the kind of philosophical tendentiousness evident in this review.

O’Grady writes: “To use quiddity of consciousness as a criterion of mindedness, as Ball does, excludes machines at the outset.” 

This is simply wrong. My working definition only excludes today’s machines, which is consistent with what most people who design and build and theorize about those machines think. I do not exclude the possibility of conscious machines, but I explain why they will not simply arise by making today’s AI more powerful along the same lines. It will require something else, not just a faster deep-learning algorithm trained on more data. That is the general view today, and it is important to make it clear. To make a conscious machine – a genuine “machine mind” in my view – is a tremendous challenge, and we barely know yet how to begin it. But it would be foolish, given the present state of knowledge, to exclude the possibility, and I do not. 

Of course, one could adopt another definition of “mind” that will encompass today’s computers too (and presumably then also smartphones and other devices). That’s fine, except that I don’t think most AI researchers or computer scientists would regard it as advisable. 

O’Grady writes: “Nor are ‘internal models of the world’ – another ‘feature of mind’ Ball suggests – open to outside observation.”

But they are. That is precisely what some of the careful work on animal cognition aims to do: to go beyond mere observation of responses by figuring out what kind of reasoning the animal is using. It is difficult work, and it is hard to be sure we have made the right deductions. But it seems to be possible.

She asks: “And how could any method at all be used to discern if matter is suffused with mind (panpsychism)?”

Indeed – that would be very hard to prove, and I’m not sure how one could do it. I don’t rule out that some ingenious method could be devised to test the idea, but it’s not obvious to me what that might be, and it is one of the shortcomings of the hypothesis: it is not obviously testable or falsifiable. This does not mean it is wrong, as I say.

She asks: “But is the mind, rather than being any sort of entity, nothing other than what it does (functionalists’ solution)?”

Well, that’s a possible view. Is it O’Grady’s? I simply can’t tell – in that paragraph, I can’t figure out if she is talking about the positions I espouse (and which she quotes), or challenging them. Can you? At any rate, I mention the functionalist position as one among others.

O’Grady writes: “He misunderstands the Turing Test. ‘Thinking’ and ‘intelligence’ in Turing’s usage (which is now everyone’s) are not mere faute-de-mieux substitutes but the real thing. The boundaries of mind have (exactly as Ball urges) been extended, so that mind-terms which once needed to be used as metaphors, or placed in inverted commas, are treated as literal.” 

This is untrue. We have no agreed definition of “thinking” or “intelligence”. Many in AI question whether “artificial intelligence” is really a good term for the field at all. What Turing meant by these terms has been debated extensively, and still is. But you’ll have to search hard to find anyone knowledgeable about AI today who thinks that today’s algorithms can be said to “think” in the same sense (let alone in the same way) as we “think”, or to be “intelligent” in the same way as we are “intelligent”.

O’Grady writes: “Minds are themselves declared to be kinds of computer.” Yes, and as I point out in the book, that view has also been strongly criticized. 

She concludes that “Ball gives us an enjoyable ride through different perspectives on the mind but seems unaware of how jarringly incommensurate these are, nor that, by enlarging the parameters of mind, we have simultaneously shrunk them.”

I simply don’t understand what she is trying to say here. I discuss different perspectives on some issues – biopsychism, say, or consciousness – and try to indicate their strengths and weaknesses. I explain that there are differences between many of these views, but I’ve truly no idea what O’Grady intends by calling them “jarringly incommensurate”, and I suspect the reader will be equally in the dark about what point is being made. As for “by enlarging the parameters of mind, we have simultaneously shrunk them” – well, do you catch the meaning of that? I’m afraid I don’t.

The basic problem, it seems to me, is that O’Grady has definite views on what minds are and what machine minds can be, and my book does not seem to her to reflect those – or rather, she cannot find them explicitly stated in the book (although in all honesty I’m still unclear what O’Grady does think in this regard). And therein lies the danger, for she seems to be presenting her view as the correct one, even though a myriad of other views exist.

Of course, I anticipated this potential problem, because the philosophy of mind can be very dogmatic even though (or perhaps precisely because) it enjoys no consensus view. What I have attempted to do in my book is to lay out some of the range of thinking in this area, to assess its strengths and weaknesses, and to be frank about what we don’t know or agree about. To do so is inevitably to invite disagreement from anyone who thinks we already have the answers.

Yet again, I think this illustrates the pitfalls of books written by specialists on topics that are still very much works in progress (and both the science and the philosophy of mind are surely that). There is no shortage of books claiming to “explain” the mind, and many have very interesting things to say. But we don’t know which of them, if any, is correct, or even on the way to being correct. My aim instead has been to suggest a framework for thinking about minds, and moreover one that does not need to be too dogmatic about what a mind is or where it might be found. I hope readers will approach the book with that perspective in mind.
