Sunday, August 07, 2022

The Spectator's review of The Book of Minds: a response


There is a review of The Book of Minds in The Spectator by philosopher Jane O’Grady. I have some thoughts about it.

First, it is always nice to have a review that engages with the book rather than just describes it. And O’Grady says some nice things about it. So I’m not unhappy with the review. 

But it does, I must say, seem to me a little odd, and occasionally wrong or misleading.

Odd primarily because it talks about so little of the book itself; it is more an exegesis of the reviewer’s own thoughts. The review focuses almost entirely on the question of definitions of mind and what these imply for putative “machine minds”. There is barely any mention of the substance of the book: the account of how to regard the human mind, the discussion of the minds of animals and other living things, thoughts on alien minds, and a chapter on free will. I suspect the reader of the review would struggle to get any real sense of what the book is about. 

In terms of what the review does cover, there are some misrepresentations both of what the book says and of thinking in the respective fields.

O’Grady says that in defining a mind thus – “For an entity to have a mind, there must be something it is like to be that entity” – I am reprising philosopher Thomas Nagel, essentially implying that I am using Nagel’s definition of mind. But I am not. Nagel did not define mind this way, and I never suggest he did. So the suggestion that I have somehow misunderstood Nagel in this respect is way off beam. 

Besides, I suggest my definition as a basis to work with and nothing more. I state explicitly that it is neither scientifically nor philosophically rigorous – because no definition of mind is. One can propose other definitions with equal justification. But the key point of the book is that thinking about a space of possible minds obviates any gatekeeping: we do not need to obsess or argue about whether something has a mind (by some definition) or not (although we can reasonably suppose that some things do (us) and some don’t (a screwdriver)). Rather, we can ask about the qualities that then seem to define mind: does this entity have some of them, and to what degree? We can find a place for machines and organisms of all sorts in this space, even if we decide that their degree of mindedness is infinitesimally small. In other words, we avoid the kind of philosophical tendentiousness evident in this review. 

O’Grady writes: “To use quiddity of consciousness as a criterion of mindedness, as Ball does, excludes machines at the outset.” 

This is simply wrong. My working definition only excludes today’s machines, which is consistent with what most people who design and build and theorize about those machines think. I do not exclude the possibility of conscious machines, but I explain why they will not simply arise by making today’s AI more powerful along the same lines. It will require something else, not just a faster deep-learning algorithm trained on more data. That is the general view today, and it is important to make it clear. To make a conscious machine – a genuine “machine mind” in my view – is a tremendous challenge, and we barely know yet how to begin it. But it would be foolish, given the present state of knowledge, to exclude the possibility, and I do not. 

Of course, one could adopt another definition of “mind” that will encompass today’s computers too (and presumably then also smartphones and other devices). That’s fine, except that I don’t think most AI researchers or computer scientists would regard it as advisable. 

O’Grady writes: “Nor are ‘internal models of the world’ – another ‘feature of mind’ Ball suggests – open to outside observation.”

But they are. That is precisely what some of the careful work on animal cognition is aiming to do: to go beyond mere observation of responses by figuring out what kind of reasoning the animal is using. It is difficult work, and it is hard to be sure we have made the right deductions. But it seems to be possible.

She asks: “And how could any method at all be used to discern if matter is suffused with mind (panpsychism)?”

Indeed – that would be very hard to prove, and I’m not sure how one could do it. I don’t rule out that some ingenious method could be devised to test the idea, but it’s not obvious to me what that might be, and it is one of the shortcomings of the hypothesis: it is not obviously testable or falsifiable. This does not mean it is wrong, as I say.

She asks: “But is the mind, rather than being any sort of entity, nothing other than what it does (functionalists’ solution)?”

Well, that’s a possible view. Is it O’Grady’s? I simply can’t tell – in that paragraph, I can’t figure out if she is talking about the positions I espouse (and which she quotes), or challenging them. Can you? At any rate, I mention the functionalist position as one among others.

O’Grady writes: “He misunderstands the Turing Test. ‘Thinking’ and ‘intelligence’ in Turing’s usage (which is now everyone’s) are not mere faute-de-mieux substitutes but the real thing. The boundaries of mind have (exactly as Ball urges) been extended, so that mind-terms which once needed to be used as metaphors, or placed in inverted commas, are treated as literal.” 

This is untrue. We have no agreed definition of “thinking” or “intelligence”. Many in AI question whether “artificial intelligence” is really a good term for the field at all. What Turing meant by these terms has been debated extensively, and still is. But you’ll have to search hard to find anyone knowledgeable about AI today who thinks that today’s algorithms can be said to “think” in the same sense (let alone in the same way) as we “think”, or to be “intelligent” in the same way as we are “intelligent”.

O’Grady writes: “Minds are themselves declared to be kinds of computer.” Yes, and as I point out in the book, that view has also been strongly criticized. 

She concludes that “Ball gives us an enjoyable ride through different perspectives on the mind but seems unaware of how jarringly incommensurate these are, nor that, by enlarging the parameters of mind, we have simultaneously shrunk them.”

I simply don’t understand what she is trying to say here. I discuss different perspectives on some issues – biopsychism, say, or consciousness – and try to indicate their strengths and weaknesses. I’ve truly no idea what O’Grady intends by these “jarringly incommensurate” differences. I explain that there are differences between many of these views. I’m totally in the dark about what point is being made here, and I suspect the reader will be. As for “by enlarging the parameters of mind, we have simultaneously shrunk them” – well, do you catch the meaning of that? I’m afraid I don’t. 

The basic problem, it seems to me, is that O’Grady has definite views on what minds are, and what machine minds can be, and my book does not seem to her to reflect those – or rather, she cannot find them explicitly stated in the book (although in all honesty I’m still unclear what O’Grady does think in this regard). And therein lies the danger – for she seems to be presenting her view as the correct one, even though a myriad of other views exist. Of course, I anticipated this potential problem, because the philosophy of mind can be very dogmatic even though (or perhaps precisely because) it enjoys no consensus view.

What I have attempted to do in my book is to lay out some of the range of thinking in this area, and to assess strengths and weaknesses as well as to be frank about what we don’t know or agree about. To do so is inevitably to invite disagreement from anyone who thinks we already have the answers. Yet again I think this illustrates the pitfalls of books written by specialists on topics that are still very much work in progress (and both the science and the philosophy of mind are surely that).

There is no shortage of books claiming to “explain” the mind, and many have very interesting things to say. But we don’t know which of them, if any, is correct, or even on the way to being correct. What I have attempted to do instead is to suggest a framework for thinking about minds, and moreover one that does not need to be too dogmatic about what a mind is or where it might be found. I hope readers will read it with that perspective in mind.

Saturday, June 04, 2022

What do we mean when we say that science is political?

In commenting on the commonly voiced view that “science is political”, Stuart Ritchie makes an excellent point: we must ask “And then what?” Stuart lists some of the reasons why the claim is made, and agrees with all of them (as do I).


Where he and I disagree is with “then what?” Stuart says: “I don’t think the people who always tell you that ‘science is political’ are just idly chatting sociology-of-science for the fun of it. They want to make one of two points.” Either they are saying “It’s inevitable; just accept it”, or “It’s actually a good thing.”


Now, in fact Stuart himself effectively agrees that it is inevitable – and given his list, it is hard to see how he could say otherwise. But he says this doesn’t mean we just have to shrug and say “This is the best we can do.” I think he is right, insofar as we can and should seek to eliminate the biases – both cognitive and ideological – that sneak into efforts to gain objective, reliable knowledge, in ways that Stuart himself has written admirably about.


But I fear Stuart has fallen into that same trap. In wanting to make his point, he is succumbing to a subjective belief without checking out whether it is so. I believe I am one of the people quoted anonymously (via Chemistry World) as saying that science is political – but do I really want to make one of those two points? No, I don’t.


Rather, I want us to recognize and examine the ways in which science becomes political. One of the most insidious of these is via those who seek to defend the status quo from allegedly “politicized” tampering. That’s the case for the article by chemist Anna Krylov that prompted my piece for Chemistry World, as well as this piece for the journal in which Krylov’s article was published. Krylov’s piece is riddled with ideology, for example in her suggestion that reconsidering and updating scientific language and the individuals we choose to celebrate when social mores change is an impulse that comes from “extreme left ideology” and amounts to “spend[ing] the rest of our lives ghost-chasing and witch-hunting, rewriting history.” Her piece has been applauded by some who imply that science is being “politicized” if its institutions implement affirmative-action programs to improve diversity. Such views assume that the situation we have now is simply the natural order – a totally apolitical state of affairs that must resist any politicized interference. How absurd, they say, to suggest that Imperial College London was so named because the entire South Kensington complex of which it was a part was constructed from the fruits of an empire built on exploitation! How absurd to suggest that the fact that Imperial has five Black academics out of a total of 1600 has anything to do with social inequalities with deep historical roots, or indeed with the message that the very name of the college, or walls bedecked with images of white men, sends to people of colour who might consider applying there! Why should we imagine that the race and gender ratios in the sciences are anything other than the natural optimum for the progress of science? And so on. I wish people who have such views would expend some effort talking to students and staff of colour who are affected by this heritage.


I have no doubt that Stuart will see the absurdity of all that too. My impression is that he would regard efforts to correct these injustices as ways of making science less political, in the sense of being less shaped and compromised by the political and social injustices of the past. If so, I’d agree. Which is precisely why I felt it was important to call out those who wish to sustain a highly politicized status quo on the grounds that it is already somehow “apolitical”.


The pandemic has surely shown us that science sometimes has to be political. I suspect few would dispute that scientists have a duty, especially in such extreme circumstances, to offer their advice to policy-makers. In the UK at least, some scientists have taken that to mean that they must offer such advice as objectively and accurately as they can, and accept this as the sole extent of their formal obligations. But it has become clear that, the moment science walks onto the political stage, it is inherently political.


For example, scientists were asked to provide modelling forecasts of how the pandemic was likely to play out if various policy options were implemented. They could have taken the view that their duty extends only to performing such modelling as accurately and reliably as possible, and conveying the findings clearly and honestly. This is certainly essential. But as members of the Covid modelling advisory group have explained, they only modelled the scenarios they were asked to model. This does not – and did not – necessarily provide a scientifically satisfactory answer to the question the modelling was supposed to address. To predict the consequences of relaxing restrictions, say, it would be necessary also to model the scenario in which they were not relaxed. This was not done, because it was not asked for. Should the scientists have modelled that case anyway and published the results with the rest? That might have been seen as a political act. But to not do so – and more generally, to not model all reasonable policy options – could compromise the scientific rigour of the process. That too is a political decision.


What is the poor modeller to do? Damned if they do, damned if they don’t! But this isn’t the right way to see it. Rather, involvement in the political process means that “the science” is necessarily political – there is no longer an “objective”, apolitical position.


The same applied when the news broke of government adviser Dominic Cummings having broken lockdown rules with his Durham trip in March 2020. On that occasion, the government chief scientists were questioned by reporters for their views, and declined to comment on the grounds that they had “no desire to get involved in politics”. But Cummings’ violation of the rules was not purely a political matter, for it would obviously have implications for trust in governance and compliance with lockdown measures. By failing to affirm – as deputy chief medical officer Jonathan Van-Tam later did – that the rules applied to everyone, and that by implication Cummings should not have broken them, Chris Whitty and Patrick Vallance were making a choice with implications for public health. Their silence was, in other words, political too. Whether it was the right or wrong decision is another discussion; the point is that they did not have the luxury of an objective, apolitical position, as they seemed to believe.


Is it, indeed, really "apolitical" for the science advisers to remain silent in the light of the revelation that the prime minister, via the culture of governance that we now know he encouraged, was essentially playing them for fools all the time they were stressing the importance of observing lockdown rules? Will that silence truly serve the long-term status of the scientific advisory roles?


Very well then: this is pandemic science, and it is hard to imagine it could ever be free from politics. (That’s to say: some evidently do imagine this, but it is not hard to see that they are mistaken.) But surely most science is free from politics, or should be? The mass of the Higgs boson doesn’t depend on your political ideology!


Indeed not, and thank goodness. Some who fear the idea that science is political seem to worry that the Higgs mass might be at risk of being revised to conform to Maoist principles or some such. But here’s a real question. What if the CERN teams that tracked down the Higgs boson by 2012 had ceased collaborating with any Russian scientists on political grounds, slowing down progress to their goal? An outrageous thought? CERN has indeed just taken such a decision in the wake of the invasion of Ukraine. Was this right, or should science stay aloof from politics? The answer is not self-evident; I certainly do not profess to know what is right in that case. Again, a decision either way is political – because science happens in societies, and societies are political.


Research on climate change needs to be conducted as accurately and as free from bias and political ideology as possible. But what happens if its findings suggest that we face catastrophe if we do not significantly change our behaviour and energy economy, and yet political leaders ignore the warnings? Do scientists shrug and say “well, we did our part of the job as best we could”? One thing they absolutely must not do, of course, is to change their figures to make them even more alarming. But everyone knows that research findings can be made more or less impactful by how they are presented. Are climate scientists right if they look for ways to make the dire implications of their work more evident and perhaps more alarming to the public?


The same is true for any scientific issue with political implications – embryo research and abortion, say, or statisticians speaking to issues of gun regulation. As climate change has shown, a bare and dispassionate presentation of the facts doesn’t necessarily have much impact. What then are the scientists to do to make their voice heard? Obviously, any distortion of facts, no matter in how noble a cause, ceases to be science. But should a scientist marshal the evidence to discredit an ideology that habitually traduces them? Again, I don’t claim to know the answer. But I do know that the question speaks to the broader responsibilities of science and scientists, beyond the simple (in principle) duty to get the facts as right as possible.


I don’t imagine Stuart disagrees with any of this, just as I fully support his suggestion that we must strive to make the results of scientific research as free from bias (including political) as possible. But that is the easy part. I don’t mean it is easy to do – far from it. But it is easy to see what the objective is, and how we can try to make it “as apolitical as it can be”.


It is all the rest that is the problem: how the scientific workforce is recruited, selected, promoted and celebrated; how we choose which scientific problems to work on (I don’t see how medical science can ever be free from political factors, for example in the choices of what gets prioritized); how scientists think about their social responsibilities beyond the narrow confines of the technical quality of their work – the uses to which it might be put, or how it might be abused, say; how science plays out within a capitalistic, market-driven political economy.  


I am not suggesting that we must shrug and accept that all this stuff is irredeemably political, far less proclaiming on whether this is a good or bad thing. The questions “Politics in science: more or less? Good or bad?” don’t seem to me to be the right ones. We must simply examine how politics impinges on science (and vice versa), be aware of it and not in denial about it, and think about whether or not we are happy with the answers, and how to change them if not. My big fear is that scientists, conducting their research as objectively and transparently as possible, tell themselves “Ah, now we’re truly apolitical, and free to just get on with our important work!” I wrote a book about where, in the worst case, that attitude can lead. It was called Serving the Reich.