Sunday, August 29, 2021

More on the "politicization of science"

 

Biomedical scientist Andreas Bikfalvi has responded to my own responses to Anna Krylov’s article in the Journal of Physical Chemistry Letters. So here is another round of the debate.

 

Bikfalvi criticizes me mainly for “largely miss[ing] the points” of Krylov’s piece. He says “Nowhere in Krylov’s viewpoint is the issue of improving diversity in science discussed.” I find it hard to figure out if this is disingenuous, or if Bikfalvi is arguing in good faith but simply does not understand his own terms of reference. Krylov criticizes an “ideology” that “cancels” Newton for being white, and which calls for “decentering whiteness”, “decolonizing” the curriculum, and removing from use terms associated with a racist past or that are deemed to promote racism and colonialism. We can argue about the rights and wrongs of particular cases in that enterprise; personally I suspect it will not be hard to find examples where that effort has been taken into rather fanciful territory. Parading such extreme examples in order to argue a general case is, however, a strategy better suited to the tabloid press than to serious discourse.*

 

But part of the motivation behind such attempts to reconsider the way we use language – a practice that has always been necessary and important as social mores and boundaries evolve – is that there are clear links between the lack of diversity in science (and other areas of academia) and the unwelcoming environment perceived by some people of colour, people from ethnic minorities, women, and LGBTQ people because of the way outmoded or alienating terms persist in use. (Take, for example, the continuing technical use of “Caucasian” as a racial group, a usage that originated within an assumed – and racist – hierarchy of races.**) In my article I mentioned the example of the “dude wall”: a wall covered with images of illustrious alumni of the past, all of them white men. Bikfalvi, and I believe Krylov, seem to me to be arguing that the gender and race of those scientists should be irrelevant, and that this sort of situation is therefore fine. Plenty of women and people of colour will disagree. So yes, of course these are matters relevant to diversity and inclusion in science.

 

So too is the problem of racial or sexual harassment – a problem recently shown to be rife in the astronomy community, although astronomy is by no means unique in this. Bikfalvi quotes Yves Gingras, who has criticized the NSF’s policy of potentially withdrawing funding from scientists found guilty of sexual harassment. (I have to wonder why Bikfalvi does not explain that this is what he means by “[inappropriate] social behaviour”.) We must assume that Bikfalvi is, then, unhappy at seeing scientists seriously penalized for engaging in behaviour that is known to have driven some women out of their research positions and sometimes out of science altogether. I guess we must assume that he feels the same way about racial harassment – that, perhaps, it’s a terrible thing but should not for a moment become a reason why scientists who perpetrate it should be inhibited from continuing their precious science. And if science loses some women or people of colour from its ranks as a result, I suppose that is the price we must pay for genius.

 

No, please do not tell me this is not an issue about diversity.

 

Bikfalvi, like Krylov, is in fact deeply politicized in his comments. Krylov, for example, suggests we have two choices:

 

“We can succumb to extreme left ideology and spend the rest of our lives ghost-chasing and witch-hunting, rewriting history, politicizing science, redefining elements of language, and turning STEM (science, technology, engineering, and mathematics) education into a farce. Or we can uphold a key principle of democratic society—the free and uncensored exchange of ideas—and continue our core mission, the pursuit of truth, focusing attention on solving real, important problems of humankind.”

 

To suggest that a call to examine the biases that evidently exist (as shown in the studies I cited, and many more) in the demographic and hiring practices of science is to “succumb to extreme left ideology” is absurd and offensive. Krylov is prepared to offer no middle way: for example, to re-examine the “scientific idols” of the past, as I did for Peter Debye (and other physicists working in Nazi Germany) in my book Serving the Reich, in a way that does not seek to simplistically condemn them with presentist purism, but instead to honestly and even sympathetically recognize their personal and political failings.

 

Similarly, Bikfalvi asks whether I “want racial discrimination based on the importation of critical race theory (CRT) in the medical praxis or… a socially egalitarian evidence-based medicine preserved?” This invocation of the much contested CRT (which I never mentioned myself) is itself thoroughly politicized and at odds with his posture of objectivity. Of course I believe “socially egalitarian evidence-based medicine” would be a good thing. But sadly, there is unequivocal evidence that medicine today suffers from racial biases, as for example documented in Angela Saini’s book Superior – or indeed, as made abundantly clear in the Covid-19 pandemic. This is not some pernicious trait peculiar to medicine – it has the same roots as the racism, bias and discrimination that exist in our societies generally. I believe it would be a good thing to acknowledge that, and to tackle it.

 

Bikfalvi’s article (like Krylov’s) is filled with these false choices. “For instance, should a professor attempt to indoctrinate their mentees (students and post-docs) and transform them into activists, or should the professor instead teach them how to think?” Well now, let me think about that difficult choice! Perhaps we might ask too, for example, “Should professors teach their students to be Marxists who banish from consideration any ideas that do not conform to their rigid extreme-left ideology, or should they teach them to be good scientists?” I genuinely don’t understand how anyone can expect this mode of debate to be taken seriously; it is, in fact, profoundly anti-intellectual. 

 

Bikfalvi then asserts that I “want to imbue science with a homogeneous political ideology”. I am not sure which ideology he means, but I suppose I must assume this is the “extreme left ideology” that Krylov seems to perceive in any effort to re-examine the barriers that exist to improving diversity in science. It is total nonsense, but a kind of dog-whistle nonsense attuned to a particular audience, with whom I see it has already resonated.

 

Let me, though, answer one of these contrived questions Bikfalvi poses. “Should scientists be judged on their scientific merits alone, or “cancelled” when failings — as judged by deviance from contemporary moral values — occur?” No and no. Well, that was easy, wasn’t it?

 

For this is of course a ridiculous as well as an ambiguous question, which evidently doesn’t present two mutually incompatible options. Bikfalvi, like Krylov, is blurring two separate issues here. In her original article, Krylov says:

“Particularly relevant is Merton’s principle of universality, which states that claims to truth are evaluated in terms of universal or impersonal criteria, and not on the basis of race, class, gender, religion, or nationality. Simply put, we should evaluate, reward, and acknowledge scientific contributions strictly on the basis of their intellectual merit and not on the basis of personal traits of the scientists or a current political agenda.”

 

But the second claim is not Merton’s principle, “simply put”. It is entirely different. Merton is talking about “claims to truth”; Krylov is talking not just about “evaluating” but “rewarding” scientific contributions – presumably by naming conventions, memorials, commemorations, icons and hagiographies, and all the traditional paraphernalia that is surplus to “claims to truth” but which the scientific community has for some reason chosen to adorn itself with.

 

Should we, say, deny that the Stark effect is “true” because Stark was a Nazi? I won’t dignify a question that silly (which I hope is not the question Bikfalvi intended to ask) with an answer. Should we judge Stark as a person because he supported Hitler and was virulently antisemitic? Yes, I believe we are justified in doing so. Should we honour Stark by naming a moon crater after him, on the grounds that he made an important scientific discovery? I would like to see Bikfalvi’s answer to that.

 

The IAU has made its own decision on the matter, which is: “Oops, no we shouldn’t, but we didn’t realise he was a Nazi.” Is the IAU’s renaming of Stark crater a “cancelling of Stark”? If so, Bikfalvi should also be petitioning to have the Lenard Institute at Heidelberg, named after Stark’s fellow Nazi Philipp Lenard, reinstated. Will he do so? If anyone were to be calling for Stark’s and Lenard’s Nobel prizes to be rescinded, or for the “Stark effect” to be expunged from textbooks, that would be more controversial – and I would not support it, even though it pains me to see Stark commemorated in that way. What I want is for Stark’s past to be better known (and not euphemized in the way Krylov did it), so that people don’t again make the mistake the IAU recognizes it made. And I dislike the way science fetishizes its individuals with all this naming, which, as I said in my article, seems to me to run counter to the spirit of science. We have to live with (and debate) the dilemmas of the past; it seems foolish to create new dilemmas for the future.

 

“Should Einstein be cancelled because of his disparaging remarks in his private diary about the Chinese?”, Bikfalvi asks (implicitly, of me). Well, he could have just taken the trouble to read what I’ve written about that question. (Trigger warning: contains nuance.)

 

The final reference to Savonarola makes me smile, in the way Krylov’s references to Galileo and Bruno make me smile. Which is to say, I will smile to avoid screaming at this trivialization of history. More seriously, such abuse of history to make cheap rhetorical points seems to me an egregiously common practice in science, and displays a shoddy attitude to history as an intellectual discipline. I’m sorry if that seems “inappropriate and patronizing”, but frankly it is kinder than the response such remarks will get from historians.

 

*****************************************************************

 

*Krylov cites the example of an American university professor suspended for voicing in his class a Chinese expression with a phonic similarity to a racial slur in English. Frankly I found that example so extreme – even (as a student of Chinese myself) offensively so – that I wondered if it was apocryphal. As far as I have been able to ascertain, it is not. (I contacted the university concerned for more information, but have not been given a response.) If the information I have found about this incident is correct, I fully agree that it seems outrageously inappropriate to treat it as a kind of misconduct.

 

**Krylov’s comment on “quantum advantage” is another example where we seem to be faced with a choice between attributing ignorance or bad faith. We are invited to imagine poor quantum scientists, having invented a perfectly innocent term, being petitioned by banner-waving critical race theorists for having committed the crime of celebrating violence and racism. But the truth is that those scientists themselves took a look at the political climate developing under the Trump administration and decided – rightly in my view – “you know what, perhaps this is not the best time to be bandying about words like ‘supremacy’.”

Sunday, May 09, 2021

The problematic themes of Modern Myths

 

In her review of my book The Modern Myths in the New York Times Book Review, Sophie Gee asks why “post-Enlightenment Anglophone tales are so obsessed with themes of domination, self-reliance, privilege and supremacy.” Of the “myths of individual power and mastery” that I consider, and which “still exert a significant hold in the mainstream imagination and culture”, she asks: “whose voices have they overlooked?”

 

These are excellent questions. I don’t pretend to have comprehensive answers, but an interrogation of them is one of the key themes of my book.

 

“Themes of domination, self-reliance, privilege and supremacy” are, as I explain, nowhere more apparent than in the first of the modern myths I consider in detail: Robinson Crusoe. In many ways this tale was Defoe’s justification for the then-burgeoning colonialist project: it was written to appeal to the merchant middle classes whose rising wealth and aspirations often depended on colonial trade. James Joyce had the measure of Crusoe, calling him

the true prototype of the British colonist, as Friday… is the symbol of the subject races. The whole Anglo-Saxon spirit is in Crusoe: the manly independence; the unconscious cruelty; the persistence; the slow yet efficient intelligence; the sexual apathy; the practical, well-balanced religiousness; the calculating taciturnity.

 

As I write in The Modern Myths, “The microcosmic society that Crusoe constructs on his island can be read as a miniature version of the sovereignty that, in Defoe’s view, the British ought to enjoy.” Crusoe is a slave-owner, growing rich from his plantations; I say that his attitude “fits with the sense of entitlement and hierarchy that, for Defoe and most of his contemporaries, rendered European imperialism unproblematic.” His story shows its readers “how an Englishman responds to adversity: with the mental, moral and intellectual resources that his superior breeding has conferred on him.” Crusoe is, in short, an apologia for empire. (Of course, it is much more than that, but that is one of its key functions not just for its contemporaneous readers but throughout the nineteenth century too.)

 

Themes of Anglophone domination and supremacy recur in many of these myths. As I explain, Dracula is in some ways a supernatural recasting of the late-Victorian invasion literature: a decadent foreigner comes to England to exploit and prey on its people, only to be repulsed by the steadfast and noble spirit of a band of (mostly English) Westerners. Sherlock Holmes and his doughty assistant Watson pit English decency and ingenuity against innately corrupt foreign criminals. Over the late Victorian myths in particular hangs the fear of degeneration expressed in Max Nordau’s 1892 book Degeneration. If, as I suggest, myths attain that status because they are good vehicles for prevailing cultural anxieties, the Anglophone anxieties of the fin de siècle were partly about the fragility of empire and the need to assert a pseudo-Darwinian superiority over “lower races”.

 

They were also about shifts in gender status: Dracula, for example, is pervaded with a terror of the assertive New Woman, as exemplified by Lucy Westenra, whose wanton waywardness is not so much induced as revealed by the Count’s bloodsucking predations. The retribution is brutal: as I explain, her staking by the group of men who were once her suitors has all the qualities of a retributive gang rape; it is one of the most disturbing scenes in the novel. Jekyll and Hyde, meanwhile, seethes with hints of homoerotic and homophobic anxieties (as does Dracula). Myths acquire that status because of their capacity to express fears that can barely be articulated. They might assert values of, say, self-reliance, privilege and innate superiority conferred by race, class and gender (Crusoe, Holmes) – but Hyde, Moriarty, and poor Lucy remind us that a mere gossamer veil separates “us” (the bourgeois target audience) from the abyss.  

 

It is precisely because these stories have become myths that these purposes can be subverted: the myth can be seized and reinvented by and for those it overlooks. Thus we see Crusoe rewritten by Michel Tournier to give Friday real agency (and make him the title character), or used by J. M. Coetzee (Foe) to critique the modern remnants of colonialism; even by the late nineteenth century, the Frankenstein narrative was being used in tales sympathetic to the suffering of Black Americans. Even H. G. Wells’ repulsive aliens in The War of the Worlds become the victims of apartheid prejudice in Neill Blomkamp’s District 9.

 

The fear in the conformist America of the 1950s that Batman and Robin might be in a gay relationship was satirized in the following decade by the high camp of the Adam West TV series, winking over the heads of the children who could not understand why their parents were either laughing or squirming at the antics of their heroes. In today’s Sherlock TV series, Holmes and Moriarty can finally consummate (even if just in fantasy) their mutual attraction, while Watson can be gently mocked for his embarrassment at repeatedly being taken for Holmes’ lover. Today, at last, a Black Batman in a hooded mask can turn American racism’s potent symbol back on itself.  

 

Here too, though, we should resist becoming dogmatic about the “message” of a modern myth. Today it is almost obligatory to take the monster’s side – but the rich ambivalence of Mary Shelley’s text may be obliterated by a critical insistence that we consider Victor Frankenstein the real monster. As Lawrence Lipking points out, some critics are frustrated by students who steadfastly refuse to see Frankenstein this way: “Despite the consensus of sophisticated critics,” he writes ironically, “ordinary readers keep looking at the wrong evidence and coming to the wrong conclusions.” Not all readings of a myth will be equally useful or illuminating, but probably the only “wrong” way to read them is to insist on a unique interpretation.

 

You’ll find all this discussed in my book. Modern myths are valorized because they are by their nature versatile and protean enough to still do valid, even vital cultural work, sometimes being reimagined to give a voice to those whom they originally ignored, denigrated or obliterated. They can’t be contained by the prejudices that created them, and their very familiarity and cultural gravity make an inversion all the more potent. So yes, we should ask whose voices they overlooked – and then find out what happens when those voices are entrusted with the retelling.

Monday, March 22, 2021

What we have seen: a year of lockdown

 

What we have seen is that global calamity can come in a strange and perplexing form, at the same time apocalyptic and weirdly domestic. The numbers who have died from the coronavirus, the scenes and reports from hospitals, mass graves, overwhelmed and decimated communities, have the shape of eschatological science fiction. But for some of us – the lucky ones – this meant staying at home with the spring sunshine and the birdsong, making bread. Everything changed, and seems unlikely to revert, but we never quite imagined that global transformation would be like what we have seen.

What we have seen is that the world today cannot persist with any stability without science, but that science cannot be its saviour. We have seen scientists come up with the goods as never before: understanding, tests, data, medical procedures, vaccines. If we look carefully, what we have seen is that these things are not created overnight but become possible only with sustained and committed support for basic scientific research.

What we have seen is that there are no technological solutions to social crises. Knowledge and know-how count for little if the social fabric is too thin and patchy to hold them. Social crises, especially if they involve public health, find and exploit weaknesses, most of all those that involve inequalities of opportunity, resources, employment, stability and safety. What we have seen is that things will get worse if these issues do not get better, locally and globally.

What we have seen is that political failings too become the flaws along which cracks will open in times of crisis. Lies, corruption, self-interest, laziness and complacency, and sheer ineptitude have all created such fissures. Where they are present, it does not matter how advanced and superior you think your society is. It will crack.

What we have seen is that such failings do not make much difference to political popularity. They are not reflected in the polls. What matters much more is who controls the narrative. What we have seen is that this is a deep problem for the ability of democracy to create good governance.

What we have seen is that our habit of mocking former ages for their delusions and superstitions is nothing more than a projection of our own anxieties and self-deception. We have seen that we are no less capable of and drawn to denial of what is in front of our noses, what is undeniable, yet what is inconvenient to our worldview. Our technologies simply become new places for delusion and fantasy to reside: in radio masts, medicines and vaccines. Our new technologies create new channels for lies and deceptions to spread; they create contagion at the speed of light. 

What we have seen is that powerful parts of the media are heavily invested in and encourage voices whose entire worldview is based on behaving as they like, not just disregarding the well-being of others but being positively contemptuous of any imploration to do so. Such people will lie incessantly to argue a “rational” case for their position. They will be invited onto broadcast media and into public debates, and awarded newspaper columns to put their “controversial” views forth, often by media editors who share them. What we have seen is that there are powerful sectors of the media that will prefer to see people die rather than moderate these libertarian views. What we have seen is that they will always find maverick scientists to support them.

What we have seen is that we are morally lost if we allow political and tribal affiliations to take precedence over a sense of decency, compassion and justice and a demand for competence. We all have a sense of how we should like our society to be run; we can recognize that others will have different visions and that we can debate and argue about those differences. But if in the end our vision is not tethered to a moral compass that values fairness and respect for others, it is a mere posture.

What we have seen is that scientists become political the moment they take political appointments. They will not thereafter necessarily be able to separate scientific and technical advice and comment from its political implications. Scientists should not accept such roles unless they are willing to recognize this. They will fail in their duty only if they withhold expert judgement for fear that it will have political ramifications. What we have seen is that science and scientists too have moral obligations beyond their professional ones.

What we have seen is that people are resilient, brave, selfless, compassionate, extraordinary. They will bear hardship and risk for the sake of others. What we have seen is that some of the biggest dangers come from underestimating people and their readiness to help, to heed, and to find creative solutions in the most desperate circumstances.

What we have seen is that we will change our lives when it becomes imperative, and that those who insist that such change to avoid future catastrophe is impossible are wrong. What we have seen is that we have the social capital, the ingenuity and determination to do better than we have done so far. But only if we can find the right story, and if we can learn from what we have seen.

 

Tuesday, January 12, 2021

Free will and physics: the next instalment

 

I’m sorry that I seem to have forced Jerry Coyne to write about a subject he is avowedly tired of, namely free will. But my piece in Physics World inspired him to do so, if only to suggest it is all wrong.

Needless to say, I don’t agree. I’m happy to say why, although it must be at a regrettably even greater length, given that just about every paragraph in his comments is misconceived.

But I’ll give you the short version first. If Coyne really is tired of writing about free will, he could have saved himself a lot of effort. He could have dropped the simple restatements of the “deterministic” case against free will (which were my starting point), and cut all the misrepresentations of what I said, and cut to the chase as follows:

“I don’t understand the scientific basis for Ball’s claim, but my hunch is that a couple of physicists I know would disagree with it. I’ll let readers argue that out.”

So that’s the executive summary. Here’s the rest.

First, a little flavour of the kind of thing that’s to come. At the start of the second half of his critique, Coyne says that my attacks on free will [sic – he means attack on attacks on free will] are misguided because I “do not appreciate that naturalism (determinism + quantum uncertainty) absolutely destroys the libertarian notion of free will held by most people.” This is such a peculiar statement, because my article was suggesting that this notion of naturalism doesn’t undermine free will. It’s not that I don’t “appreciate” that argument; it’s that I don’t agree with it. (I’m not sure quite what the “libertarian notion of free will held by most people” is precisely, because I haven’t asked them.) Surely Coyne of all people knows that convincing arguments are not simply made by declaring them correct by fiat? Isn’t that what he lambasts religious people for doing?

Now, let’s get this bit out of the way: “To say that psychological and neurological phenomena are different from physical phenomena is nonsense,” Coyne declares. This is the first of many plain misrepresentations of what I say. What I say – he even quotes it! – is that psychological and neurological phenomena are not meaningfully adjudicated by microphysics, by which I mean theories that begin with (say) subatomic particles. This is not the same as saying that the neural circuits involved in psychological and behavioural phenomena are not ultimately composed of such particles. The point of my article is to explain that distinction. As we’ll see, Coyne later admits that he doesn’t understand the scientific arguments that underpin the distinction. Hence my abridged version of his diatribe above.

Incidentally, Coyne alludes to experiments that allow us to predict “via brain monitoring what someone will do or choose.” This is presumably a reference to Libet-style experiments, conducted since the 1980s. As he has written on this topic before, I must assume that Coyne knows there has been a great deal of debate in the neurobiological and philosophical literature on whether they pronounce on free will at all. Only those who believe Coyne is correct about free will will absolve him of all responsibility for not mentioning that fact.

Coyne complains that I don’t define free will at the outset (although he seems oddly confident that whatever definition I choose, it is wrong). I don’t define it because I think it is a terrible term, which we seem lumbered with for historical reasons. A key aim of my article is in fact to suggest it is time to jettison the term and to talk instead about how we (and other creatures) make volitional decisions. This is an issue for cognitive neurobiology, and others have made an excellent start on outlining what such an endeavour might look like: for example here and here. I’m not sure if Coyne knows about this work; he makes no reference to it so perhaps I should assume he does not.

But is there really volitional behaviour at all, or is it all predetermined? That’s the key. Coyne admits that we can’t predict “with complete accuracy” what someone will do. Of course, there are lots of situations in life where a great deal of prediction is possible, sometimes simply on statistical grounds, sometimes on behavioural ones, and so on. No one disputes that.

So what do we mean by “with complete accuracy”? This is very clear. It means that, if Coyne is right, an all-seeing deity with complete knowledge of the universe could have predicted yesterday every action I took today, right down to, say, the precise moments I paused in my typing to sip my tea. It was all predetermined by the configuration of particles. 

If that were so, the unavoidable corollary is that everything that currently exists - including, say, the plot of Bleak House – was already determined in the first instants of the Big Bang. 

Now, as far as we know, this is not the case. That’s because quantum mechanics seems to be fundamentally indeterminate: there is an unpredictability about which outcomes we will see, because all we can predict is probabilities. But that just adds randomness, not anything that can be construed as will. So we can say that the plot of Bleak House was determined by the initial conditions of the Big Bang, plus some unpredictable randomness.

As it is unprovable, this is a metaphysical statement. It’s hard to see how we can advance beyond it one way or another. What I’m suggesting is that, rather than get stuck in that barren place, we might choose more profitably to talk about causes. That way, we can actually raise some useful and even answerable questions about why we do what we do, including why Dickens wrote Bleak House.

But Coyne says “Screw cause and effect… as they are nebulous, philosophical, and irrelevant to determinism.” Well, I could just stop here - because it means Coyne has said “Oh, your argument that rests on cause and effect? I’m not even going to think about it.” I’m not sure why he didn’t have the honesty to admit that, but hey. It’s true that causation is a very thorny philosophical issue indeed - but it also happens to be at the core of my notion of free will. Because it seems to me that the only notion of free will that makes much sense is not “I could have done otherwise” (which is also metaphysical, because you could never prove it - if your argument depends on working up from the exact microphysics of the situation, you can never conduct the same experiment twice) but “I - my mind, me as an organism - caused that to happen. Not the conditions in the Big Bang plus some randomness, but me.” And then of course we can argue about what “me” means, and how the mind is constructed, and all the rest of it, and we’ll find that it’s terribly complicated, but we’re arguing and constructing hypotheses and testing them in the right place, which is neuroscience and not microscopic physics.

So everything that follows that statement by Coyne that he’s not interested in debating causation is a sideshow, though it goes on for a very long time. (Later he returns to causation by saying I have confused notions about it. But he forgets to say why, or elects not to.) Still, let’s proceed.

“Is there anything we know about science that tells us that we can “will” ourselves to behave differently from how we did? The answer is no. We know of nothing about physics that would lead to that conclusion.” This is a restatement of the tired old idea that to posit “free will” means evoking some mysterious force outside of physics. I hope I have made it clear that I don’t do that. But let me say it again: I don’t believe there is anything operating when I make a decision beyond (as far as we know them) the fundamental forces of nature acting between particles. What I am saying is that it is wrong, perhaps even meaningless, to speak of all those countless interactions as the “cause” of the behaviour. What caused Dickens to write Bleak House? “Well, in the end, it has to be the Big Bang plus quantum randomness.” Really, that’s the hill you want to die on?

So when Coyne expresses outrage that I say it is “metaphysical” that “underlying our behavior are unalterable laws of physics?”, he has created an obvious straw man. What I in fact said - as careful readers might have noted - is that arguments that “free will is undermined by the determinism of physical law… claim too much jurisdiction for fundamental physics [and] are not really scientific but metaphysical.” This is not the same thing at all - precisely because of my assertion that we must judge such jurisdiction on the grounds of causation.

But straw men are about to appear in abundance. Coyne accuses me of one when I say:

“If the claim that we never truly make choices is correct, then psychology, sociology and all studies of human behaviour are verging on pseudoscience. Efforts to understand our conduct would be null and void because the real reasons lie in the Big Bang.”

This is a strawman, he says, “because none of us deny that there can be behavioral science, and that one can study many aspects of human biology, including history, using the empirical tools of science: observation, testing, falsification, and a search for regularities… Although the “laws” of human behavior, whether collective or instantiated in an individual, may not be obeyed as strictly  as the laws of physics, all of us determinists admit that it is fruitful to look for such regularities on the macro level—at the same time we admit that they must comport with and ultimately derive from the laws of physics.”

I find the extent of Coyne’s miscomprehension here astonishing. He goes on: how dare I call behavioural or social sciences pseudoscience, or history “just making up stories”, or say that behavioural regularities are just “peculiar coincidences” and nothing to do with evolution!

Now, there are a few clues that perhaps this is not what I’m saying or believing - like for example the fact that I wrote an entire book (more than one, actually) on how ideas from physics about how regularities and patterns arise in complex systems can be of value in understanding social science and economics. If Coyne had given a damn about who this chap he was criticising actually was, he might have discovered that and - who knows? - perhaps experienced a moment of cognitive dissonance that led him to wonder if he was actually understanding this article at all. That could have saved him some trouble. Still, onward.

In any case, he says, none of us determinists believe all those terrible things about the behavioural sciences and all the rest! It’s a straw man!

But my point is this: Sure, you don’t think those things. You all (I suspect) recognise the value of the behavioural and social sciences and so forth. But that’s because you haven’t really examined the implications of your belief.

Here’s why. If you believe that everything that happens (let’s put aside the complication of quantum indeterminism for now) was preordained in the Big Bang - that the universe unfolds inexorably from that point as particle hits particle - then you really cannot sustain a genuine belief in behavioural sciences as true sciences. Let’s say that a behavioural scientist deduces that people behave a certain way, Y, in the presence of influence X, and so goes on to conduct an experiment in which X is withheld from the subjects, to see if their behaviour changes. And it does! So, there’s a fair case to be made that X is a causal influence on behaviour.

But it’s not really so, is it? What you have to believe is that the conditions in the Big Bang caused a universe with people in it that are of the nature that behaviour Y tends statistically to be correlated with condition X. When we say “X causes Y”, we don’t mean that. There’s no genuine causal relationship involved; it’s just, as I say, “an enumeration of correlations”. I don’t care about dictionary definitions of “pseudoscience” (and Coyne only does, it seems, because he thinks I’m calling behavioural science a pseudoscience and wants to prove me wrong). But I do know that it is very common in pseudoscience to mistake correlation for causation. 

I guess it might be possible to imagine a kind of science that employs “observation, testability, attempts at falsification, and consensus” while never rising above the level of documenting correlations, and never imputing any sort of causal mechanism. But I’m not sure I can think of one. What I am saying is that, if Coyne’s vision of determinism were true, behavioural sciences could never talk factually about mechanism and causation - or if they did, they’d not be speaking any kind of truth, but just a convenient story.

Still, I guess the best way is to find out. We could ask behavioural and social scientists if they are content to regard the objects of their studies as automata blindly carrying out computations – which is what Coyne’s view insists – or whether (at least sometimes) we should regard them as agents making genuine decisions. I’m pretty sure I know already the answer many neuroscientists would give, because some have told me.

At any rate, the basic point should be clear now: you don’t refute a reductio ad absurdum by crying “But that’s absurd!”

Well, on with the cognitive dissonance. Coyne says I “give the game away” by betraying that I can’t believe in free will after all, because I say:

“Classical chaos makes prediction of the future practically impossible, but it is still deterministic. And while quantum events are not deterministic – as far as we can currently tell – their apparently fundamental randomness can’t deliver willed action.”

“In other words” Coyne says, “physics, which Ball admits has to comport with everything at a “higher level”, can’t deliver willed action. Thus, if you construe free will in the libertarian, you-could-have-done-otherwise sense, then Ball’s arguments show that we don’t have it.” I’m not sure what to make of this. Does Coyne not realise that, by stating these things at the outset I am aiming to lay out the case to be addressed, and to avoid some spurious defences of free will that pin it all on some kind of fundamental indeterminacy? Does he not realise that, when one starts off presenting an argument by saying “Well, here’s the thing I’m seeking to challenge”, it is not a very impressive counter-argument to say “Ah but you just said that very thing, so you must believe it too!”?

Next. Evolution: I could have guessed this would be a sticking point! (Actually I did; that’s why I raised it.) 

I say:

“What “caused” the existence of chimpanzees? If we truly believe causes are reducible, we must ultimately say: conditions in the Big Bang. But it’s not just that a “cause” worthy of the name would be hard to discern there; it is fundamentally absent.”

In response, Coyne says:

“If Ball thinks biologists can figure out what “caused” the evolution of chimps, he’s on shaky ground. He has no idea, nor do we, what evolutionary forces gave rise to them, nor the specific mutations that had to arise for evolution to work. We don’t even know what “caused” the evolution of bipedal hominins, though we can make some guesses. We’re stuck here with plausibility arguments, though some assertions about evolution can be tested (i.e., chimps and hominins had a common ancestor; amphibians evolved from fish, and so on). And yes, that kind of testing doesn’t involve evoking the laws of physics, but so what?”

It’s hard to know where to begin with this. What he is talking about in terms of efforts to understand the evolution of chimps is precisely the same as what I’m talking about: one might look, for example, at morphological changes in the fossil record, and if possible at changes in genomics, and how they correlate. One does comparative genomics. One might frame hypotheses about changes in habitat and adaptations to them. In other words, I raise the notion of a “theory of chimp formation” as another reductio ad absurdum. I don’t believe biology should be aiming for such a thing, or that it is even meaningful. Rather I think it should be doing precisely what it is: making hypotheses about how chimps evolved on the basis of the available evidence.

The issue, though, is whether one regards this as renormalised physics. Coyne does. I am not sure all his colleagues would agree. I don’t mean that they would say (as he might), “Well, what we’re doing is just a more useful higher-level abstraction of the basic physics.” I suspect many would say that thinking about evolution as coarse-grained physics is of no value to what they do, and so they (rightly) don’t bother even to give it any thought.

But this does NOT mean there is anything except physics operating at the microscopic level of particles.

What does it mean then? That gets to the crux of the matter. What I’m suggesting is that it means that we shouldn’t be considering causation as only and entirely top-down. 

That is the point of the piece. And finally, after much huffing over straw men, Coyne gets to it. What does he have to say about it?

It is, he says, “something I don’t fully understand”. 

OK, so perhaps it would be best for him to leave it there. Sadly, he does not.

“As far as I do understand it”, he says, “it doesn’t show that macro phenomena result from the laws of physics, both deterministic and indeterministic, acting at lower levels. To me the concept is almost numinous.”

I don’t even know what this means. “It doesn’t show that macro phenomena result from the laws of physics acting at lower levels.” Huh? What then does he think it does show? That there’s some mysterious non-physical force at work? I’ve really no idea what he is trying to say here.

The idea of top-down causation, in the forms I’ve seen it, shows in fact that systems in which there are nothing but the laws of physics acting at lower levels nevertheless display causation that can’t be ascribed to those lower levels. 

Remember causation? That thing my argument was based on? Does Coyne agree with the arguments for the existence of top-down causation in complex systems? If not, why not? 

But it seems he doesn’t much care: he’ll “let readers argue this out”. Still, he adds, “if physicists like Sean Carroll and Brian Greene are not on board with this—and as far as I know, they aren’t—then I have reason to be skeptical.”

Really? An “argument from authority” – and one moreover that discounts the authority of Nobel laureates such as Phil Anderson? That’s the basis of his case?

Does he even know the position of Sean Carroll and Brian Greene on this? Has he asked them? Is there any evidence that they have considered such arguments? (Greene doesn’t mention it in his book.)

(By the way, I don’t think I “denigrate” (=“criticise unfairly”) Greene’s view in Until the End of Time. I simply disagree with it. If Coyne had more curiosity, it would have been very easy to discover that, while I bring up this point in my review of Greene’s book, I also had some good things to say about it.)

(And incidentally, Sean Carroll has written on top-down causation, but not in a way that is germane here. In The Big Picture, he dismisses the need to invoke it in snowflake formation - and I agree with him there. And in his blog here, he criticises John Searle’s view of consciousness from this perspective. But Searle believes consciousness is somehow a non-physical entity beyond science. That has nothing to do with the work I allude to. Where top-down causation matters is in discussing questions of agency.)

Truly, I had to ask myself, this is it? The reason Coyne thinks my piece is wrong is that (apart from reasserting the same tired old arguments about determinism) he doesn’t fully understand the science on which it is based, but he suspects a couple of his pals might not buy it and so that’s good enough for him?

Oh well. Onward.

Coyne says I’m wrong to say that dispelling the idea of free will has no implications for anything. Actually I don’t say that at all (I think I’m sensing a pattern here). I say it is rather telling that those who claim to have dispelled free will seem oddly keen to say we should go on acting as though it really is a thing.

No we don’t, Coyne says! We say that because there’s no free will, we should be “less retributive, more forgiving.” And this is precisely my point. If you don’t believe in free will, why should you be retributive or forgiving at all? In that case, none of what we do is our fault, because it was ordained in the Big Bang (plus randomness). That’s all there is to it. 

This is what I mean: those who deny free will don’t have the courage of their convictions. They feel obliged to resurrect it, or the ghost of it, to avoid having to absolve us of all responsibility. But they don’t seem to know how to do that, other than with arm-wavy statements like this: “I still think people are “responsible” for their actions, but the idea of “moral” responsibility is connected with “you-could-have-chosen-to-do-otherwise.”” So they are responsible but not morally responsible? Then responsible in what way, exactly? What kind of responsibility can stem from predeterminism? He doesn’t say.

Why, if there’s no free will, would we take any action at all to try to change people’s behaviour? After all, we can’t then have a genuinely causal influence on what they do. I guess in this case free-will deniers will say to themselves: “well, I know I’m not really deciding to do this, it’s just my automaton-brain playing out the 13.8-bn-year-old plan set in train at the Big Bang, but then again, if I don’t then I suspect that 13.8-bn-year-old plan will include this person reoffending, and so I guess I’d better, but all the same I’m not choosing this but just telling myself I am because that’s what brains do, and so I guess I’m stuck with this belief that I personally have a causal effect on the future, but I don’t, and I must deny it, but there’s actually no must about it because that concept doesn’t exist either…” Or something. God knows what their narrative is. Perhaps it’s just “well I still have this gut feeling that that person is responsible in some way for what they do but I don’t really know what that means.”

What Coyne is talking about, I suspect, is the recognition that people vary in the degree to which they can truly decide on their actions. There are all kinds of influences that determine this: their past history, their social circumstances, the specific nature of their brain (part innate, part conditioned), whether they’ve just eaten… There’s a gradation from volitional to totally non-volitional (like reflexes). In a fair and just society, we already recognise this. So we try to make our rules and judgements by considering such factors, and trying to make a fair assessment of degrees of culpability, and thinking about what - if we punish someone for their actions - we might hope to achieve by it. We work at the macro level at which we can think meaningfully about cause and effect. We don’t argue about physics and the Big Bang – not because that would be an awfully hard way to reach a judgement about the situation, or because we lack the computational resources, but because we know it would be meaningless.

Because this is by no means the first time I’ve seen smart people transmuted into abysmal readers, I’m genuinely curious about what makes that happen. I have a hypothesis, though it would be hard to test. I think they start by reading the title or headline, thinking “Well I profoundly disagree with that”, and then let that preconceived judgement prevent them from actually reading the argument and assessing the rhetorical or logical trajectory of the piece. Instead they just read each sentence at a time and – without asking “Is this part of the author’s position, or the position he/she is setting out to attack?”, “Is this a rhetorical structure?” and so on – just decide for themselves what they think the sentence means and then consider how they can disagree with it. In Coyne’s case I fear that situation is compounded by his evident conviction that dismantling free will is part of his crusade against “religionists.”

Sometimes when I see this happen, I’m forced to wonder how science sustains any discourse at all. But fortunately, it seems to manage.

I guess I have been harsh here in some places, but I’m happy to take responsibility for that. I do think it was me that chose to write this, and not the Big Bang. And you do too really, don’t you?

PS If you read Coyne’s second article and go looking for my piece in Physics Today, you won’t find it. It was in Physics World. To judge from a glance at his comments thread, that’s a moot point anyway, as I saw little sign that most commenters were bothering to look at the article. The one chap who evidently did, agreed with me.

Sunday, December 13, 2020

More on free will, and why quantum mechanics can't help you understand football

 

I’ve had some stimulating further discussion with Philip Goff and Kevin Mitchell on whether quantum mechanics can illuminate the free-will problem. Kevin has responded to our comments here; Philip’s have been on Twitter. Here’s where it all leaves me at this point.

First, here’s where I think we all agree:

(1) Events at the quantum scale can be adequately described by quantum mechanics – for our purposes, nothing more is needed.

(2) There’s no missing “force of nature” that somehow intervenes in matter as a result of “free will”.

(3) The future is not predetermined, because of quantum randomness: at any given moment, various futures are possible.

Kevin’s argument is, as I understand it, that agents with free will are able to select from these possible futures.

Philip’s objection is that this is not how quantum mechanics works: the probabilities of those possible futures are already fixed by the Born rule.

I’m sympathetic to that observation: it isn’t at all clear to me how anything called free will can somehow intervene in a quantum process, however complex, to “select” one of its possible futures.

My objection to Philip’s point was, however, to the scenario he uses to illustrate it – where he decides whether or not to water his plant (called Susan). It seems to me to be ill-posed. I’m averse in general to thought experiments that don’t stack up in principle, and this seems to me to be one.

To calculate the Born probabilities for this situation, you would need to know the complete initial state of the system and the Hamiltonian that determines how its wavefunction evolves in time. Now, it is no good supposing we can define some generic “state of Philip confronted with thirsty Susan”. I’m not even sure what that could mean. How do we know what we need to include in the description to make a good prediction? What if Philip’s cell phone goes off just before he is about to water Susan, and calls him away on an emergency? How much of the world must we include for this calculation? And we’re looking to calculate the probability of outcome X, which quantum mechanics can enable us to do – so long as we know the target state X. But what is this? Is it one in which Susan stands in damp soil and Philip’s watering can is empty? But how do we know that he added the water of his own free will? What if in the initial state he knew someone would shoot him later if he didn’t add the water? Does that still count as “free will”? I mean, he could in principle still refuse to water Susan, but it’s not what we would usually consider “free will”. But perhaps then our initial state needs to be one in which Philip has no such thought in his head. Had we better have a list of which thoughts are and aren’t allowed in that initial state? But whichever initial state we choose, we can never do the experiment anyway to see if the predictions are borne out, because we could never recreate it exactly.
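To be concrete about what such a calculation would formally demand, here is the textbook Born-rule recipe in schematic form (this is just standard quantum mechanics, nothing specific to Philip and Susan): one needs an initial state, a Hamiltonian to evolve it, and a projector onto the set of states that count as the outcome of interest:

\[ |\psi(t)\rangle = e^{-i\hat{H}t/\hbar}\,|\psi(0)\rangle, \qquad \mathrm{Prob}(X) = \langle\psi(t)|\,\hat{P}_X\,|\psi(t)\rangle . \]

Every ingredient of that recipe – the initial state \(|\psi(0)\rangle\), the Hamiltonian \(\hat{H}\), and above all the projector \(\hat{P}_X\) that would have to define which configurations of particles count as “Philip has watered Susan” – is exactly what this scenario leaves undefined.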

My point is that we should not be talking about scenarios like this in terms of quantum states and wavefunctions, because that’s not what quantum mechanics is for. We can run an experiment many times that begins with a photon in a well-defined state and ends with it in another well-defined state as it evolves under a well-defined Hamiltonian, and quantum mechanics will give us good predictions. But people are not like photons. Even though fundamentally their components are of course quantum particles obeying quantum rules, it is not just ludicrous but meaningless to suppose that somehow we can use quantum theory to make predictions about them – because the kind of states we care about (does Philip do this?) are not well-defined quantum states, and the trajectories of any such putative states are not determined by well-defined Hamiltonians.

It seems to me the distinction here is really between quantum physics as a phenomenon and quantum mechanics as a theory. I don’t think anyone would dispute that quantum physics is playing out in a football match. But it seems to me a fundamental mistake to suppose that the formalism of quantum mechanics can (let alone should) be used to describe it, because that formalism does not involve the kinds of things that are descriptors of football matches, and vice versa. (Philip’s “watering a plant” scenario is of course much closer to a football match than to a Stern-Gerlach experiment.) It’s not just that the quantum calculations are too complex; the machinery of calculation is not designed for that situation. Indeed, we are only just beginning to figure out how to use that machinery to describe the simplest couplings of quantum systems to their environment, and these are probably probing the limits not just of what is tractable but what is meaningful.

Does this objection, though, negate Philip’s point that free will can’t determine the outcome of a quantum process, as (ultimately) all processes are? In one sense, no. But my point is really that the answer to this is not legitimately yes or no, because I’m not sure the question has any clear meaning. The scenario Philip is depicting is one in which there is some massively complex wavefunction evolving in time that describes the whole system – him with watering can and potted plant – and somehow that evolution is steered by free will. But – and I think this is where I do agree with Kevin – I don’t believe this is the right way to describe the causation in the system.

 I don’t just mean it is not an operationally useful way to do that. I think it is fundamentally the wrong way to do it.

Here’s an example of what I mean. Imagine a tall tower of Jenga bricks. Now imagine it with one of the bottom bricks removed, so that it’s unstable. The tower topples. What caused it to topple? Well, gravity and the laws of mechanics. Fine.

Now here’s the same tower, but this time we see what brought it to the state with the bottom brick removed: a child came along and took the brick. What caused it to fall? You could say exactly the same: gravity and mechanics. But we’re actually asking a different question. We’re asking not what caused the tower with the brick missing to fall, but what caused the tower with the brick still in place to fall – and the answer is that the child turned it into the unstable version. The child’s action was the cause.

When we try to speak of free will in terms of microphysics, we are confusing these two types of causal stories. We’re saying, Ah, the child acting is really just like the tower minus brick falling: physics says that’s the only thing that can happen. But what physics says that, exactly? Unlike the case of the tower falling, we can’t actually give an account of the physics behind it. So we just say, Ah, it’s somehow all there in the particles (why not the quarks? The strings, or whatever your choice of post-standard-model theory? But no matter), and I can’t say how this leads to that exactly, but if I had a really big computer that could calculate all the interactions, and I knew all the initial conditions, I could predict it, because there’s nothing else in the system. But that’s not a causal explanation. It is just a banal statement that everything is ultimately just atoms and forces. Yes it is – but at that level the true cause of the event has vanished, rather in the way that, by the time you have reduced a performance of Beethoven’s Eroica to acoustic vibrations, the music has vanished.

(This analogy goes deeper, because in truth the music is not in the acoustic waves at all, but in the influence they have on the auditory system of people attuned to hearing this kind of music so that they have the appropriate expectations. There is music because of the history of the system, including the deep evolutionary history that gave us pattern-seeking minds. So it makes sense to explain the effects of the music in terms of violations of expectation, enharmonic shifts and so on, but not in terms of quantum chromodynamics. You will simply not get a causal explanation that way, but just an (absurdly, opaquely complicated) description of underlying events.)

And you see that this argument has nothing to do with quantum mechanics, which is why I think quantum indeterminacy is a bit of a red herring. Free will – or better, volition – needs to be discussed at the level on which mental processes operate: in terms of the brain systems involved in decision-making, attention, memory, intention and so on.

The basic problem, then, is in the notion that causation always works from the bottom up, aggregating gradually in a sort of upwards cascade. There is good reason to suppose that it doesn’t – and that it is especially apt not to in very complex systems. Looked at this way, the microphysics is irrelevant to the issue, because the issue itself is not meaningful at the quantum level. At that level, I’m not sure that the matter of whether “things could have been otherwise” is really any different from the fact that things only turn out one way. (It could be interesting to pose all this in a Many Worlds context – but not here, other than to say I think Many Worlds makes the same mistake of supposing that quantum mechanics can somehow be causally welded onto decision theory.) Beyond quantum randomness, the notion that “things could have been otherwise” is a metaphysical one, because you could never prove it either way. Best, then, to jettison all of that and simply consider how decision-making works in cognitive and neurological terms. That’s how to make sense of what we mean by free will.

Friday, December 11, 2020

Does quantum mechanics rescue free will?

 

Philip Goff has challenged Kevin Mitchell’s interesting supposition that the indeterminacy of quantum physics creates some “causal slack” within which free will can operate. In essence, Kevin suggests (as I understand it) that quantum effects create a huge number of possible outcomes of any sufficiently complex scenario (like human decision-making), among which higher-level mechanisms of organismic agency can act to select one.

Philip responds that this won’t do the trick, because even though quantum mechanics can’t pronounce on which outcome will be observed for a quantum process with several possible outcomes, it does pronounce on the probabilities. He gives the example of his decision to water his dragon tree Susan (excellent name):

“Let’s say the Born rule determines that there’s a 90% chance my particles will be located in the way they would be if I watered Susan and a 10% chance they’ll be located in the way that corresponds to not watering Susan (obviously this is a ludicrously over-simplistic example, but it serves to make the point). Now imagine someone duplicated me a million times and waited to see what those million physical duplicates would decide to do. The physics tells us that approximately 900,000 of the duplicates will water Susan and approximately 100,000 of them will not. If we ran the experiment many times, each time creating a million more duplicates and waiting for them to decide, the physics tells us we would get roughly the same frequencies each time. But if what happens is totally up to each duplicate – in the radical incompatibilist sense – then there ought to be no such predictable frequency.”
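
Philip’s point about predictable frequencies is easy to see numerically. Here is a minimal simulation sketch, assuming (as his deliberately over-simplified example does) that each duplicate’s choice is a single independent two-outcome event with a Born-rule probability of 0.9 of watering Susan – an assumption made purely for illustration, not a claim about how decisions actually map onto quantum events:

```python
import random

def count_waterers(n_duplicates=1_000_000, p_water=0.9, seed=None):
    """Count how many duplicates 'water Susan', treating each decision
    as a single independent event with probability p_water."""
    rng = random.Random(seed)
    return sum(rng.random() < p_water for _ in range(n_duplicates))

for run in range(3):
    print(f"run {run}: {count_waterers(seed=run):,} of 1,000,000 water Susan")

# Each run comes out close to 900,000 (typically within a few hundred):
# the aggregate frequency is predictable even though no single outcome is.
```

But, as I argue below, the trouble lies in whether the scenario this simulates – a well-defined two-outcome quantum event labelled “watering Susan” – can be given any meaning in the first place.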

It’s a good point, insofar as it needs an answer. But I think one exists: specifically, Philip’s scenario doesn’t really have any meaning. In this respect, it suffers from the same defect that applies to all attempts to reduce questions of human behaviour (such as those that invoke “free will”, a historically unfortunate term that deserves to have scare quotes imposed on it) to microphysics. The example Philip chooses is not “ludicrously over-simplistic” but in fact ill-defined and indeterminate. I don’t believe we could ever determine what configuration of Philip’s particles predisposes him to water Susan. It’s not a question of this being just very, very difficult to ascertain; rather, I don’t see how such a configuration can be defined at the quantum level. We would presumably need to exclude all configurations that lead to other outcomes entirely – but how? What are the quantum variables that correspond to <watering Susan> or <not watering Susan (but otherwise doing everything else the same, so not cutting Susan in half either)>? What counts as “watering Susan”? Does a little water count? Is watering Susan before lunch the same as watering Susan after? This is not a simple binary issue that can be assigned Born probabilities – and neither can I see how any other human decision-making process is. (“Oh come on: what about ‘Either I press a button or I don’t’?” But no, that’s not the issue as far as free will is concerned – it’s ‘Either I decide of my own volition to press the button, and I do it, and the button works’ or not. And what then is the quantum criterion for ‘of my own volition’? How do we know it was that? What if I was bribed to do it?... and so on.)

Obviously such scenarios could go on ad infinitum, and the reason is that quantum mechanics is the wrong level of theoretical description for a problem like this. We simply don’t know what the right variables are: where the joints should be carved in an astronomically complex many-particle wavefunction so that they correspond to the macroscopic descriptions. And again, I don’t think this is (as physicists often insist) just a problem of lack of computational power; it’s simply a question of trying to apply a scientific theory in a regime where it isn’t appropriate. The proper descriptors of whether Philip waters Susan are macroscopic ones, and likewise the determinants of whether he does so. At the quantum scale they don’t just get intractably hard to discern, but in fact vanish, because one is no longer speaking at the right causal level of description.

This is, in fact, the same reason why Schrödinger’s cat is such an unhelpful metaphor. No one has ever given the vaguest hint at what the wavefunctions of a live and dead cat look like, and I would argue that is because “live” and “dead” can’t be expressed in quantum-mechanical terms: they are not well-defined quantum states.

I don’t necessarily argue that this rescues Kevin’s idea that quantum indeterminacy creates space for free will. I’m agnostic about that, because I don’t think what we generally mean by free will (which we might better call volitional behaviour) has any meaning at the quantum level, and vice versa. It’s best, I think, to explain phenomena at the conceptual and theoretical level appropriate to them. As Phil Anderson said years ago, it’s wrong to imagine that the reducibility of physical phenomena implies a reductive hierarchy of causation.

You’ll see very soon in Physics World why I’m thinking about this…

Tuesday, August 25, 2020

Is the UK ready for a Covid winter?

To prepare my article for The Guardian on whether the UK is prepared for a Covid winter, I spoke to many experts who gave a great deal of helpful information and advice. Only a small part of that could be fitted into the article, and I thought it would be helpful to put some more of it out there. So here is the longer version of that article.

_________________________________________________________________________

No one knows what Covid-19 holds in store for the coming months, but no one well-informed takes seriously Boris Johnson’s claim that it could all be back to normal by Christmas. With local outbreaks already prompting lockdowns in Leicester, Manchester and Preston, and cases rising at an alarming rate in Spain and Germany, it’s entirely possible that there will be grim days ahead. The faster spread of the coronavirus and the greater difficulty of maintaining social distancing as the weather gets colder, coupled with the return of schools and a desperate need to get the economy moving again, will make it harder to keep a lid on the threat. So are we ready?

The good news is that some of what was lacking in March, and which led to such a disastrous outcome in the UK, is now in place. By no means all of that shortfall can be blamed on the present government; political leaders had for years ignored the warnings of specialists in infectious disease that a pandemic was a near certainty, the frightening lack of preparedness exposed by the 2016 Cygnus flu simulation was ignored while the nation was in the grip of Brexit-mania, the UK had no industrial infrastructure for generating testing capacity at short notice, and the NHS had been worn ragged by years of austerity. Besides, this was an entirely new virus, and little was known about how it spreads and harms the human body.

Significant headway has been made on some of those problems over the summer. The bad news is that it still might not be enough, and the outcome depends on many factors that are still all but impossible to predict. “We’ve got to up our game for the autumn”, says Ewan Birney, deputy director of the European Molecular Biology Laboratory, who heads its Bioinformatics Institute in Cambridgeshire. “We’ll be inside more. Universities and schools will be running. There will be a whole bunch of contacts that we don’t have now.”

“We can anticipate a lot more infections over the next few months”, says virologist Jonathan Ball of the University of Nottingham. The prime minister has advised hoping for the best and preparing for the worst, pledging that by the end of October there will be at least half a million tests for the virus conducted every day, and that the NHS will receive £3 bn of extra funding. But as Chris Hopson, chief executive of NHS Providers, says, much more is likely to be needed in the next month or two to keep Covid-19 under control.

The nightmare scenario, he says, is a combination of a second surge of Covid-19 with a particularly difficult outbreak of winter flu, alongside the normal pressures that winter puts on health services, while they are trying to restart services put on hold during the crisis period – and all this being faced by an exhausted staff.

“The NHS would struggle if all of that came together at once”, Hopson says. “We struggle with winter pressures at the best of times, with insufficient bed capacity and community care capacity to deal with the levels of demand that we get”. Covid-19 creates a capacity loss because of the need to keep people infected by the virus on separate wards from those who aren’t.

It’s not all gloom. The situation with personal protective equipment is now a lot better than in March, as is the availability of ventilators for severe cases (which turned out not to be so central anyway). What matters most, however, both for health services and for controlling the virus in the community, is the capacity for testing.

The lack of testing in the population was what largely hamstrung the response to the first wave – scientists and public health authorities were flying blind, not knowing how widespread the virus was or where it was concentrated. It was lack of testing that created the appalling spread of infection in care homes.

The situation now is very different. The UK is conducting tests as widely and as fast as most European countries: around 200,000 each day. Most of these are analysed in the Lighthouse Labs that were quickly set up for the task; repurposed academic labs throughout the country are also helping. “We’re in a much better position than we were at the start of the pandemic”, says molecular geneticist Andrew Beggs, who leads testing efforts at the University of Birmingham. “The government has massively increased the capacity for testing in a short space of time, and I’m more confident than I was two months ago that we’ve got a really good chance of successfully testing people.”

What we need, says Ball, is “sentinel surveillance”: actively going out and working out where infections are occurring, particularly in high-risk populations such as hospitals and care homes, but also schools and universities. The Office for National Statistics is collaborating with other bodies in a pilot survey that will test a representative sample of households in the general population – up to 150,000 people a fortnight by October – to gauge the extent of infection.

Most testing uses swabs to collect samples that are analysed for the presence of the virus, but it’s also possible to get an antibody test that reveals if you have had the virus without knowing it. Test results are almost always returned within 48 hours – much longer than that and they become of little value – and often within a day.

That’s important for several reasons. It alerts public health services and epidemiologists to dangerous hotspots of infections, so that they can be contained locally. It lets hospital staff know which patients can be safely kept on general wards, and whether they themselves are safe to be at work. Regular testing will be essential for frontline workers such as those operating public transport; at schools and offices it should not only tell people with suspicious symptoms whether they need to self-isolate but reveal whether the colleagues they came into contact with should do so.

Tests can also show how many people have now had the virus and are likely to have some level of immunity. Ball says that while it’s currently thought that perhaps 10% of the population have had Covid-19, some antibody results imply that the infection rate may have been much higher – as much as 50%. He suspects the actual number is somewhere in between. The more people have already been infected, the slower the virus might spread – and also, the lower the actual mortality rate is likely to be.
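
To make that last point concrete, here is a minimal sketch of the arithmetic, using round, purely hypothetical figures (the death toll and infected fractions below are illustrative assumptions, not data from the article): with a fixed number of deaths, the implied fatality rate falls as the assumed share of the population already infected rises.

```python
# Illustrative arithmetic only: the death toll and infected fractions below
# are hypothetical round figures, not reported data.
population = 67_000_000   # approximate UK population
deaths = 40_000           # hypothetical cumulative death toll

for infected_fraction in (0.10, 0.30, 0.50):
    infected = population * infected_fraction
    fatality_rate = deaths / infected
    print(f"{infected_fraction:.0%} infected -> implied fatality rate {fatality_rate:.2%}")
```

The same number of deaths spread across five times as many infections implies a fatality rate five times lower – which is why pinning down how many people have really been infected matters so much.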

What’s more, new types of test being developed by British companies such as Oxford Nanopore and DNANudge could reduce the waiting time to a few hours, or less, from a procedure as simple as spitting into a cup. They can also be much more portable. “That gives you a lot more options for where you put the testing”, says Birney (who is a consultant for Oxford Nanopore). It could become routine to make a test part of airport flight check-in; commercial centres could have a testing facility where office workers get checked out at the start of the day. These options are still a long way off – and they depend on whether the promising initial results from the new methods stand up, as well as on the companies’ unproven ability to scale up production. But “even if one technology doesn’t work out for rapid onsite screening, we have others in the pipeline”, says Beggs.

Another option is testing for the virus in sewage to keep track of infection levels in different parts of the country. From one test, you’re testing many thousands of people, says Birney. The Department for Environment, Food and Rural Affairs (Defra) has such a scheme underway, but it’s still too early to know how effective it will be.

Despite all this good progress, however, Hopson warns that there’s a lot to be done to create the testing regime that the NHS really needs. “Testing is one of the key issues we need to get right to prepare for winter, and there’s a long way to go to get to a fit-for-purpose operation,” he says. Both the number of tests and their speed will need to increase, and Hopson thinks that ideally we will need about a million tests a day by the end of December. “That’s a very tall order”, he says.

Tests will be crucial in health and care settings, where you need to know fast where a new patient should be put. For care homes, this information is vital to free residents from the need to be confined to their rooms. Epidemiologist Ruth Gilbert of University College London’s Institute of Child Health says that the loss of mobility and social interaction in care settings can accelerate mental and physical deterioration.

Equally crucially, the system needs to be joined up: a test result needs to go at once into people’s health records accessed by local GPs. And Hopson says there needs to be greater local control – at the moment the testing infrastructure is too nationally based.

“If you want to manage this risk, there’s a highly complex logistical operation with a complicated delivery chain”, Hopson says. “We need the funding to expand the capacity. We need the tests at volume. We need to set up the capacity close enough to where it’s needed. We need to get the computer systems joined up. It’s such a complex end-to-end process, from scientists developing tests to GP surgeries needing to see the care records, and local authorities, and it needs to operate at speed.”

It’s vital too that positive tests be followed up by effective contact tracing, so that others who might have been infected can self-isolate. “This is not working as well as it should”, says Hopson. “We’re losing too many people down that chain [of contacts].” The number of people being contacted and made to self-isolate is far lower in the UK than in other countries – and it’s not clear how much they are self-isolating anyway. “There has been no data published on it, and we know it’s not happening”, says Susan Michie, professor of health psychology at University College London.

This is as much a socioeconomic issue as a medical one. “People who are financially unable to self-isolate for 14 days need to be incentivized to do so”, says Hopson – their lost earnings need to be covered by the government. He points out that some places with high levels of outbreak tend to have higher proportions of ethnic minority communities where English is not the first language, and whose members are not always keen to interact with the state. This clearly needs sensitive handling – contact tracing must not seem “just a white middle-class operation”, he says.

Given the amount of preparation still to be done, many were alarmed by the news that Public Health England, the organization that oversees public health within the Department of Health, is to be replaced by a new organization called the National Institute for Health Protection. This will bring the tasks of PHE under the same authority as NHS Test and Trace and the new pandemic data hub, the Joint Biosecurity Centre.

“The last thing we need is reorganisation on top of this”, said Birney in response to the news, which came as a surprise to many like him who are involved in preparedness. “Even if this was the ultimately best chess move for a future pandemic preparedness, there is no way doing it mid-pandemic is sensible.” More than 200 public-health professionals signed a letter to The Telegraph in which they declared themselves “deeply disturbed by the news of another top-down restructure of the English public health system, particularly mid-pandemic, and without any forewarning for staff.”

But Hopson is more sanguine, saying that the move won’t involve large-scale restructuring of jobs. “I can see why everybody is jumping up and down”, he says, “but the leaders say to us that this is not a restructure.” Everyone will carry on doing their existing jobs – “it’s just that there’s a new interim team at the top level to link the parts together and create better coordination between them. Having two different organizations doesn’t make a lot of sense. Putting them under one leadership team seems to us to make good sense.” Gilbert hopes that the new agency will make its data more widely available than PHE did, to help advance the science.

One of the biggest and most controversial issues for the autumn is the return of schools. While there is a broad consensus that getting pupils back must be a priority, this will inevitably raise the risk of spreading the virus. Although still too little is known about how readily this happens via children, there is some evidence now that secondary-school pupils can catch and pass on the virus much as adults do, and that primary-school children can do so even if they suffer only mild symptoms – probably about 15-20% of children infected have no symptoms, says Sanjay Patel of the Royal College of Paediatrics and Child Health.

There are encouraging signs that schools might not be a big source of infection, though. Sweden left schools open and didn’t see lots of outbreaks or transmission, says Patel. Nor did teachers there have higher rates of infection – their rates were lower than those of taxi drivers and supermarket workers.

“Schools have been working incredibly hard to try to get measures in place for opening in September”, says Patel. They will aim to keep pupils within small contact groups or “bubbles”, but this is much easier at primary than secondary level, where pupils change groups for different subjects and are less inclined to observe distancing rules. “If there’s an outbreak in a school, then sensible decisions need to be made about whether a bubble, a year, or a school needs to be closed”, says Patel.

He predicts that schooling “will be hugely disrupted for individual children and families, for bubbles and for year groups – there will be closures and outbreaks, and lots of children will be in and out of school.” Children of course get lots of coughs and colds over winter, and “those children will have to be excluded at once until they get a test result back. That means their parents will also have to isolate for that period.” But he hopes that regular seasonal viruses might themselves spread less because of the new measures.

“We have some really good plans in place for this winter”, he says. “We’ve learnt a lot from the first surge, and there’s absolutely no feeling of panic.”

But he adds that there’s no zero-risk option either. “The best way of protecting against outbreaks in school is to minimize the amount of infection in the community”, he says. This means compensating for school openings with restrictions elsewhere. At the moment, he says, it seems that young people meeting in bars pose a far higher risk of spreading the virus than schools do. So “do we prioritize our ability to go and have a drink in the pub, or the future education of our children?”

“The government has done a lot wrong, but generally we’re making progress”, says Beggs. “The natural British constitution is to be a bit gloomy about our ability to do things, but if we could share all the achievements we’ve done in a more optimistic way, I think people would be more reassured.”

Ah, there’s the rub. Beggs is right to warn about the danger of trying to present everything in the worst possible light in order to discredit a government that performed so dismally in the initial outbreak (about which I’ve written elsewhere). This would be unhelpful, as well as unfair to the many authorities, scientists, health professionals and others who have worked so hard to improve the prospects. Yet the fact remains that the good work done on preparedness stands in stark contrast to the very public and very damaging missteps the government has taken and continues to take. The messaging is still confusing, even misleading: ministers (and some chief medical advisers) seem intent, for example, on stressing the low risk that Covid-19 poses to young children returning to school (so stop worrying, parents!), whereas the true danger there is transmission through the population generally. Announcements of local lockdowns have been woefully mismanaged. The alarm about the reorganization of PHE was deepened by the appointment as its head of Dido Harding, who has no public health experience, has a terrible track record in managing the Test and Trace system, and is married to a Conservative peer. While contracts do have to be awarded swiftly in circumstances like these, without the delay of a drawn-out tendering process, too many seem to be going to companies with close connections to the government and its advisers. Blunders like the exams fiasco (and the refusal of the government to accept blame or consequences) undermine public trust in our leaders even further.

This issue of trust will be crucial. Imposing local lockdowns to contain hotspots, identifying contacts of people who test positive, and persuading them to self-isolate would be a challenge at the best of times, and hinges on whether people understand what they are being asked to do and why, and whether they trust those making the rules. Studies have shown that public trust in the government has already been badly eroded, both by the mishandling and poor messaging of the first wave and by what many see as the betrayal represented by Dominic Cummings’ lockdown breaches. Scientific and public health systems can do all they can to prepare, but in the end so much will depend on leadership and execution. I have been encouraged by what I have heard about the preparation; about the leadership and execution, I fear I remain gloomy.