Tuesday, January 12, 2021

Free will and physics: the next instalment

 

I’m sorry that I seem to have forced Jerry Coyne to write about a subject he is avowedly tired of, namely free will. But my piece in Physics World inspired him to do so, if only to suggest it is all wrong.

Needless to say, I don’t agree. I’m happy to say why, although it must be at regrettably even greater length, given that just about every paragraph in his comments is misconceived.

But I’ll give you the short version first. If Coyne really is tired of writing about free will, he could have saved himself a lot of effort. He could have dropped the simple restatements of the “deterministic” case against free will (which were my starting point), skipped all the misrepresentations of what I said, and cut to the chase as follows:

“I don’t understand the scientific basis for Ball’s claim, but my hunch is that a couple of physicists I know would disagree with it. I’ll let readers argue that out.”

So that’s the executive summary. Here’s the rest.

First, a little flavour of the kind of thing that’s to come. At the start of the second half of his critique, Coyne says that my attacks on free will [sic – he means attack on attacks on free will] are misguided because I “do not appreciate that naturalism (determinism + quantum uncertainty) absolutely destroys the libertarian notion of free will held by most people.” This is such a peculiar statement, because my article was suggesting that this notion of naturalism doesn’t undermine free will. It’s not that I don’t “appreciate” that argument; it’s that I don’t agree with it. (I’m not sure quite what the “libertarian notion of free will held by most people” is precisely, because I haven’t asked them.) Surely Coyne of all people knows that convincing arguments are not simply made by declaring them correct by fiat? Isn’t that what he lambasts religious people for doing?

Now, let’s get this bit out of the way: “To say that psychological and neurological phenomena are different from physical phenomena is nonsense,” Coyne declares. This is the first of many plain misrepresentations of what I say. What I say – he even quotes it! – is that psychological and neurological phenomena are not meaningfully adjudicated by microphysics, by which I mean theories that begin with (say) subatomic particles. This is not the same as saying that the neural circuits involved in psychological and behavioural phenomena are not ultimately composed of such particles. The point of my article is to explain that distinction. As we’ll see, Coyne later admits that he doesn’t understand the scientific arguments that underpin the distinction. Hence my abridged version of his diatribe above.

Incidentally, Coyne alludes to experiments that allow us to predict “via brain monitoring what someone will do or choose.” This is presumably a reference to Libet-style experiments, conducted since the 1980s. As he has written on this topic before, I must assume that Coyne knows there has been a great deal of debate in the neurobiological and philosophical literature on whether they pronounce on free will at all. Only those who believe Coyne is correct about free will will absolve him of all responsibility for not mentioning that fact.

Coyne complains that I don’t define free will at the outset (although he seems oddly confident that whatever definition I choose, it is wrong). I don’t define it because I think it is a terrible term, which we seem lumbered with for historical reasons. A key aim of my article is in fact to suggest it is time to jettison the term and to talk instead about how we (and other creatures) make volitional decisions. This is an issue for cognitive neurobiology, and others have made an excellent start on outlining what such an endeavour might look like: for example here and here. I’m not sure if Coyne knows about this work; he makes no reference to it so perhaps I should assume he does not.

But is there really volitional behaviour at all, or is it all predetermined? That’s the key. Coyne admits that we can’t predict “with complete accuracy” what someone will do. Of course, there are lots of situations in life where a great deal of prediction is possible, sometimes simply on statistical grounds, sometimes on behavioural ones, and so on. No one disputes that.

So what do we mean by “with complete accuracy”? This is very clear. It means that, if Coyne is right, an all-seeing deity with complete knowledge of the universe could have predicted yesterday every action I took today, right down to, say, the precise moments I paused in my typing to sip my tea. It was all predetermined by the configuration of particles. 

If that were so, the unavoidable corollary is that everything that currently exists – including, say, the plot of Bleak House – was already determined in the first instants of the Big Bang.

Now, as far as we know, this is not the case. That’s because quantum mechanics seems to be fundamentally indeterminate: there is an unpredictability about which outcomes we will see, because all we can predict is probabilities. But that just adds randomness, not anything that can be construed as will. So we can say that the plot of Bleak House was determined by the initial conditions of the Big Bang, plus some unpredictable randomness.

As it is unprovable, this is a metaphysical statement. It’s hard to see how we can advance beyond it one way or another. What I’m suggesting is that, rather than get stuck in that barren place, we might choose more profitably to talk about causes. That way, we can actually raise some useful and even answerable questions about why we do what we do, including why Dickens wrote Bleak House.

But Coyne says “Screw cause and effect… as they are nebulous, philosophical, and irrelevant to determinism.” Well, I could just stop here – because it means Coyne has said “Oh, your argument that rests on cause and effect? I’m not even going to think about it.” I’m not sure why he didn’t have the honesty to admit that, but hey. It’s true that causation is a very thorny philosophical issue indeed – but it also happens to be at the core of my notion of free will. Because it seems to me that the only notion of free will that makes much sense is not “I could have done otherwise” (which is also metaphysical, because you could never prove it – if your argument depends on working up from the exact microphysics of the situation, you can never conduct the same experiment twice) but “I – my mind, me as an organism – caused that to happen. Not the conditions in the Big Bang plus some randomness, but me.” And then of course we can argue about what “me” means, and how the mind is constructed, and all the rest of it, and we’ll find that it’s terribly complicated, but we’re arguing and constructing hypotheses and testing them in the right place, which is neuroscience and not microscopic physics.

So everything that follows that statement by Coyne that he’s not interested in debating causation is a sideshow, though it goes on for a very long time. (Later he returns to causation by saying I have confused notions about it. But he forgets to say why, or elects not to.) Still, let’s proceed.

“Is there anything we know about science that tells us that we can “will” ourselves to behave differently from how we did? The answer is no. We know of nothing about physics that would lead to that conclusion.” This is a restatement of the tired old idea that to posit “free will” means invoking some mysterious force outside of physics. I hope I have made it clear that I don’t do that. But let me say it again: I don’t believe there is anything operating when I make a decision beyond (as far as we know them) the fundamental forces of nature acting between particles. What I am saying is that it is wrong, perhaps even meaningless, to speak of all those countless interactions as the “cause” of the behaviour. What caused Dickens to write Bleak House? “Well, in the end, it has to be the Big Bang plus quantum randomness.” Really, that’s the hill you want to die on?

So when Coyne expresses outrage that I say it is “metaphysical” that “underlying our behavior are unalterable laws of physics?”, he has created an obvious straw man. What I in fact said – as careful readers might have noted – is that arguments that “free will is undermined by the determinism of physical law… claim too much jurisdiction for fundamental physics [and] are not really scientific but metaphysical.” This is not the same thing at all – precisely because of my assertion that we must judge such jurisdiction on the grounds of causation.

But straw men are about to appear in abundance. Coyne accuses me of one when I say:

“If the claim that we never truly make choices is correct, then psychology, sociology and all studies of human behaviour are verging on pseudoscience. Efforts to understand our conduct would be null and void because the real reasons lie in the Big Bang.”

This is a strawman, he says, “because none of us deny that there can be behavioral science, and that one can study many aspects of human biology, including history, using the empirical tools of science: observation, testing, falsification, and a search for regularities… Although the “laws” of human behavior, whether collective or instantiated in an individual, may not be obeyed as strictly as the laws of physics, all of us determinists admit that it is fruitful to look for such regularities on the macro level—at the same time we admit that they must comport with and ultimately derive from the laws of physics.”

I find the extent of Coyne’s miscomprehension here astonishing. He goes on: how dare I call behavioural or social sciences pseudoscience, or history “just making up stories”, or say that behavioural regularities are just “peculiar coincidences” and nothing to do with evolution!

Now, there are a few clues that perhaps this is not what I’m saying or believing – like for example the fact that I wrote an entire book (more than one, actually) on how ideas from physics about how regularities and patterns arise in complex systems can be of value in understanding social science and economics. If Coyne had given a damn about who this chap he was criticising actually was, he might have discovered that and – who knows? – perhaps experienced a moment of cognitive dissonance that led him to wonder if he was actually understanding this article at all. That could have saved him some trouble. Still, onward.

In any case, he says, none of us determinists believe all those terrible things about the behavioural sciences and all the rest! It’s a straw man!

But my point is this: Sure, you don’t think those things. You all (I suspect) recognise the value of the behavioural and social sciences and so forth. But that’s because you haven’t really examined the implications of your belief.

Here’s why. If you believe that everything that happens (let’s put aside the complication of quantum indeterminism for now) was preordained in the Big Bang – that the universe unfolds inexorably from that point as particle hits particle – then you really cannot sustain a genuine belief in behavioural sciences as true sciences. Let’s say that a behavioural scientist deduces that people behave a certain way, Y, in the presence of influence X, and so goes on to conduct an experiment in which X is withheld from the subjects, to see if their behaviour changes. And it does! So, there’s a fair case to be made that X is a causal influence on behaviour.
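
For concreteness, here’s a toy version of that experiment in Python. It’s a minimal sketch with invented numbers – the assumption that behaviour Y occurs with probability 0.6 when X is present and 0.3 when it is withheld is purely illustrative – but it shows the kind of macro-level regularity on which the causal claim rests:

```python
import random

random.seed(0)  # make the illustration reproducible

def behaves_y(x_present):
    """Hypothetical subject: behaviour Y is more likely when influence X is present.
    The probabilities 0.6 and 0.3 are invented purely for illustration."""
    p = 0.6 if x_present else 0.3
    return random.random() < p

n = 10_000  # subjects per group
rate_with_x = sum(behaves_y(True) for _ in range(n)) / n
rate_without_x = sum(behaves_y(False) for _ in range(n)) / n

print(f"frequency of Y with X present:  {rate_with_x:.3f}")
print(f"frequency of Y with X withheld: {rate_without_x:.3f}")
```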

But it’s not really so, is it? What you have to believe is that the conditions in the Big Bang caused a universe containing people of such a nature that behaviour Y tends statistically to be correlated with condition X. When we say “X causes Y”, we don’t mean that. There’s no genuine causal relationship involved; it’s just, as I say, “an enumeration of correlations”. I don’t care about dictionary definitions of “pseudoscience” (and Coyne only does, it seems, because he thinks I’m calling behavioural science a pseudoscience and wants to prove me wrong). But I do know that it is very common in pseudoscience to mistake correlation for causation.

I guess it might be possible to imagine a kind of science that employs “observation, testability, attempts at falsification, and consensus” while never rising above the level of documenting correlations, and never imputing any sort of causal mechanism. But I’m not sure I can think of one. What I am saying is that, if Coyne’s vision of determinism were true, behavioural sciences could never talk factually about mechanism and causation – or if they did, they’d not be speaking any kind of truth, but just a convenient story.

Still, I guess the best way is to find out. We could ask behavioural and social scientists if they are content to regard the objects of their studies as automata blindly carrying out computations – which is what Coyne’s view insists – or whether (at least sometimes) we should regard them as agents making genuine decisions. I’m pretty sure I know already the answer many neuroscientists would give, because some have told me.

At any rate, the basic point should be clear now: you don’t refute a reductio ad absurdum by crying “But that’s absurd!”

Well, on with the cognitive dissonance. Coyne says I “give the game away” by betraying that I can’t believe in free will after all, because I say:

“Classical chaos makes prediction of the future practically impossible, but it is still deterministic. And while quantum events are not deterministic – as far as we can currently tell – their apparently fundamental randomness can’t deliver willed action.”

“In other words” Coyne says, “physics, which Ball admits has to comport with everything at a “higher level”, can’t deliver willed action. Thus, if you construe free will in the libertarian, you-could-have-done-otherwise sense, then Ball’s arguments show that we don’t have it.” I’m not sure what to make of this. Does Coyne not realise that, by stating these things at the outset I am aiming to lay out the case to be addressed, and to avoid some spurious defences of free will that pin it all on some kind of fundamental indeterminacy? Does he not realise that, when one starts off presenting an argument by saying “Well, here’s the thing I’m seeking to challenge”, it is not a very impressive counter-argument to say “Ah but you just said that very thing, so you must believe it too!”?

Next. Evolution: I could have guessed this would be a sticking point! (Actually I did; that’s why I raised it.) 

I say:

“What “caused” the existence of chimpanzees? If we truly believe causes are reducible, we must ultimately say: conditions in the Big Bang. But it’s not just that a “cause” worthy of the name would be hard to discern there; it is fundamentally absent.”

In response, Coyne says:

“If Ball thinks biologists can figure out what “caused” the evolution of chimps, he’s on shaky ground. He has no idea, nor do we, what evolutionary forces gave rise to them, nor the specific mutations that had to arise for evolution to work. We don’t even know what “caused” the evolution of bipedal hominins, though we can make some guesses. We’re stuck here with plausibility arguments, though some assertions about evolution can be tested (i.e., chimps and hominins had a common ancestor; amphibians evolved from fish, and so on). And yes, that kind of testing doesn’t involve evoking the laws of physics, but so what?”

It’s hard to know where to begin with this. What he is talking about in terms of efforts to understand the evolution of chimps is precisely the same as what I’m talking about: one might look, for example, at morphological changes in the fossil record, and if possible at changes in genomics, and how they correlate. One does comparative genomics. One might frame hypotheses about changes in habitat and adaptations to them. In other words, I raise the notion of a “theory of chimp formation” as another reductio ad absurdum. I don’t believe biology should be aiming for such a thing, or that it is even meaningful. Rather I think it should be doing precisely what it is: making hypotheses about how chimps evolved on the basis of the available evidence.

The issue, though, is whether one regards this as renormalised physics. Coyne does. I am not sure all his colleagues would agree. I don’t mean that they would say (as he might), “Well, what we’re doing is just a more useful higher-level abstraction of the basic physics.” I suspect many would say that thinking about evolution as coarse-grained physics is of no value to what they do, and so they (rightly) don’t bother even to give it any thought.

But this does NOT mean there is anything except physics operating at the microscopic level of particles.

What does it mean then? That gets to the crux of the matter. What I’m suggesting is that it means we shouldn’t be considering causation as only and entirely bottom-up.

That is the point of the piece. And finally, after much huffing over straw men, Coyne gets to it. What does he have to say about it?

It is, he says, “something I don’t fully understand”. 

OK, so perhaps it would be best for him to leave it there. Sadly, he does not.

“As far as I do understand it”, he says, “it doesn’t show that macro phenomena result from the laws of physics, both deterministic and indeterministic, acting at lower levels. To me the concept is almost numinous.”

I don’t even know what this means. “It doesn’t show that macro phenomena result from the laws of physics acting at lower levels.” Huh? What then does he think it does show? That there’s some mysterious non-physical force at work? I’ve really no idea what he is trying to say here.

The idea of top-down causation, in the forms I’ve seen it, shows in fact that systems in which there are nothing but the laws of physics acting at lower levels nevertheless display causation that can’t be ascribed to those lower levels. 

Remember causation? That thing my argument was based on? Does Coyne agree with the arguments for the existence of top-down causation in complex systems? If not, why not? 

But it seems he doesn’t much care: he’ll “let readers argue this out”. Still, he adds, “if physicists like Sean Carroll and Brian Greene are not on board with this—and as far as I know, they aren’t—then I have reason to be skeptical.”

Really? An “argument from authority” – and one moreover that discounts the authority of Nobel laureates such as Phil Anderson? That’s the basis of his case?

Does he even know the position of Sean Carroll and Brian Greene on this? Has he asked them? Is there any evidence that they have considered such arguments? (Greene doesn’t mention it in his book.)

(By the way, I don’t think I “denigrate” (=“criticise unfairly”) Greene’s view in Until the End of Time. I simply disagree with it. If Coyne had more curiosity, it would have been very easy to discover that, while I bring up this point in my review of Greene’s book, I also had some good things to say about it.)

(And incidentally, Sean Carroll has written on top-down causation, but not in a way that is germane here. In The Big Picture, he dismisses the need to invoke it in snowflake formation - and I agree with him there. And in his blog here, he criticises John Searle’s view of consciousness from this perspective. But Searle believes consciousness is somehow a non-physical entity beyond science. That has nothing to do with the work I allude to. Where top-down causation matters is in discussing questions of agency.)

Truly, I had to ask myself: this is it? The reason Coyne thinks my piece is wrong is that (apart from reasserting the same tired old arguments about determinism) he doesn’t fully understand the science on which it’s based, but he suspects a couple of his pals might not buy it, and so that’s good enough for him?

Oh well. Onward.

Coyne says I’m wrong to say that dispelling the idea of free will has no implications for anything. Actually I don’t say that at all (I think I’m sensing a pattern here). I say it is rather telling that those who claim to have dispelled free will seem oddly keen to say we should go on acting as though it really is a thing.

No we don’t, Coyne says! We say that because there’s no free will, we should be “less retributive, more forgiving.” And this is precisely my point. If you don’t believe in free will, why should you be retributive or forgiving at all? In that case, none of what we do is our fault, because it was ordained in the Big Bang (plus randomness). That’s all there is to it. 

This is what I mean: those who deny free will don’t have the courage of their convictions. They feel obliged to resurrect it, or the ghost of it, to avoid having to absolve us of all responsibility. But they don’t seem to know how to do that, other than with arm-wavy statements like this: “I still think people are “responsible” for their actions, but the idea of “moral” responsibility is connected with “you-could-have-chosen-to-do-otherwise.”” So they are responsible but not morally responsible? Then responsible in what way, exactly? What kind of responsibility can stem from predeterminism? He doesn’t say.

Why, if there’s no free will, would we take any action at all to try to change people’s behaviour? After all, we can’t then have a genuinely causal influence on what they do. I guess in this case free-will deniers will say to themselves: “well, I know I’m not really deciding to do this, it’s just my automaton-brain playing out the 13.8-bn-year plan of the Big Bang, but then again, if I don’t then I suspect that 13.8-bn-year-old plan will include this person reoffending, and so I guess I’d better, but all the same I’m not choosing this but just telling myself I am because that’s what brains do, and so I guess I’m stuck with this belief that I personally have a causal effect on the future, but I don’t, and I must deny it, but there’s actually no must about it because that concept doesn’t exist either…” Or something. God knows what their narrative is. Perhaps it’s just “well I still have this gut feeling that that person is responsible in some way for what they do but I don’t really know what that means.”

What Coyne is talking about, I suspect, is the recognition that people vary in the degree to which they can truly decide on their actions. There are all kinds of influences that determine this: their past history, their social circumstances, the specific nature of their brain (part innate, part conditioned), whether they’ve just eaten… There’s a gradation from volitional to totally non-volitional (like reflexes). In a fair and just society, we already recognise this. So we try to make our rules and judgements by considering such factors, and trying to make a fair assessment of degrees of culpability, and thinking about what – if we punish someone for their actions – we might hope to achieve by it. We work at the macro level at which we can think meaningfully about cause and effect. We don’t argue about physics and the Big Bang. We don’t do that, not because it would be an awfully hard way to reach a judgement about the situation, or because we lack the computational resources, but because we know it would be meaningless.

Because this is by no means the first time I’ve seen smart people transmuted into abysmal readers, I’m genuinely curious about what makes that happen. I have a hypothesis, though it would be hard to test. I think they start by reading the title or headline, thinking “Well I profoundly disagree with that”, and then let that preconceived judgement prevent them from actually reading the argument and assessing the rhetorical or logical trajectory of the piece. Instead they just read one sentence at a time and – without asking “Is this part of the author’s position, or the position he/she is setting out to attack?”, “Is this a rhetorical structure?” and so on – decide for themselves what they think the sentence means and then consider how they can disagree with it. In Coyne’s case I fear that situation is compounded by his evident conviction that dismantling free will is part of his crusade against “religionists.”

Sometimes when I see this happen, I’m forced to wonder how science sustains any discourse at all. But fortunately, it seems to manage.

I guess I have been harsh here in some places, but I’m happy to take responsibility for that. I do think it was me that chose to write this, and not the Big Bang. And you do too really, don’t you?

PS If you read Coyne’s second article and go looking for my piece in Physics Today, you won’t find it. It was in Physics World. To judge from a glance at his comments thread, that’s a moot point anyway, as I saw little sign that most commenters were bothering to look at the article. The one chap who evidently did, agreed with me.

Sunday, December 13, 2020

More on free will, and why quantum mechanics can't help you understand football

 

I’ve had some stimulating further discussion with Philip Goff and Kevin Mitchell on whether quantum mechanics can illuminate the free-will problem. Kevin has responded to our comments here; Philip’s have been on Twitter. Here’s where it all leaves me at this point.

First, here’s where I think we all agree:

(1) Events at the quantum scale can be adequately described by quantum mechanics – for our purposes, nothing more is needed.

(2) There’s no missing “force of nature” that somehow intervenes in matter as a result of “free will”.

(3) The future is not predetermined, because of quantum randomness: at any given moment, various futures are possible.

Kevin’s argument is, as I understand it, that agents with free will are able to select from these possible futures.

Philip’s objection is that this is not how quantum mechanics works: those futures are determined by the probabilities they can be assigned from the Born rule.

I’m sympathetic to that observation: it isn’t at all clear to me how anything called free will can somehow intervene in a quantum process, however complex, to “select” one of its possible futures.

My objection to Philip’s point was, however, to the scenario he uses to illustrate it – where he decides whether or not to water his plant (called Susan). It seems to me to be ill-posed. I’m averse in general to thought experiments that don’t stack up in principle, and this seems to me to be one.

To calculate the Born probabilities for this situation, you would need to know the complete initial state of the system and the Hamiltonian that determines how its wavefunction evolves in time. Now, it is no good supposing we can define some generic “state of Philip confronted with thirsty Susan”. I’m not even sure what that could mean. How do we know what we need to include in the description to make a good prediction? What if Philip’s cell phone goes off just before he is about to water Susan, and calls him away on an emergency? How much of the world must we include for this calculation? And we’re looking to calculate the probability of outcome X, which quantum mechanics can enable us to do – so long as we know the target state X. But what is this? Is it one in which Susan stands in damp soil and Philip’s watering can is empty? But how do we know that he added the water of his own free will? What if in the initial state he knew someone would shoot him later if he didn’t add the water? Does that still count as “free will”? I mean, he could in principle still refuse to water Susan, but it’s not what we would usually consider “free will”. But perhaps then our initial state needs to be one in which Philip has no such thought in his head. Had we better have a list of which thoughts are and aren’t allowed in that initial state? But whichever initial state we choose, we can never do the experiment anyway to see if the predictions are borne out, because we could never recreate it exactly.

My point is that we should not be talking about scenarios like this in terms of quantum states and wavefunctions, because that’s not what quantum mechanics is for. We can run an experiment many times that begins with a photon in a well-defined state and ends with it in another well-defined state as it evolves under a well-defined Hamiltonian, and quantum mechanics will give us good predictions. But people are not like photons. Even though fundamentally their components are of course quantum particles obeying quantum rules, it is not just ludicrous but meaningless to suppose that somehow we can use quantum theory to make predictions about them – because the kind of states we care about (does Philip do this?) are not well-defined quantum states, and the trajectories of any such putative states are not determined by well-defined Hamiltonians.

It seems to me the distinction here is really between quantum physics as a phenomenon and quantum mechanics as a theory. I don’t think anyone would dispute that quantum physics is playing out in a football match. But it seems to me a fundamental mistake to suppose that the formalism of quantum mechanics can (let alone should) be used to describe it, because that formalism does not involve the kinds of things that are descriptors of football matches, and vice versa. (Philip’s “watering a plant” scenario is of course much closer to a football match than to a Stern-Gerlach experiment.) It’s not just that the quantum calculations are too complex; the machinery of calculation is not designed for that situation. Indeed, we are only just beginning to figure out how to use that machinery to describe the simplest couplings of quantum systems to their environment, and these are probably probing the limits not just of what is tractable but what is meaningful.

Does this objection, though, negate Philip’s point that free will can’t determine the outcome of a quantum process, as (ultimately) all processes are? In one sense, no. But my point is really that the answer to this is not legitimately yes or no, because I’m not sure the question has any clear meaning. The scenario Philip is depicting is one in which there is some massively complex wavefunction evolving in time that describes the whole system – him with watering can and potted plant – and somehow that evolution is steered by free will. But – and I think this is where I do agree with Kevin – I don’t believe this is the right way to describe the causation in the system.

I don’t just mean it is not an operationally useful way to do that. I think it is fundamentally the wrong way to do it.

Here’s an example of what I mean. Imagine a tall tower of Jenga bricks. Now imagine it with one of the bottom bricks removed, so that it’s unstable. The tower topples. What caused it to topple? Well, gravity and the laws of mechanics. Fine.

Now here’s the same tower, but this time we see what brought it to the state with the bottom brick removed: a child came along and took the brick. What caused it to fall? You could say exactly the same: gravity and mechanics. But we’re actually asking a different question. We’re asking not what caused the tower with the brick missing to fall, but what caused the tower with the brick still in place to fall – and the answer is that the child turned it into the unstable version. The child’s action was the cause.

When we try to speak of free will in terms of microphysics, we are confusing these two types of causal stories. We’re saying, Ah, the child acting is really just like the tower minus brick falling: physics says that’s the only thing that can happen. But what physics says that, exactly? Unlike the case of the tower falling, we can’t actually give an account of the physics behind it. So we just say, Ah, it’s somehow all there in the particles (why not the quarks? The strings, or whatever your choice of post-standard-model theory? But no matter), and I can’t say how this leads to that exactly, but if I had a really big computer that could calculate all the interactions, and I knew all the initial conditions, I could predict it, because there’s nothing else in the system. But that’s not a causal explanation. It is just a banal statement that everything is ultimately just atoms and forces. Yes it is – but at that level the true cause of the event has vanished, rather in the way that, by the time you have reduced a performance of Beethoven’s Eroica to acoustic vibrations, the music has vanished.

(This analogy goes deeper, because in truth the music is not in the acoustic waves at all, but in the influence they have on the auditory system of people attuned to hearing this kind of music so that they have the appropriate expectations. There is music because of the history of the system, including the deep evolutionary history that gave us pattern-seeking minds. So it makes sense to explain the effects of the music in terms of violations of expectation, enharmonic shifts and so on, but not in terms of quantum chromodynamics. You will simply not get a causal explanation that way, but just an (absurdly, opaquely complicated) description of underlying events.)

And you see that this argument has nothing to do with quantum mechanics, which is why I think quantum indeterminacy is a bit of a red herring. Free will – or better, volition – needs to be discussed at the level on which mental processes operate: in terms of the brain systems involved in decision-making, attention, memory, intention and so on.

The basic problem, then, is in the notion that causation always works from the bottom up, aggregating gradually in a sort of upwards cascade. There is good reason to suppose that it doesn’t – and that it is especially apt not to in very complex systems. Looked at this way, the microphysics is irrelevant to the issue, because the issue itself is not meaningful at the quantum level. At that level, I’m not sure that the matter of whether “things could have been otherwise” is really any different from the fact that things only turn out one way. (It could be interesting to pose all this in a Many Worlds context – but not here, other than to say I think Many Worlds makes the same mistake of supposing that quantum mechanics can somehow be casually welded onto decision theory.) Beyond quantum randomness, the notion that “things could have been otherwise” is a metaphysical one, because you could never prove it either way. Best, then, to jettison all of that and simply consider how decision-making works in cognitive and neurological terms. That’s how to make sense of what we mean by free will.

Friday, December 11, 2020

Does quantum mechanics rescue free will?

 

Philip Goff has challenged Kevin Mitchell’s interesting supposition that the indeterminacy of quantum physics creates some “causal slack” within which free will can operate. In essence, Kevin suggests (as I understand it) that quantum effects create a huge number of possible outcomes of any sufficiently complex scenario (like human decision-making), among which higher-level mechanisms of organismic agency can act to select one.

Philip responds that this won’t do the trick, because even though quantum mechanics can’t pronounce on which outcome will be observed for a quantum process with several possible outcomes, it does pronounce on the probabilities. He gives the example of his decision to water his dragon tree Susan (excellent name):

“Let’s say the Born rule determines that there’s a 90% chance my particles will be located in the way they would be if I watered Susan and a 10% chance they’ll be located in the way that corresponds to not watering Susan (obviously this is a ludicrously over-simplistic example, but it serves to make the point). Now imagine someone duplicated me a million times and waited to see what those million physical duplicates would decide to do. The physics tells us that approximately 900,000 of the duplicates will water Susan and approximately 100,000 of them will not. If we ran the experiment many times, each time creating a million more duplicates and waiting for them to decide, the physics tells us we would get roughly the same frequencies each time. But if what happens is totally up to each duplicate – in the radical incompatibilist sense – then there ought to be no such predictable frequency.”
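
Philip’s “predictable frequency” is just the law of large numbers at work. As a minimal sketch – taking his deliberately over-simplistic 90% figure at face value, and assuming each duplicate “decides” independently – here’s what repeated runs of a million duplicates look like in Python:

```python
import random

random.seed(1)  # make the illustration reproducible

P_WATER = 0.9          # Philip's toy Born-rule probability of watering Susan
N_DUPLICATES = 1_000_000

# Each duplicate "decides" independently with probability P_WATER.
for run in range(1, 4):
    watered = sum(random.random() < P_WATER for _ in range(N_DUPLICATES))
    print(f"run {run}: {watered:,} of {N_DUPLICATES:,} duplicates water Susan")
```

Run after run, the count comes out at 900,000 give or take a few hundred – the predictable frequency Philip describes.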

It’s a good point, insofar as it needs an answer. But I think one exists: specifically, Philip’s scenario doesn’t really have any meaning. In this respect, it suffers from the same defect that applies to all attempts to reduce questions of human behaviour (such as those that invoke “free will”, a historically unfortunate term that deserves to have scare quotes imposed on it) to microphysics. The example Philip chooses is not “ludicrously over-simplistic” but in fact ill-defined and indeterminate. I don’t believe we could ever determine what is the configuration of Philip’s particles that predisposes him to water Susan. It’s not a question of this being just very, very difficult to ascertain; rather, I don’t see how such a configuration can be defined at the quantum level. We would presumably need to exclude all configurations that lead to other outcomes entirely – but how? What are the quantum variables that correspond to <watering Susan> or <not watering Susan (but otherwise doing everything else the same, so not cutting Susan in half either)>? What counts as “watering Susan”? Does a little water count? Is watering Susan before lunch the same as watering Susan after? This is not a simple binary issue that can be assigned Born probabilities – and neither can I see how any other human decision-making process is. (“Oh come on: what about ‘Either I press a button or I don’t’”? But no, that’s not the issue as far as free will is concerned – it’s ‘Either I decide of my own volition to press the button, and I do it, and the button works’ or not. And what then is the quantum criterion for ‘of my own volition’? How do we know it was that? What if I was bribed to do it?... and so on.)

Obviously such scenarios could go on ad infinitum, and the reason is that quantum mechanics is the wrong level of theoretical description for a problem like this. We simply don’t know what the right variables are: where, in an astronomically complex many-particle wavefunction, to carve the joints that correspond to the macroscopic descriptions. And again, I don’t think this is (as physicists often insist) just a problem of lack of computational power; it’s simply a question of trying to apply a scientific theory in a regime where it isn’t appropriate. The proper descriptors of whether Philip waters Susan are macroscopic ones, and likewise the determinants of whether he does so. At the quantum scale they don’t just get intractably hard to discern, but in fact vanish, because one is no longer speaking at the right causal level of description.

This is, in fact, the same reason why Schrödinger’s cat is such an unhelpful metaphor. No one has ever given the vaguest hint at what the wavefunctions of a live and dead cat look like, and I would argue that is because “live” and “dead” can’t be expressed in quantum-mechanical terms: they are not well-defined quantum states.

I don’t necessarily argue that this rescues Kevin’s idea that quantum indeterminacy creates space for free will. I’m agnostic about that, because I don’t think what we generally mean by free will (which we might better call volitional behaviour) has any meaning at the quantum level, and vice versa. It’s best, I think, to explain phenomena at the conceptual/theoretical level appropriate to them. As Phil Anderson said years ago, it’s wrong to imagine that just because there’s reducibility of physical phenomena, this implies a reductive hierarchy of causation.

You’ll see very soon in Physics World why I’m thinking about this…

Tuesday, August 25, 2020

Is the UK ready for a Covid winter?

To prepare my article for The Guardian on whether the UK is prepared for a Covid winter, I spoke to many experts who gave a great deal of helpful information and advice. Only a small part of that could be fitted into the article, and I thought it would be helpful to put some more of it out there. So here is the longer version of that article.

_________________________________________________________________________

No one knows what Covid-19 will bring in the coming months, but no one well-informed takes seriously Boris Johnson’s claim that it could all be back to normal by Christmas. With local outbreaks already prompting lockdowns in Leicester, Manchester and Preston, and cases rising at an alarming rate in Spain and Germany, it’s entirely possible that there will be grim days ahead. The faster spread of the coronavirus and the greater difficulty of maintaining social distancing as the weather gets colder, coupled with the return of schools and a desperate need to get the economy moving again, will increase the challenge of keeping a lid on the threat. So are we ready?

The good news is that some of what was lacking in March, and which led to such a disastrous outcome in the UK, is now in place. By no means all of that shortfall can be blamed on the present government; political leaders had for years ignored the warnings of specialists in infectious disease that a pandemic was a near certainty, the frightening lack of preparedness exposed by the 2016 Cygnus flu simulation went unheeded while the nation was in the grip of Brexit-mania, the UK had no industrial infrastructure for generating testing capacity at short notice, and the NHS had been worn ragged by years of austerity. Besides, this was an entirely new virus, and little was known about how it spreads and harms the human body.

Significant headway has been made on some of those problems over the summer. The bad news is that it still might not be enough, and the outcome depends on many factors that are still all but impossible to predict. “We’ve got to up our game for the autumn”, says Ewan Birney, deputy director of the European Molecular Biology Laboratory, who heads its Bioinformatics Institute in Cambridgeshire. “We’ll be inside more. Universities and schools will be running. There will be a whole bunch of contacts that we don’t have now.”

“We can anticipate a lot more infections over the next few months”, says virologist Jonathan Ball of the University of Nottingham. The prime minister has advised hoping for the best and preparing for the worst, pledging that by the end of October there will be at least half a million tests for the virus conducted every day, and that the NHS will receive £3 bn of extra funding. But as Chris Hopson, chief executive of NHS Providers, says, much more is likely to be needed in the next month or two to keep Covid-19 under control.

The nightmare scenario, he says, is a combination of a second surge of Covid-19 with a particularly difficult outbreak of winter flu, alongside the normal pressures that winter puts on health services, while they are trying to restart services put on hold during the crisis period – and all this being faced by an exhausted staff.

“The NHS would struggle if all of that came together at once”, Hopson says. “We struggle with winter pressures at the best of times, with insufficient bed capacity and community care capacity to deal with the levels of demand that we get”. Covid-19 creates a capacity loss because of the need to keep people infected by the virus on separate wards from those who aren’t.

It’s not all gloom. The situation with personal protective equipment is now a lot better than in March, as is the availability of ventilators for severe cases (which turned out not to be so central anyway). What matters most, however, both for health services and for controlling the virus in the community, is the capacity for testing.

The lack of testing in the population was what largely hamstrung the response to the first wave – scientists and public health authorities were flying blind, not knowing how widespread the virus was or where it was concentrated. It was lack of testing that created the appalling spread of infection in care homes.

The situation now is very different. The UK is conducting tests as widely and as fast as most European countries: around 200,000 each day. Most of these are analysed in the Lighthouse Labs that were quickly set up for the task; repurposed academic labs throughout the country are also helping. “We’re in a much better position than we were at the start of the pandemic”, says molecular geneticist Andrew Beggs, who leads testing efforts at the University of Birmingham. “The government has massively increased the capacity for testing in a short space of time, and I’m more confident than I was two months ago that we’ve got a really good chance of successfully testing people.”

What we need, says Ball, is “sentinel surveillance”: actively going out and working out where infections are occurring, particularly in high-risk populations such as hospitals and care homes, but also schools and universities. The Office for National Statistics is collaborating with other bodies in a pilot survey that will test a representative sample of households in the general population – up to 150,000 people a fortnight by October – to gauge the extent of infection.

Most testing uses swabs to collect samples in which the presence of the virus can be detected, but it’s also possible to get an antibody test that reveals if you have had the virus without knowing it. Test results are almost always returned within 48 hours – much longer than that and they become of little value – and often within a day.

That’s important for several reasons. It alerts public health services and epidemiologists to dangerous hotspots of infections, so that they can be contained locally. It lets hospital staff know which patients can be safely kept on general wards, and whether they themselves are safe to be at work. Regular testing will be essential for frontline workers such as those operating public transport; at schools and offices it should not only tell people with suspicious symptoms whether they need to self-isolate but reveal whether the colleagues they came into contact with should do so.

Tests can also show how many people have now had the virus and are likely to have some level of immunity. Ball says that while it’s currently thought that perhaps 10% of the population have had Covid-19, some antibody results imply that the infection rate may have been much higher – as much as 50%. He suspects the actual number is somewhere in between. The more people have already been infected, the slower the virus might spread – and also, the lower the actual mortality rate is likely to be.
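
The arithmetic behind that last point is simple: the implied infection fatality rate is deaths divided by infections, and infections are prevalence times population. Here’s a back-of-envelope sketch – the death toll is an invented round number and the population figure approximate, purely for illustration:

```python
# Back-of-envelope: how the implied infection fatality rate (IFR)
# falls as the assumed fraction of the population already infected rises.
uk_population = 67_000_000  # approximate UK population
deaths = 40_000             # invented round figure, for illustration only

for prevalence in (0.10, 0.30, 0.50):
    infections = prevalence * uk_population
    ifr = deaths / infections
    print(f"prevalence {prevalence:.0%}: implied IFR ≈ {ifr:.2%}")
```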

What’s more, new types of test being developed by British companies such as Oxford Nanopore and DNANudge could reduce the waiting time to a few hours, or less, from a procedure as simple as spitting into a cup. They can also be much more portable. “That gives you a lot more options for where you put the testing”, says Birney (who is a consultant for Oxford Nanopore). It could become routine to make a test part of airport flight check-in; commercial centres could have a testing facility where office workers get checked out at the start of the day. These options are still a long way off – and they depend on whether the promising initial results from the new methods stand up, as well as the companies’ unproven ability to scale up production. But “even if one technology doesn’t work out for rapid onsite screening, we have others in the pipeline”, says Beggs.

Another option is testing for the virus in sewage to keep track of infection levels in different parts of the country. From one test, you’re testing many thousands of people, says Birney. The Department for Environment, Food and Rural Affairs (Defra) has such a scheme underway, but it’s still too early to know how effective it will be.

Despite all this good progress, however, Hopson warns that there’s a lot to be done to create the testing regime that the NHS really needs. “Testing is one of the key issues we need to get right to prepare for winter, and there’s a long way to go to get to a fit-for-purpose operation,” he says. Both the number of tests and their speed will need to increase, and Hopson thinks that ideally we will need about a million tests a day by the end of December. “That’s a very tall order”, he says.

Tests will be crucial in health and care settings, where you need to know fast where a new patient should be put. For care homes, this information is vital to free residents from the need to be confined to their rooms. Epidemiologist Ruth Gilbert of University College London’s Institute of Child Health says that the loss of mobility and social interaction in care settings can accelerate mental and physical deterioration.

Equally crucially, the system needs to be joined up: a test result needs to go at once into people’s health records accessed by local GPs. And Hopson says there needs to be greater local control – at the moment the testing infrastructure is too nationally based.

“If you want to manage this risk, there’s a highly complex logistical operation with a complicated delivery chain”, Hopson says. “We need the funding to expand the capacity. We need the tests at volume. We need to set up the capacity close enough to where it’s needed. We need to get the computer systems joined up. It’s such a complex end-to-end process, from scientists developing tests to GP surgeries needing to see the care records, and local authorities, and it needs to operate at speed.”

It’s vital too that positive tests be followed up by effective contact tracing, so that others who might have been infected can self-isolate. “This is not working as well as it should”, says Hopson. “We’re losing too many people down that chain [of contacts].” The number of people being contacted and made to self-isolate is far lower in the UK than in other countries – and it’s not clear how much they are self-isolating anyway. “There has been no data published on it, and we know it’s not happening”, says Susan Michie, professor of health psychology at University College London.

This is as much a socioeconomic issue as a medical one. “People who are financially unable to self-isolate for 14 days need to be incentivized to do so”, says Hopson – their lost earnings need to be covered by the government. He points out that some places with high levels of outbreak tend to have higher percentages of ethnic minority communities where English is not the first language, who are not always keen to interact with the state. This clearly needs sensitive handling – contact tracing must not seem “just a white middle-class operation”, he says.

Given the amount of preparation still to be done, many were alarmed by the news that Public Health England, the organization that oversees public healthcare within the Department of Health, is to be replaced by a new organization called the National Institute for Health Protection. This will bring the tasks of PHE under the same authority as NHS Test and Trace and the new pandemic data hub, the Joint Biosecurity Centre.

“The last thing we need is reorganisation on top of this”, said Birney in response to the news, which came as a surprise to many like him who are involved in preparedness. “Even if this was the ultimately best chess move for a future pandemic preparedness, there is no way doing it mid-pandemic is sensible.” More than 200 public-health professionals signed a letter to The Telegraph in which they declared themselves “deeply disturbed by the news of another top-down restructure of the English public health system, particularly mid-pandemic, and without any forewarning for staff.”

But Hopson is more sanguine, saying that the move won’t involve large-scale restructuring of jobs. “I can see why everybody is jumping up and down”, he says, “but the leaders say to us that this is not a restructure.” Everyone will carry on doing their existing jobs – “it’s just that there’s a new interim team at the top level to link the parts together and create better coordination between them. Having two different organizations doesn’t make a lot of sense. Putting them under one leadership team seems to us to make good sense.” Gilbert hopes that the new agency will make its data more widely available than PHE did, to help advance the science.

One of the biggest and most controversial issues for the autumn is the return of schools. While there is a broad consensus that getting pupils back must be a priority, this will inevitably raise the risk of spreading the virus. Although still too little is known about how readily this happens via children, there is some evidence now that secondary-school pupils can catch and pass on the virus much as adults do, and that primary-school children can do so even if they suffer only mild symptoms – probably about 15-20% of children infected have no symptoms, says Sanjay Patel of the Royal College of Paediatrics and Child Health.

There are encouraging signs that schools might not be a big source of infection, though. Sweden left schools open, and didn’t see lots of outbreaks or transmission, says Patel. Teachers didn’t have higher rates of infection either – indeed, their rates were lower than those of taxi drivers and supermarket workers.

“Schools have been working incredibly hard to try to get measures in place for opening in September”, says Patel. They will aim to keep pupils within small contact groups or “bubbles”, but this is much easier at primary than secondary level, where pupils change groups for different subjects and are less inclined to observe distancing rules. “If there’s an outbreak in a school, then sensible decisions need to be made about whether a bubble, a year, or a school needs to be closed”, says Patel.

He predicts that schooling “will be hugely disrupted for individual children and families, for bubbles and for year groups – there will be closures and outbreaks, and lots of children will be in and out of school.” Children of course get lots of coughs and colds over winter, and “those children will have to be excluded at once until they get a test result back. That means their parents will also have to isolate for that period.” But he hopes that regular seasonal viruses might themselves spread less because of the new measures.

“We have some really good plans in place for this winter”, he says. “We’ve learnt a lot from the first surge, and there’s absolutely no feeling of panic.”

But he adds that there’s no zero-risk option either. “The best way of protecting against outbreaks in school is to minimize the amount of infection in the community”, he says. This means compensating for school openings with restrictions elsewhere. At the moment, he says, it seems young people meeting in bars pose a far higher risk of spreading than schools. So “do we prioritize our ability to go and have a drink in the pub, or the future education of our children?”

“The government has done a lot wrong, but generally we’re making progress”, says Beggs. “The natural British constitution is to be a bit gloomy about our ability to do things, but if we could share all the achievements we’ve done in a more optimistic way, I think people would be more reassured.”

Ah, there’s the rub. Beggs is right to warn about the danger of trying to present everything in the worst possible light in order to discredit a government that performed so dismally in the initial outbreak (about which I’ve written elsewhere). This would be unhelpful, as well as unfair to the many authorities, scientists, health professionals and others who have worked so hard to improve the prospects. Yet the fact remains that the good work done on preparedness stands in stark contrast to the very public and very damaging missteps the government has taken and continues to take. The messaging is still confusing, even misleading: ministers (and some chief medical advisers) seem intent, for example, on stressing the low risk that Covid-19 poses to young children returning to school (so stop worrying, parents!), whereas the true danger there is about transmission through the population generally. Announcements of local lockdowns have been woefully mismanaged. The alarm about the reorganization of PHE was deepened by the appointment as its head of Dido Harding – who has no public health experience, has a terrible track record of managing the Test and Trace system, and is married to a Conservative peer. While contracts do have to be awarded swiftly in circumstances like these, without the delay of a drawn-out tendering process, too many seem to be going to companies with close connections to government and its advisers. Blunders like the exams fiasco (and the refusal of government to accept blame or consequences) undermine even further public trust in our leaders.

This issue of trust will be crucial. Imposing local lockdowns to contain hotspots, identifying contacts of people who test positive, and persuading them to self-isolate, would be a challenge at the best of times, and hinges on whether people understand what they are being asked to do and why, and whether they trust those making the rules. Studies have shown that public trust in the government has already been badly eroded, both by the mishandling and poor messaging of the first wave and by what many see as the betrayal of Dominic Cummings’ lockdown breaches. Scientific and public health systems can do all they can to prepare, but in the end so much will depend on leadership and execution. I have been encouraged by what I have heard about the preparations; about the leadership, I fear I remain gloomy.

Saturday, August 08, 2020

Music in lockdown


The images of people in Italian cities singing to one another from their balconies during the lockdowns to cope with the Covid-19 pandemic seem to come now from another, kinder era: before the enormity of the international crisis was fully apparent, before the death toll approached a quarter of a million and the sense of social unity had begun to fragment as politicians and others used the situation to sow and exploit division.

Here in Britain we considered that footage of balcony serenades to be gloriously Italianate, feeding a romantic national stereotype (even if the same thing later happened in Germany, Spain and Switzerland). But there was in truth something universal about this impulse to turn to music in times of crisis and catastrophe. It has happened everywhere as people struggle to cope with the fears and constraints of the pandemic, music offering a cathartic release much as it did when Leonard Slatkin, then chief conductor of the BBC Symphony Orchestra, turned to Samuel Barber’s Adagio for Strings to strike the right note at the usually celebratory Last Night of the Proms, days after the 9/11 terrorist attacks in 2001.

What makes music a good vehicle for this role? “In crises”, the German musicologist Gunter Kreutz has said, “music has a very strong function to balance people, and show them there is light at the end of the tunnel.” During the pandemic, he says, there were also initiatives involving small ensembles playing in front of care homes for elderly people. One obvious advantage music has in this respect is that it works very well as a socially distanced medium: a kind of communication and contact that does not demand physical proximity.

But there is more to it than that, some of which surely relates to the global use of music in ritual and worship. Unlike conversation, music is designed to be broadcast to groups: it allows everyone who hears it to feel addressed individually. That can be true to some extent for the spoken word too – the recital of a poem or sacred text, for example. But the deep value of music for promoting a sense of community, sacredness and emotional connection is precisely that it has no words – or perhaps, for those of us listening to the Italian balcony arias without understanding a word, that the words needn’t matter. Because music shares a great deal with spoken language – the rhythmic and pitch variations, the nested and episodic structure of phrases – it seems to carry meaning without actual semantic content. Each of us is free to create the meaning for ourselves.

At the same time, it penetrates directly to the emotions, partly by a kind of mimicry of human emotional expression but also by stimulating the neural reward pathways that respond to our subconscious anticipation of pattern and regularity. It is this powerful capacity of music to express what lies beyond words that led the cultural critic Walter Pater to declare that “all art constantly aspires towards the condition of music”. In an age of catastrophe, music becomes more indispensable than ever.

Tuesday, April 21, 2020

Three colours: Yellow

Jan van Huysum’s Flowers in a Terracotta Vase (1736) is a riot of floral colour, the equal of anything else by the Dutch flower painters of the seventeenth and early eighteenth centuries. But some of it looks decidedly odd. The leaves spilling out from among the bright blooms don’t look at all healthy, or indeed natural: they are more blue than green.


Jan van Huysum, Flowers in a Terracotta Vase (1736).

This is neither by intention nor by mistake. Simply, the yellow pigment that van Huysum mixed with blue to create his greens has faded. It was a common problem, noted even at the time: the English chemist and writer Robert Dossie observed in his Handmaid to the Arts (1758) that “The greens we are forced at present to compound from blue and yellow are seldom secure from flying or changing.”

Because artists did not then have a particularly vibrant green pigment that approached the colour of fresh vegetation, they often needed to resort to this mixing of primaries. But unless your primary pigments are bright and pure, such a mixture may become a little murky. Among the brightest of yellows were lake colours, meaning that the pigment was made from a water-soluble organic (plant- or animal-based) substance – basically a dye – fixed to the surface of fine particles of a white powder like chalk or ground eggshell. But organic dyes don't last well when exposed to light: the rays break up the colorant molecules, and the colour “flees”. (Even today pigments and dyes that are not colourfast are said to be “fugitive”.)
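
To see why impure primaries breed murky greens, here is a toy numerical sketch (my own illustration, with invented reflectance values – nothing from the painters’ manuals): treat each pigment as reflecting some fraction of red, green and blue light, and approximate a mixture by multiplying those fractions together.

```python
# Crude model of subtractive mixing: each pigment reflects a fraction of
# red, green and blue light, and a mixture reflects roughly the product
# of the two pigments' fractions. All values are invented for illustration.

def mix_subtractive(p1, p2):
    """Combine two pigments' (R, G, B) reflectances multiplicatively."""
    return tuple(round(a * b, 2) for a, b in zip(p1, p2))

bright_yellow = (1.00, 0.95, 0.10)  # reflects red and green, absorbs blue
faded_yellow  = (0.70, 0.60, 0.35)  # a degraded yellow "pink": duller, greyer
blue          = (0.10, 0.60, 0.95)  # reflects green and blue, absorbs red

print(mix_subtractive(bright_yellow, blue))  # ~(0.10, 0.57, 0.10): a clean green
print(mix_subtractive(faded_yellow, blue))   # ~(0.07, 0.36, 0.33): a murky grey-green
```

And if the yellow component goes on fading until it vanishes, what remains is simply the blue – which is just what has happened to the leaves in van Huysum’s vase.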

Technically these yellows were not exactly lakes, but pinks. Yes, it’s confusing: the word “pink” originally referred not to a pale reddish colour but to a class of pigments similar to lakes but made without the need for an alkali in the recipe. In the seventeenth century there were yellow pinks, green pinks, and light rose-coloured pinks. It is only because the last of these stayed in use for the longest that the term today denotes a hue.

The colorant used for yellow pinks was typically an extract of weld, broom or buckthorn berries. But one used these materials – as van Huysum discovered – at one’s own risk.

It’s not that artists didn't have alternative, more stable yellows available. But as with any colour, not all yellows are equal. Those that could be made from minerals or inorganic compounds produced artificially might last longer, but some were rather dirty or pale in their tint.

There was, for example, yellow ochre: a yellowish form of the iron oxide mineral that also comes in reds and browns. And if ochre today conjures up a brownish earth colour, that is because yellow ochre was in truth more brown than gold: fine for tawny hair, but not at all the thing for tulips or satin robes.

Then there was Naples yellow, as it was known from the seventeenth century: a pigment of rather variable composition, but generally made from synthetic compounds of tin, antimony and lead. The ancient Egyptians knew how to combine lead with antimony ore to make a yellow, and a natural mineral form of that compound (lead antimonate) was also used as an artists’ material. It could be found on the volcanic slopes of Mount Vesuvius, which is how it came to be associated with Naples. Other recipes for a yellow of similar appearance specified mixing the oxides of lead and tin. The ingredients weren’t always clear, in fact: when Italian medieval painters refer to giallorino, you can’t be sure whether they mean a lead-tin or a lead-antimony material, and it is unlikely that the painters themselves recognized much distinction. Before modern chemistry clarified matters from the late eighteenth century, names for pigments might refer to hue regardless of composition or origin, or vice versa. It could all be very confusing, and from a name alone you couldn’t always be sure quite what you were getting – or, for the historian today, quite what a painter of long ago was using or referring to.


The chemise of Jan Vermeer’s The Milkmaid (c.1658-61) is painted with a lead-tin yellow.

In some respects that’s still true now. A tube of modern “Naples yellow” won’t contain lead (rightly shunned for its toxicity) or antimony, but might be a mixture of titanium white and a chromium-based yellow, blended to mimic the colour of the traditional material. There’s no harm in that – on the contrary, the paint is likely to be not only less poisonous but more stable, not to mention cheaper. But examples like this show how wedded artists’ colours are to the traditions from which they emerged. When you’re talking about vermilion, Indian yellow, Vandyke brown, orpiment, the name is part of the allure, hinting at a deep and rich link to the Old Masters.

One thing is for sure: you won’t find the gorgeous orpiment yellow on the modern painter’s palette (unless perhaps they are consciously, and in this case rather hazardously, using archaic materials). It is a deep, golden yellow, finer than the Naples and lead-tin yellows. The name simply means “pigment of gold”, and the material goes back to ancient times: the Egyptians made it by grinding up a rare yellow mineral. But at least by the Middle Ages, the dangers of orpiment were well known. The Italian artist Cennino Cennini says in his handbook, written in the late fourteenth century, that it is “really poisonous”, and advises that you should “beware of soiling your mouth with it.” That’s because it contains arsenic: it is the chemical compound arsenic trisulphide. (A related arsenic sulphide, also found as a natural mineral, furnishes the pigment realgar, the only pure orange available to painters until the nineteenth century.)


Natural orpiment (arsenic sulphide).

Orpiment was one of those gorgeous but costly pigments imported to Europe from the East, in this case from Asia Minor. (In the early nineteenth century there were also imports from China, so that orpiment was sold in Britain as Chinese Yellow.) Such alluring imports often arrived through the great trading centre of Venice, and orpiment was hard to acquire in Northern Europe during the Middle Ages and the Renaissance – unless, like the German artist Lucas Cranach, who ran a pharmacy, you had specialist connections to exotic materials. Some orpiment was made not from the natural mineral but artificially, by the chemical manipulations of alchemists. This type can be spotted on old paintings today by studying the pigment particles under the microscope: those made artificially tend to be more uniform in size and have rounded grains. From the eighteenth century it was common to refer to this artificial orpiment as King’s Yellow. Rembrandt evidently had a supplier of the stuff, which has been identified in his Portrait of a Couple as Isaac and Rebecca (often called The Jewish Bride), painted around 1665.

If Dutch painters wanted a golden yellow like orpiment without the risk of poisoning, the Age of Empire supplied another option. From the seventeenth century, Dutch paintings (including those of Jan Vermeer) begin to feature a pigment known as Indian Yellow, brought from the subcontinent by the trading ships of Holland. It arrived in the form of balls of a dirty yellowish-green hue – though bright and untarnished in the middle – which bore the acrid tang of urine. What could this stuff be? Might it truly be made from urine in some way? Lurid speculation abounded: some said the key ingredient was the urine of snakes or camels, others that it was made from the urine of animals fed on the yellow Indian spice turmeric.

The mystery seemed to be solved in the late nineteenth century, when an Indian investigator making enquiries in Calcutta was directed to a village on the outskirts of the city of Monghyr in Bihar province, allegedly the sole source of the yellow material. Here, he reported, he found that a group of cattle owners would feed their livestock only on mango leaves. They collected the cows’ urine and heated it to precipitate a yellow solid which they pressed and dried into lumps.

The cows (so the story goes) were given no other source of nutrition and so were in poor health. (Mango leaves might also contain mildly toxic substances.) In India such lack of care for cattle was sacrilegious, and legislation effectively banned the production of Indian Yellow from the 1890s.


J. M. W. Turner was one of the nineteenth-century artists who made much use of Indian yellow.

There has been debate about how much of this story is true, but the basic outline seems to stand up: the pigment has a complicated chemical make-up, but it contains magnesium and calcium salts of euxanthic acid, a compound produced when substances in mango leaves are metabolized in the cow’s kidneys.

While artists were having to rely for brilliant yellows on fugitive plant extracts, deadly arsenic-laden powders and cows’ urine, one might fairly conclude that they would welcome better options. So it’s not hard to imagine the excitement of the French chemist Nicolas Louis Vauquelin when, at the start of the nineteenth century, he found he could make a vibrant yellow material by chemical alteration of a mineral from Siberia called crocoite.

This stuff was itself red – it was popularly called Siberian red lead, since there was indeed lead in it. But in 1797 Vauquelin found there was something else too: a metallic element that no one had seen before, which he named after the Greek word for colour, chroma: chromium.


“Siberian red lead”, a mineral source of chromium.

The name was aptly chosen, for Vauquelin soon discovered that chromium could produce compounds in various bright colours. Crocoite is a natural form of lead chromate, and when Vauquelin made this compound artificially in the laboratory, he found it could take on a bright yellow form. Depending on exactly how he made it, this material could range from a pale primrose yellow through deeper hues all the way to orange. Vauquelin suggested by 1804 that these compounds could serve as artists’ pigments, and they were already being used that way by the time he published his scientific report on them five years later.

The pigment was expensive, and remained so even when deposits of crocoite as a source of chromium were discovered also in France, Scotland and America. Chromium could also supply greens, most notably the pigment that became known as viridian and which was used avidly by the Impressionists and by Paul Cézanne.

The chromium colours played a major role in the explosion of prismatic colour during the nineteenth century – evident not just in Impressionism and its progeny (Neo-Impressionism, Fauvism and the work of van Gogh) but also in the paintings of J. M. W. Turner and the Pre-Raphaelites. After the muted and sometimes downright murky shades of earlier painting – think of Joshua Reynolds’ muddy portraits and the brownish foliage of Poussin and Watteau – it was as if the sun had come out and a rainbow arced across the sky. Sunlight itself, the post-Impressionist Georges Seurat declared, held a golden orange-yellow within it.

For their sun-kissed yellows, the Pre-Raphaelites and Impressionists did not need to rely on chromium alone. In 1817 the German chemist Friedrich Stromeyer noticed that zinc smelting produced a yellow-tinged by-product, in which he discovered another new metallic element; he called it cadmium, after the archaic term for zinc ore, cadmia. Two years later, while experimenting on the chemistry of this element, he found that it would combine with sulphur to make a particularly brilliant yellow – or, with some modification of the process, orange. By mid-century, as zinc smelting expanded and more of the by-product became available, these materials were offered for sale to artists as cadmium yellow and cadmium orange.


The artificial pigment cadmium yellow.

The cadmium colours have always remained rather expensive, though. Nothing really beats cadmium red, a variant that went on the market only around 1910. But it is typically around twice the price of other comparable reds, and the same goes for cadmium yellow. In that respect things have not changed so much since a Renaissance artist had to weigh the worth of acquiring expensive orpiment against the drabber but much cheaper Naples yellow.

There’s a lesson in the cadmium pigments that applies to all colours, through all ages: they have often been by-products of some other chemical process altogether, discovered serendipitously as chemists and technologists pursued other goals – to make ointments, say, or soap, glass or metals.

It’s no different now. If you buy a tube labelled “Indian Yellow”, you can be sure no mangoes or urine went into its making. Chances are it will contain a yellow pigment that goes by the unromantic name of PY (pigment yellow) 139 – no mineral or metal salt, but a complicated organic molecule, “organic” today meaning that it is carbon-based and resembles molecules found in some living organisms. Chemists will say that it is a “derivative of isoindoline”, but the key point is that at its core is a so-called benzene ring: six carbon atoms joined in a hexagon.

That’s a clue to the true heritage of these modern organic pigments. Pure benzene, as well as other molecules closer still in shape and structure to those of PY139, was first isolated in the early nineteenth century from coal tar, the black tarry residue left over from the industrial production of gas from coal for gas lighting. Coal tar has a pungent smell – think of traditional coal-tar soap, which contained disinfectant compounds distilled from it. This is because it is full of molecules with benzene rings at their core, which tend to be aromatic. (Chemists use that word simply to signify that benzene rings are present, irrespective of smell.) In the mid-nineteenth century the German chemist August Hofmann, the leading expert on aromatic coal-tar compounds, set his young English student William Perkin the challenge of trying to make the anti-malarial drug quinine from coal-tar extracts. Perkin didn’t succeed, but he found instead that he had made a rich purple substance, which he called aniline mauve and began to sell as a dye. That was the beginning of the synthetic-dye industry, which gave rise to the modern era of industrial chemistry: by the early twentieth century, dye manufacturers were starting to diversify into pharmaceuticals and then plastics.

This is the world from which PY139 comes, along with a host of other organic pigments that mimic the old traditional colours with safer, cheaper compounds – many of them used also as food colorants, dyes and inks. One of the first offshoots of the aniline dyes was a yellow, simply called aniline yellow and belonging to an important class of colorants called azo dyes; it was sold commercially from 1863. There is a good chance that, when you see yellow plastic products today, they are coloured with azo dyes.


Winsor & Newton’s azo yellow.

It seems a deeply unglamorous way to brighten the world, compared with the age of King’s Yellow, saffron and Indian Yellow. It can feel as though what is saved in the purse is sacrificed in romance. Maybe so. But artists are typically pragmatic people, as eager for novelty as they are attached to tradition. There has never been a time when they have not avidly seized on new sources of colour as soon as those appeared, nor one in which they have not relied on chemistry to generate them. The collaboration of art and science, craft and commerce, chance and design, is as vibrant as ever.