Saturday, February 29, 2020

How you hear the words of songs

This is my latest column for the Italian science magazine Sapere.

________________________________________________________________



The distinctions between song and spoken word have always been fuzzy. There’s musicality of rhythm and rhyme in poetry, and some researchers think the origins of song merge with those of verse in oral traditions for passing on stories and knowledge. Many musical stylings lie on the continuum between melodic singing and spoken recitation, ranging from the quasi-melodic recitative of traditional opera and the almost pitchless Sprechstimme technique introduced by Schoenberg in Pierrot Lunaire and taken up by Berg in the opera Lulu, to the Beat poetics of Tom Waits and the rapid-fire wordplay of rap.

It’s also well established that the cognitive processing of music and language shares resources in the brain. For example, the same distinctive pattern of brain activity appears in the language-processing region called Broca’s area whether we hear a violation of linguistic syntax or a violation of the ‘normal’ rules of chord progressions. Yet the brain appears to use quite different regions to decode speech and sung melody: to a large extent, it categorizes them as different kinds of auditory input, and analyses them in different ways.

To a first approximation, speech is mostly processed in the left hemisphere of the brain, while melody is sent to the right hemisphere. Philippe Albouy and colleagues, working in the lab of leading music cognitive scientist Robert Zatorre at McGill University in Montreal, have now figured out how that processing differs in detail: what the brain seems to be looking for in each case. They asked a professional composer to generate ten new melodies, to each of which they set ten sentences, creating a total of 100 “songs” that a professional singer then recorded unaccompanied.

They played these recordings to 49 participants while altering the sound to degrade its information. In some cases they scrambled details of timing, so the words sounded slurred or indistinct. In others they filtered the sound to alter the acoustic frequencies (spectra), giving the songs a robotic, “metallic” quality. Participants were played an untreated song followed by the pair of altered versions, and were asked to focus either on the words or the melody.
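The two kinds of degradation are easy to sketch in code. Below is a minimal illustration – emphatically not the researchers’ actual pipeline, and the function names are invented for this sketch – of how one might smear temporal detail (by reversing short segments of the waveform) or blur spectral detail (by averaging the magnitude spectrum into coarse bands) in a mono signal.

```python
# A minimal sketch of the two degradations described above, for a mono
# signal x at sample rate sr. Illustration only: this is not the method
# used in the study, and the function names are invented here.
import numpy as np

def degrade_temporal(x, sr, window_ms=50):
    """Smear fine timing by reversing successive short segments of the
    waveform; the longer-scale pitch contour survives, consonants do not."""
    w = max(1, int(sr * window_ms / 1000))
    out = x.copy()
    for i in range(0, len(out), w):
        out[i:i + w] = out[i:i + w][::-1]
    return out

def degrade_spectral(x, n_bands=8):
    """Blur spectral detail by averaging the magnitude spectrum within a
    few coarse bands, keeping the original phases (timing) intact."""
    X = np.fft.rfft(x)
    mag, phase = np.abs(X), np.angle(X)
    edges = np.linspace(0, mag.size, n_bands + 1, dtype=int)
    for lo, hi in zip(edges[:-1], edges[1:]):
        if hi > lo:
            mag[lo:hi] = mag[lo:hi].mean()
    return np.fft.irfft(mag * np.exp(1j * phase), n=len(x))
```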

For the songs where timing details were altered, the melodies remained recognizable but not the words. With spectral manipulation, the reverse was true: people could make out the words but not the tune. So it seems that the speech-processing brain looks for temporal cues to decode the sound, whereas for melody-processing it’s the spectral content that matters more. Albouy and colleagues confirmed, using functional MRI for brain imaging, that the temporal and spectral manipulations altered activity in the auditory cortex on the left and right of the brain respectively.

Importantly, this doesn’t mean that the brain sends the signal one way for song and the other for speech. Both sides are working together in both cases, for otherwise we couldn't make out the lyrics of songs or the prosody – the meaningful rise and fall in pitch – of speech. The musical brain is integrated, but delegates roles according to need.

Sunday, January 05, 2020

Was Dracula gay?

The mostly rather splendid adaptation of Bram Stoker’s Dracula just screened by the BBC prompts me to post here this short edited extract from my forthcoming book The Modern Myths: Adventures in the Machinery of the Popular Imagination.

The BBC Dracula excited much comment, some of it affronted and outraged, in its portrayal of the Count as bisexual. I thought it might be useful to explain, then, how and why gay sexuality is a central theme in Dracula.

If you like the sound of this piece, please do feel free to advertise it far and wide. My book, which ranges from Robinson Crusoe to Batman, and which touches on (among other things) zombies, werewolves, superheroes, aliens and UFOs, psychoanalysis, incest and perversion, Judge Dredd, Jane Austen, J. G. Ballard, J. M. Coetzee, and the end of the world, was not deemed terribly interesting (or sciencey enough) by most UK publishers, so forgive me for having to promote it shamelessly from now until publication.

___________________________________________________________________________

First and foremost, “the vampire is an erotic creation”, according to the Italian writer Ornella Volta: “The vampire can violate all taboos and achieve what is most forbidden.” Those taboos surely include, inter alia, dominance and submission, rape, sadomasochism, bestiality and homoeroticism. Let’s throw in masturbation and incest too: cultural critic Christopher Frayling sees in the voluptuous nightly visitation of a being who leaves the victim in a swoon and depleted of vital fluids the imprint of erotic dreams and nocturnal emissions; while for Freudian psychoanalyst Ernest Jones, writing in 1910, the vampire expressed “infantile incestuous wishes that have been only imperfectly overcome.”

What makes Dracula so compelling and potent is that its sexual work is done largely unconsciously. If Bram Stoker’s great-nephew and biographer Daniel Farson is to be believed (which is by no means always the case), the author “was unaware of the sexuality inherent in Dracula”. Which is why gothic scholar David Skal is right to suggest that we can read the book today as “the sexual fever-dream of a middle-class Victorian man, a frightened dialogue between demonism and desire.” He calls Dracula “one of the most obsessional texts of all time, a black hole of the imagination”.

For Frayling, the book “was probably transgressing something – but the critics weren’t quite sure exactly what.” That is probably because the author was not sure either. On the face of things, Stoker was an eminently respectable late Victorian – and we know well enough today not to trust that persona an inch. For what he produced in Dracula was aptly described by English writer and critic Maurice Richardson as “a kind of incestuous, necrophilious [sic], oral-anal-sadistic all-in wrestling match”. Which means (I am not being entirely glib here) that it had something for everyone.

Stoker was born and raised in Dublin to a Protestant family clinging to the lower rungs of the middle classes. Respectability mattered to him; he had the judgemental morality of the socially precarious, for example proclaiming after a visit to America that the hobos and tramps there should be branded and sent to labour colonies to learn what hard work was. His rather shrill worship of “manliness” and conventional views on gender roles look now like aspects of a determined act of self-deception.

Still, you can’t fault his work ethic. Stoker never seemed to feel that one full-time occupation need preclude others, and so even while he was studying mathematics at Trinity College Dublin he took a job in the civil service, established himself as a theatre critic, and in 1873 accepted the editorship of the Irish Echo, where he was essentially the only member of staff. (Somehow he still got his degree.)

In 1867 he saw a production of Sheridan’s The Rivals starring the actor Henry Irving. Stoker was smitten, and when Irving, by then actor-manager at the Lyceum Theatre in London, returned to Dublin in 1876, Stoker’s reviews of his production of Hamlet were so effusive that the acutely vain actor invited the young critic to dinner, at which he treated Stoker to a rendition of a melodramatic poem. It’s all too easy (given accounts of his acting technique) to imagine Irving reciting it with self-adoring hamminess, but Stoker attested that he was reduced to “a violent fit of hysterics” (at that time a stereotypically womanly response – he’s not talking about laughter). His devotion was sealed. “Soul had looked into soul!” Stoker wrote. “From that hour began a friendship as profound, as close, as lasting as can be between two men.” At the end of 1878 Irving asked him to come to London as his front-of-house manager.

It’s easy to see why Count Dracula is often said to be based on Irving, even though there is not really any compelling reason to think so. For the man truly was a sort of monster (mainly of egotism); he could be charming, but also cruel, haughty and callous. And he drained Stoker dry, treating him like a servant if not indeed a slave, and exploiting his house manager’s blind devotion to make outrageous demands at all times of day and week.

But even if we accept that Stoker’s arrogant, domineering Count Dracula was not formulated as an act of surreptitious revenge on Irving, he still seems to have hoped the actor might play the role on stage. That never looked likely to happen. After an informal reading of Stoker’s stage treatment of the book at the Lyceum shortly after its publication, Irving was reported to have walked out muttering “Dreadful!”

That Stoker not only bore all these demands and rebuffs but sustained his adoration of Irving regardless seems to speak of more than just hero worship. He had a masochistic infatuation with Irving that persisted until the actor died, a feeling stronger than any he showed for his wife and family. Stoker’s friend, the Manx writer Hall Caine, attested that “I have never seen, nor do I expect to see, such absorption of one man’s life in the life of another.” Stoker’s feeling for Irving, he added, was “the strongest love that man may feel for man.” (There was possibly a homoerotic element too in Stoker’s closeness to Caine. Highly-strung and flamboyant, with a thinning mane and piercing stare, Caine was the “Hommy-Beg” – his Manx nickname – to whom Stoker dedicated Dracula.)

While studying at Trinity in the early 1870s, Stoker became fixated on the American poet Walt Whitman, who is now widely considered to have had homoerotic (if not necessarily homosexual) relationships. Stoker wrote Whitman an impassioned letter, full of ambiguous remarks about his own sexuality: “How sweet a thing it is for a strong healthy man with a woman’s eyes and a child’s wishes to feel that he can speak to a man who can be if he wishes father, and brother and wife to his soul.” In the Whitmanesque poem he wrote in 1874, “Too Easily Won”, he speaks of the anguish of being rejected by another man: “His heart when he was sad & lone/Beat like an echo to mine own/But when he knew I loved him well/His ardour fell.” Stoker met Whitman during an American tour with Irving in 1884, and the two men talked warmly.

Whether all this means we can consider Stoker a repressed homosexual is a complicated question, however. Only in his day were the boundaries of sexuality becoming fixed – indeed, it was then that the word “homosexual” was coined. In the same year Dracula was published, Sexual Inversion by Havelock Ellis and John Addington Symonds argued that same-sex desire was a common state of affairs with a long history, albeit an “inversion” of the norm. Before this medicalization of sexuality, it simply had no role as a label of identity, and distinctions between mutual affection and sexual desire between men were barely scrutinized.

Homosexual preference wasn’t seen as incompatible with heterosexual marriage, for the objectives of the two types of relationship were quite different. What a wife might bring to a man who loved men was almost as much a matter of aesthetic balance as of social respectability. Love wasn’t necessary to balance that equation. Just as Oscar Wilde’s marriage can’t simply be considered a sham for the sake of social conformity, no more can Stoker’s. The comparison could hardly be more apt anyway, for both men (whose families were acquainted in Ireland) courted the same woman: the “Irish beauty” Florence Balcombe. She chose Stoker but got little joy from it; the marriage was said to be devoid of passion, and Bram only ever speaks of ‘love’ and ‘loving’ in the context of his male relationships. Poor Florence was left at home to mind the children while her husband worked long nights at the Lyceum. At the end of her life she hinted at regrets that she hadn’t opted for Wilde after all, in spite of what had befallen him.

But Oscar’s flamboyance seems to have made Bram wary of his acquaintance when the two men lived in London: they remained in uneasy contact, but Stoker never once mentioned Wilde in his letters. He must have watched with horror as Oscar’s public disgrace unfolded in 1895 in the fateful libel trial he brought against the Marquess of Queensberry, provoked by Wilde’s relationship with the marquess’s son Lord Alfred Douglas. After Wilde had been convicted, his brother Willie wrote to Stoker saying “poor Oscar was not as bad as people thought him.” It isn’t clear Bram ever replied.

In many ways Wilde’s The Picture of Dorian Gray represents his more metaphorical take on the vampire myth. Here again is the corrupt, Byronic aristocrat (Lord Henry Wotton) who saps the life-force from those around him. Wilde’s book is far superior in literary terms, and provides a more nuanced and imaginative perspective on vampirism. He was also much more in conscious control of his material, especially its homoerotic subtext. And this is precisely why Dorian Gray has entered the modern literary canon but is not a modern myth.

*

Given what we know of Stoker, we might expect the subliminal eroticism of Dracula to be slanted towards male same-sex relations. In that respect it doesn’t disappoint. Whether or not, as cultural historian Nina Auerbach claims, Dracula “was fed by Wilde’s fall”, the book positively throbs with frantically suppressed homoeroticism. The Count dismisses his predatory “brides”, as they prepare to violate Harker, with the command “This man belongs to me!” F. W. Murnau, the director of the iconic 1922 film adaptation Nosferatu, was himself gay, and seems alert to Stoker’s unconscious subtext when his Dracula figure Graf Orlok licks blood from the finger of his bewildered guest after the young man cuts himself shaving. (We should be cautious about ascribing explicit intent, however; Murnau used a screenwriter.)

Yet while Dracula has sometimes been presented as a model of queer identity, it hardly seems a positive portrayal. He embodies everything perceived to be “bad” about homosexuality: all that Stoker may have sensed, feared and loathed in himself. Like vampirism, homosexuality was at that time becoming a deplorable “condition” from which one needed to be rescued.

The heroes of the novel do their best to show how. Contrasting with the vile lechery of the Count as he looms over Harker, and the young man’s fascinated disgust as he discovers the vampire’s blood-gorged body in the crypt of his castle, the camaraderie among the band of men who vanquish the vampire models Stoker’s own solution to his predicament. As Auerbach says, the book “abounds in overwrought protestations of friendship among the men, who breathlessly testify to each other’s manhood.” Over them all presides the fatherly, Whitman-like figure of Van Helsing, who assures Arthur Holmwood that “I have grown to love you – yes, my dear boy, to love you.” For all his prejudice and dissembling, one can’t help feel for the anguished Stoker here, seemingly so desperate to neutralize and normalize his feelings.

The homoerotic allure of the vampire had been explored, with far less inhibition and repression, three years before Dracula was published, by the Slavic aristocrat Eric Stenbock. His short story “A True Story of a Vampire” records the recollection by an old woman called Carmela who lives in a castle in Styria – an obvious reference to Joseph Sheridan Le Fanu’s homoerotic vampire novella Carmilla (1872) – of an episode from her youth. A mysterious Hungarian guest called Count Vardalek arrives at the castle and enthralls Carmela’s young brother Gabriel. We can guess well enough from his appearance what the Count has in mind: “He was rather tall with fair wavy hair, rather long, which accentuated a certain effeminacy about his smooth face.”

Gabriel pines when Vardalek, now welcomed into the family home, has to go away on trips to Trieste. “Vardalek always returned looking much older, wan, and weary. Gabriel would rush to meet him, and kiss him on the mouth. Then he gave a slight shiver: and after a little while began to look quite young again.”

But Gabriel, previously so healthy, starts to succumb to a mysterious illness. On his deathbed, “Gabriel stretched out his arms spasmodically, and put them round Vardalek's neck. This was the only movement he had made, for some time. Vardalek bent down and kissed him on the lips.” When the boy dies, the Count leaves, never to be seen by the family again.

A poet as well as an author of fantastical tales, Stenbock was more Wildean than Wilde, and made no bones about it. He conducted many homosexual relationships while studying at Oxford, and was said to have gone everywhere with a life-sized doll that he called his son. By all accounts he was eccentric, morbid and perverse – qualities he put to good use in his vampire story.

The gay subtext of Dracula was pursued in The House of the Vampire (1907) by the writer George Sylvester Viereck. A complex and controversial figure, Viereck met with Adolf Hitler in Germany in the 1930s, and was imprisoned in the United States in 1942 because of his Nazi sympathies – specifically for failing to register as an agent of the German government. His book relates the story of Reginald Clarke, a dissipated Henry Wotton figure who draws young men into a web of corruption. He represents a popular character type of the early twentieth century: a psychic vampire, who absorbs the energy and creativity of his victims. “Your vampires suck blood”, cries one of his victims, “but Reginald, if vampire he be, preys upon the soul.” Viereck’s description of Clarke’s fate leaves little doubt that the character was modeled on Wilde himself:

“Many years later, when the vultures of misfortune had swooped down upon him, and his name was no longer mentioned without a sneer, he was still remembered in New York drawing rooms as the man who brought to perfection the art of talking.”

Clarke’s vampirism is portrayed here not as a vile perversion but as the right of a superior being to draw the life force from his inferiors. True to his fascistic leanings, Viereck wrote that “My vampire is the Overman of Nietzsche. He is justified in the pilfering of other men’s brains.” He is a revival of the Byronic vampire, a creature of taste and refinement.

It’s puzzling why gay men like Stenbock, Wilde and Viereck chose to reiterate the widespread association of homosexuality with the desecration of youthful innocence. Yet the theme of corruption being spread via body fluids during illicit, decadent sexualized embraces would, in Stoker’s time, have resonated with fears about syphilis. Science-fiction writer Brian Aldiss considers Dracula “the great nineteenth-century syphilis novel”, although Oscar Wilde’s biographer Richard Ellmann argues that this was also the real subject of The Picture of Dorian Gray. The disease announced itself with lesions on the body, much like the bodily symptoms of vampirism for which the characters in Dracula search their skin. It has been suggested that Stoker died from syphilis contracted from a prostitute; this seems unlikely, although that is his fate in Aldiss’s metafictional Dracula Unbound (1991).

Aldiss’s book was published when an entirely new sexually transmitted disease had grown to epidemic proportions: AIDS, for which vampire mythology seems unnervingly tailored as an allegory. There is the infection by blood, the enervated state of the victims, and also the suggestion pushed by homophobic media reports that AIDS was associated with perverse sexual behaviour. As is often the way with modern myths, there was already a new version available to explore the metaphor: Anne Rice’s “Vampire Chronicles”, beginning with Interview with the Vampire (1976), and followed by The Vampire Lestat (1985) and The Queen of the Damned (1988). The critical reception and public response were much more positive for the latter two novels than for the first, and perhaps that reflected how society had changed – not just because the strength of the gay rights movement by the early 1980s had created an environment more receptive to the themes, but because the precarious existence of Rice’s vampires seemed to speak to the devastating effects of AIDS in the gay community. “I happened to encounter Interview with the Vampire at the height of the AIDS epidemic”, writes author Audrey Niffenegger,

“…and it seems in retrospect to be a prescient book. Anne Rice could not have known how her creation would resonate with a world in which blood itself was dangerous, in which male homosexuality and death became closely entwined.”

Although Rice was comfortable with such interpretations, she denied having any intention of making her Vampire Chronicles a gay allegory. But the homoeroticism is plain to see, and the vampires Louis and Lestat live for years like a gay couple with their young adoptive daughter Claudia. That analogy arguably does gay parents no favours, however, since the prospect of Claudia maturing into an adult woman within the body of a perpetually young girl makes for some of the most disturbing – and, it must be said, mythopoeic – material in Interview with the Vampire. “You’re spoiled because you’re an only child”, Lestat tells her, to which she languidly responds, “I suppose we could people the world with vampires, the three of us.” By daring to voice such things, Rice gives the vampire myth a genuinely contemporary infusion, the horror element for once being far less significant than the capacity to provoke unease.

Sunday, December 29, 2019

Rise of the vacuum airship

Sorry folks, I had to take the full story down - it violates New Scientist's rights agreement, which was entirely my oversight. The published version of the article is available to NS subscribers here.

Thursday, November 07, 2019

The City is the City

My brief from the wonderfully named Dream Adoption Society of the Zbigniew Raszewski Theatre Institute in Warsaw – for their 2019 exhibition The City is the City (the allusion to China Miéville is intended) – was to express a dream of the utopian city of the future. I’m not sure I did that, but here is what I gave them.

________________________________________________________________________

When trying to imagine the future, I tend to look back to the past. What we can find there are not answers but reasons to be humble.

It’s one thing to laugh at how wide of the truth the forecasts of a century or so ago were about the warp and weft of life today: all those moonbases, jet packs, flying cars. But it is more useful to think about why they were wrong.

The finest example, in many respects, of a vision of the future city in the late nineteenth century was supplied by the French author and illustrator Albert Robida in his books The Twentieth Century (1882) and its sequel The Electric Life (1892). Set in the 1950s, the first book follows a young Parisian woman called Hélène Colobry as she goes about her life as a recent law graduate; in the second we meet engineer Philoxène Lorris and his son Georges. In a series of glorious illustrations, Robida shows us a world of electric light, interactive televisions (“telephonoscopes”), airborne rocket-shaped cars and dirigibles, all in a style that is the very epitome of steampunk – and not a bit like the way things turned out.



The Parisians who populate Robida’s world could have stepped straight out of the fin de siècle, all elegant hats and parasols. And while the skies swarm with vehicles, the city below is architecturally recognizable as the Paris of Robida’s time. Our first inclination might be to read this as anachronistic – but wait, isn’t Paris indeed still that way now, with its art nouveau Metro stations and its Haussmann boulevards? So Robida is both “wrong” and “right”: he didn’t anticipate what was coming, but he reminds us that cities, and the entire texture of life, are palimpsests where traces of the past going back decades, centuries, even millennia, coexist with the most up-to-the-minute modernity.

More than that: the devices of modernity have built into them a visual and conceptual continuity with the past, for how else could we at first have navigated them? The joke has it that a young person, seeing for the first time a real floppy disk, exclaims “Hey, you’ve 3D-printed the Save icon!” I’ve no idea if this was ever actually said, but it is inadvertently eloquent as well as funny.

Thus forewarned, let us stroll into the utopian city – and discover that, as ever, it reflects our own image, our fantasies and fears, our current, compromised, patchwork technologies. This place is after all where we live here and now, but allowed to have grown and morphed in proportion to our old obsessions and habits, disguised with a veneer of synthetic futures. We have walked a circle and re-entered the present from another direction.

*

Utopia is an invention of the Renaissance, and ever since the quasi-theocracies imagined by Thomas More and Francis Bacon it has been bound up with the city and the city-state. In Tommaso Campanella’s The City of the Sun (1623), the philosophical and political foundations of his utopia are inseparable from the fabric of his city with its seven concentric walls: a design that, like the Gothic cathedrals of the Middle Ages, represented the construction of the entire (now Copernican) cosmos. The very walls have a pedagogical function, covered with pictures and diagrams that illustrate aspects of astronomy, mathematics, natural history and other sciences.


Palmanova in northern Italy has a radial design echoed in Tommaso Campanella’s utopian City of the Sun, reflecting the political ideals of social order and harmony.

For producing this vision, Campanella suffered 27 years of imprisonment and torture – reminding us that, when they began, utopian cities of the future were not forecasts of what technology might deliver but statements of political intent.

And, I can hear urban theorists sigh, when was a city ever not a statement of political intent? Cities speak about the societies that build them. The rich man in his high castle, the poor man at his gate – traditionally, real cities have symbolized not the heavens but the hierarchies here on earth. As the brutalist concrete modernism of housing complexes near my home in south London is slowly demolished, I see a failed experiment not just in architecture but also in social philosophy – just, indeed, as was the case when those rectilinear grey hulks of the 1950s and 60s replaced the Victorian slums that stood there before. And no one doubts that the disappearance of the hutongs of Beijing before the march of high-rise, daringly asymmetrical steel and glass makes a statement about what China is determined to leave behind and what it aspires to become.

So while my instinct, as an avid follower of trends in self-organization, complexity and new materials, is to bring science and technology to bear on the question of utopian urbanism (and I’ll get to that), I am reluctant to say a word on such matters before admitting that this question is primarily bound up with politics and demographics.

Not that I want first to make predictions about that; at this particular moment in history I would hesitate to forecast the politics of next week. Rather, I want to acknowledge that whatever fantasies (that is all they will be) I spin, they have to build on some kind of social philosophy before we think about the fabric.

But this is more complicated than it used to be, and the reason why is partly technological. One of the interesting aspects of Robida’s drawings is that his skies above the cityscape are sometimes a dense web of telephone wires. He evidently felt that whatever the twentieth-century city might look like, communication and information networks would be important to it.

Now those wires are disappearing. Why? First, because they began to run underground, in optical fibres able to pack a far greater density of information into a narrow channel, encoded in pulses of light. But increasingly it is because the wires have become virtual: the networks are wireless.

“Wifi” is like that Save icon: a ghost of past technology condensed into an avatar of modernity. You need to be rather old to see it as anything more than a “dead metaphor”, meaning that it now stands only for itself and its etymological roots have themselves become irrelevant and invisible. Older readers, as the phrase goes, will hear the echo of “hifi”, which the term was coined to imitate: high fidelity, referring to the high-quality reproduction of sound in home audio systems, or more generally, to the superior conveyance of (audio) information. The “wi”, of course, is “wireless”, which harks back to the miracle that was radio. By means that many people considered semi-magical in the 1920s and 30s, sound and information could be broadcast through the air as radio waves rather than along transmission wires.

This seemed like an occult process, and indeed was initially thought by some to be allied to spiritualism and mediumship: the “ether” that was seen as the material medium of radio waves was suspected of also being a bridge between the living and the dead. When television arrived – so that you could not only hear but even see a person hundreds of miles away – the mystical aura of “wireless” technology only increased.

What has this to do with the city of the future? It illustrates that new technologies, especially of communication, have psychic implications as well as infrastructural ones. Even with a web of wires as dense as Robida’s, no one would have imagined a future in which you can sit in a coffee shop and, with a slab of glass and silicon held in your hand, tap instantly into more or less the sum total of existing human knowledge: to read in facsimile Isaac Newton’s original Principia, or watch in real time images of a spaceship landing on an asteroid. No one imagined that, thanks to technological innovation, we would in 2018 be producing as much data every two days as we produced throughout all of human existence until 2003.

And the truly astonishing thing is that this seems normal. More, it is regarded now almost as a human right, so that we are irritated to find ourselves in an urban space where every cubic centimetre of empty space is not animated by this invisible and ever expanding information flow.

Why in heaven’s name should we be expected to make sense of this situation as well as simply to exploit it? From a perceptual point of view, wifi wrecks spacetime. These ten square centimetres of reality are no longer where I am sitting in (say) Starbucks on Euston Road, but are the living room of my sister in Canberra, with whom I am chatting on Skype. Do you think it is just coincidence that the ways we interact with information technology are often indistinguishable from the symptoms of psychosis (and I’m not just talking about the associated addictions and other dysfunctional behaviours)?

Add to this now the possibility that even what might seem like the concrete existence of your own immediate environment can be tinkered with, overlain with the metadata of augmented reality. How then are we supposed to police the borders of virtual and real? Is it even clear what the distinction means? But if it is not, then who decides? And who decides where in that space of possibilities “normality” lies?

So look: a utopian city of the future must recognize that there will be more technologies like this, and also that people will adapt to them without ever quite processing them psychically. One thing Robida’s future citizens are never doing is sitting in their public-transport dirigibles staring at little black tablets in their palm, and frowning at them, laughing or weeping at them, talking to them. To Robida’s readers that would have made no sense at all.

It’s an easy matter to see how technologies and networks of information have changed our lives and built environments. Ever more people can work from home, for one thing – and the proper way to say this is that the boundaries of work and domesticity have become porous or almost invisible. What is perhaps more striking is how these technologies have been assimilated by, and altered, life in places far removed from the centres of modern development: rural sub-Saharan Africa, the plains of Mongolia. A weather app is handy if you want to know whether to take your umbrella with you in Paris; it is rather more than that if you are a farmer in Kenya.

It is precisely this importance of information that makes it a currency of political and economic power. Increasingly, indices of development include wifi access and screens per capita. Censorship of information technologies has become a significant means not just of social control but of employment in some countries; democracy is struggling (and failing) to keep pace with the tools that exist for manipulating opinion and distorting facts. As professor of communication John Culkin famously said, “We shape our tools, and thereafter our tools shape us.”

*

The struggle between, say, the Chinese authorities’ efforts to censor the web and users’ efforts to evade them is, in an Orwellian sort of way, a metaphor for the tensions that exist in any complex adaptive system that unfolds in a social context. It is a dance between attempts at centralized control and design and the tendency of such systems to grow of their own accord. Both the internet and cities are often presented as exemplars of human constructs that no one designed, although of course the truth is that design and planning simply have limited impact. Christopher Wren’s orderly, utopian vision of London after the Great Fire of 1666 was never realized because of the city’s irrepressible urge to reform itself – with all the attendant chaos – while the embers were scarcely cool.

Perhaps the central revelation of the scientific study of complex adaptive systems is that this spontaneous growth is not merely chaotic and random, but follows particular law-like regularities – albeit ones quite unlike the geometric designs of Campanella and Wren, and which more closely resemble, and often exactly reproduce, the growth laws of living organisms.


The growth and form of cities like London resemble those of natural processes, like fluid flow through porous rock or the spread of a bacterial colony.

These regularities exist not because the agents responsible for growth – the people who build new roads and houses, say – are so intelligent, but precisely because their intelligence is, in this context at least, so limited. It’s not very clear what kinds of structures intelligent agents create when they are exerting their full cognitive capacities – the question is less studied theoretically – but there is some chance that they might be either too complex for any laws to be apparent at all, or totally random (the two could of course be indistinguishable). But when agents have very constrained cognition – when they act according to rather simple laws – then complex but nonetheless rather predictable and law-like group behaviour emerges. Cities, for example, show so-called scaling laws in which everything ranging from their crime rate to their innovative capacity, and even the speed of walking, varies with size according to a rather simple mathematical relation. They grow in a manner similar to tumours and snowflakes: they look like natural phenomena. Ants, wasps and termites are by no means cognitively sophisticated, but their social structures and even their architecture – their nests and mounds – certainly can be.


The complex architecture of tunnels in a termite mound, revealed here by taking a plaster cast. Photo: Rupert Soar, Loughborough University.
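To make the “rather simple mathematical relation” above concrete: urban scaling studies report power laws of the form Y = Y0·N^β, where N is a city’s population and the exponent β comes out at around 1.15 for socio-economic quantities such as wages or patents (and below 1 for infrastructure). Here is a toy sketch, on synthetic data, of how such an exponent is extracted:

```python
# Toy sketch of an urban scaling law, Y = Y0 * N**beta, recovered by
# log-log regression. The data are synthetic; exponents of about 1.15
# (superlinear) are what studies report for socio-economic outputs.
import numpy as np

rng = np.random.default_rng(1)
N = np.logspace(4, 7, 200)                             # city populations
Y = 0.01 * N**1.15 * rng.lognormal(0.0, 0.2, N.size)   # e.g. total wages, with scatter

beta, log_Y0 = np.polyfit(np.log(N), np.log(Y), 1)
print(f"fitted scaling exponent: {beta:.2f}")          # close to 1.15
```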

This doesn’t mean that humans are cognitively simple too (although who knows where that scale starts and ends?). Rather, our social systems inherently reduce the amount of cognition needed, and perhaps have evolved precisely in order to do so. It’s why we have traditions, conventions, norms and taboos. It’s why we have traffic lanes, speed limits, highway codes. (Traffic is a particularly clear example of complex behaviour, such as waves of stop-start jamming, emerging from agents interacting through simple rules.)
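That claim about traffic can be demonstrated in a few lines. The sketch below implements the classic Nagel–Schreckenberg cellular automaton – my choice of illustrative model, not one named above – in which each car follows three simple local rules, yet at high enough density, stop-and-go jam waves emerge with no central cause:

```python
# Nagel-Schreckenberg traffic model: cars on a one-lane ring follow three
# simple local rules, and stop-and-go jams emerge at high density.
import numpy as np

def nagel_schreckenberg(n_cells=100, n_cars=35, v_max=5, p_slow=0.3,
                        steps=200, seed=0):
    rng = np.random.default_rng(seed)
    pos = np.sort(rng.choice(n_cells, n_cars, replace=False))  # positions on the ring
    vel = np.zeros(n_cars, dtype=int)
    for _ in range(steps):
        gaps = (np.roll(pos, -1) - pos - 1) % n_cells      # empty cells to the car ahead
        vel = np.minimum(vel + 1, v_max)                   # 1. accelerate towards v_max
        vel = np.minimum(vel, gaps)                        # 2. brake to avoid collision
        slow = rng.random(n_cars) < p_slow
        vel = np.where(slow, np.maximum(vel - 1, 0), vel)  # 3. random slowdown
        pos = (pos + vel) % n_cells                        # move
    return vel

vel = nagel_schreckenberg()
print("mean speed:", vel.mean())  # well below v_max: jams have formed
```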

There is a strong case, then, that a well-designed city of the future would be quite unlike the precisely planned and geometrically ordered city-utopias of the past. Self-organized systems commonly show desirable traits, such as efficiency and economy in use of space and energy, robustness against unpredictable outside disturbances, and the ability to adapt to changing circumstances. The trick is to find the “rules of engagement” between agents that create such outcomes and do not risk getting trapped in “bad solutions”.

Another way of saying this, perhaps, is that there is no point in trying to specify what an urban utopia would look like; rather, the important questions are what qualities we would like it to have (and to avoid), and what kinds of constraints and underlying rules would guide it towards those outcomes. There is no reason to think that either of these things has universal answers for all cultures and all places.

What surely is clear is that the social ethos and the physical fabric will be intimately connected, as they always have been. What a city looks like both reflects and determines the values of the society it accommodates.

One piece of futurology that I am tentatively prepared to offer is that the utopian city will be protean. It will be able to change its physical state in ways that bricks and mortar, tarmac and steel never could. These capabilities are already being incorporated into materials at the level of individual buildings and civil-engineering structures. In fact, to a limited extent, they always have been: historical lime mortars are self-healing in their capacity to reconstitute themselves chemically and cement together cracks. Today, self-repair is being built into construction materials ranging from plastics to asphalt to steel, for example by incorporating cavities that release air-setting glues when broken open.

It’s a small step towards what have been dubbed “animate materials”, which have some of the qualities of living systems: an ability to grow in response to environmental cues, to heal damage, to alter their composition to suit the circumstances, to sense and alter their surroundings. Trees and bones reshape themselves in response to stress, removing material where it is not needed and reinforcing it where the danger of failure lurks. And they are of course fully biodegradable and renewable.

For such reasons, natural ecosystems are a flux not just of materials and energy but of information. They even contain information networks: trees, say, communicating via airborne hormones and subterranean root systems. There is no clear distinction between structural fabric, sensors, and communication and information systems: the smartness of the material is built-in, invisible to the eye. This is the direction in which our artificial and built environments are heading, so that they are ever less a tangle of wires and increasingly a seamless interface as bland and cryptic as an iPod. The mechanisms are unseen, often inseparable from the materials from which they are made.

And this is one reason why we don’t have robot butlers. What a great deal of redundant design would be needed to create such a humanoid avatar; how much effort would have to be expended simply to ensure that it does not trip on the carpet. In a sufficiently smart, adaptive, wireless environment, a mere static cylinder will do instead; shall we call her Alexa? The future’s technology needn’t pay much heed to surface and texture – faux-mahogany Bakelite, smooth, glossy plastic, gleaming steel – because the interface will be on and within us: responding to vision, voice, posture, perhaps sheer thought.

Which leads to the real question: who will we be?

Here there is another lesson to be learnt from Robida’s wonderful books. He has very evidently taken the citizens of late nineteenth-century Paris and deposited them in what was then a futuristic-looking world. We might laugh at how transparent a ruse that is now, but we’ve a tendency to do the same. All those images of utopian cities (often in outer space) from the 1950s might have granted to the futuristic citizens a bit of nifty, brightly coloured and stylishly minimalist clothing, but there were the same rosy-cheeked, smiling nuclear families, the dad waving goodbye to the blond-haired kids on his way to work. We even do this with our vision of alien and artificial intelligence, attributing to them all the same motives as ours (for better or worse) but just with fancier tech.

Yet not only social mores and norms but also the very nature of identity is mutable over time. Arguably this is more true now than ever, so there is no reason to suppose the transformation of identity will be any less rapid in the future.

Already modernity demands that we adopt multiple identities that surface in different situations, often overlapping and increasingly blurred but defining our views and choices in distinct ways. Traditional social categories that defined identity, such as age, class, and nationality, are becoming less significant, as are distinctions between public and private identity. Old definitions based on class, ethnicity and political affiliation are ceding to new divisions, for example marked by distinctions of urban/rural, well/poorly educated, young/old, connected/off grid. In our fictional dystopias, such divisions are sometimes genetic, perhaps artificially induced and maintained.

What’s more, identities are being increasingly shaped by active construction, documentation, affiliation and augmentation. The kind of manufactured and curated public profile once reserved for celebrities is now available to billions, at least in principle. We arrange and edit our friendships and our memories, attune our information flows to flatter our preconceptions, and assemble our thoughts, experiences and images into packages that we present as selves.

We still have no idea what kind of societies will grow from these opportunities for self-definition. If traditional attributes of individual identities become more fragmented, communities might be expected to become less cohesive, and there could be greater marginalization, segregation and extremism. Yet hyperconnectivity can also produce or strengthen group identities in positive ways, offering new opportunities for community-building – which need pay no heed to geography and spatial coordinates.

The city is a living embodiment of its citizens. They have selected the contours, the technologies, the interfaces that they believe best represent them. That’s why utopian dreams are just another way of looking at ourselves. So be careful what you wish for.

Friday, September 27, 2019

Just how conceptually economical is the Many Worlds Interpretation?

An exchange of messages with Sabine Hossenfelder about the Many Worlds Interpretation (MWI) of quantum mechanics has helped me sharpen my view of the arguments around it. (Sabine and I are both sceptics of the MWI.)

The case for Many Worlds is well rehearsed: it relates to the “measurement problem” and the idea that if you take the “traditional Copenhagen” view of quantum mechanics then you need to add to the Schrödinger equation some kind of “collapse postulate” whereby the wavefunction switches discontinuously from allowing multiple possible outcomes (a superposition) to having just one: that which we observe. In the Many Worlds view postulated by Hugh Everett, there is no need for this “add on” of wavefunction collapse, because all outcomes are realized, in worlds that get disentangled from one another as the measurement proceeds via decoherence. All we need is the Schrödinger equation. The attraction of this idea is thus that it demands no unproven additions to quantum theory as conventionally stated, and it preserves unitarity because of the smooth evolution of the wavefunction at all times. This case is argued again in Sean Carroll’s new book Something Deeply Hidden.

One key problem for the MWI, however, is that we observe quantum phenomena to be probabilistic. In the MW view, all outcomes occur with probability 1 – they all occur in one world or another – and we know even before the measurement that this will be so. So where do those probabilities come from?
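Schematically (in my own notation, for a two-outcome measurement): the Schrödinger equation alone carries system plus apparatus into a superposition of both branches, while the probabilities we actually observe come from the Born rule – the extra ingredient at issue:

```latex
% Unitary (Schroedinger) evolution of a measurement, versus the Born rule:
\[
\bigl(\alpha\,\lvert 0\rangle + \beta\,\lvert 1\rangle\bigr)\,
\lvert A_{\mathrm{ready}}\rangle
\;\longrightarrow\;
\alpha\,\lvert 0\rangle\lvert A_0\rangle
+ \beta\,\lvert 1\rangle\lvert A_1\rangle,
\qquad
P(0)=\lvert\alpha\rvert^{2},\quad P(1)=\lvert\beta\rvert^{2}.
\]
```

In the Everett picture the left-hand evolution is the whole story, and the question is why an observer inside one branch should nonetheless bet according to |α|² and |β|².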

The standard view now among Everettians is that the probabilities are an illusion caused by the fact that “we” are only ever present on one branch of the quantum multiverse. There are various arguments [here and here, for example] that purport to show that any rational observer would, under these circumstances, need to assign probabilities to outcomes in just the manner quantum mechanics prescribes (that is, according to the Born rule) – even though a committed Everettian knows that these are not real probabilities.

The most obvious problem with this argument is that it destroys the elegance and economy that Everett’s postulate allegedly possesses in the first place. It demands an additional line of reasoning, using postulates about observers and choices, that is not itself derivable (even in principle!) from the Schrödinger equation itself. Plainly speaking, it is an add-on. Moreover, it is one that doesn’t convince everyone: there is no proof that it is correct. It is not even clear that it’s something amenable to proof, imputing as it does various decisions to various “rational observers”.

What’s more, arguments like this force Everettians to confront what many of them seem strongly disinclined to confront, namely the problem of constructing a rational discourse about multiple selves. There is a philosophical literature around this issue that is never really acknowledged in Everettian arguments. The fact is that it becomes more or less impossible to speak coherently about an individual/observer/self in the Many Worlds, as I discuss in my book Beyond Weird. Sure, one can take a naïve view based on a sort of science-fictional “imagine if the Star Trek transporter malfunctioned” scenario, or witter on (as Everett did) about dividing amoebae. But these scenarios do not stand up to scrutiny and are simply not science. The failure to address issues like this in observer-based rationales for apparent quantum probabilities shows that while many Everettians are happy to think hard about the issues at the quantum level, they are terribly cavalier about the issues at the macroscopic and experiential level (“oh, but that’s not physics, it’s psychology” is the common, slightly silly response).

So we’re no better off with the MWI than with “wavefunction collapse” in the Copenhagen view? Actually, even to say this would be disingenuous. While some Everettians are still happy to speak about “wavefunction collapse” (because it sounds like a complicated and mysterious thing), many others working on quantum fundamentals don’t any longer use that term at all. That’s because there is now a convincing and indeed tested (or testable) story about most of what is involved in a measurement, which incorporates our understanding of decoherence (sometimes wrongly portrayed as the process that makes MWI itself uniquely tenable). For example, see here. It’s certainly not the case that all the gaps are filled, but really the only thing that remains substantially unexplained about what used to be called “collapse” is that the outcome of a measurement is unique – that is, a postulate of macroscopic uniqueness. Some (such as Roland Omnès) would be content to see this added to the quantum formalism as a further postulate. It doesn’t, after all, seem a very big deal.

I don’t quite accept that we should too casually assume it. But one can certainly argue that, if anything at all can be said to be empirically established in science, the uniqueness of outcomes of a measurement qualifies. It has never, ever been shown to be wrong! And here is the ultimate irony about Many Worlds: this one thing we might imagine we can say for sure, from all our experience, about our physical world is that it is unique (and that is not, incidentally, thrown into doubt by any of the cosmological/inflationary multiverse ideas). We are not therefore obliged to accept it, but it doesn’t seem unreasonable to do so.

And yet this is exactly what the MWI denies! It says no, uniqueness is an illusion, and you are required to accept that this is so on the basis of an argument that is itself not accessible to testing! And yet we are also asked to believe that the MWI is “the most falsifiable theory ever invented.” What a deeply peculiar aberration it is. (And yet – this is of course no coincidence – what a great sales hook it has!)

Sabine’s objection is slightly different, although we basically agree. She says:

“Many Worlds in and by itself doesn't say anything about whether the parallel worlds "exist" because no theory ever does that. We infer that something exists - in the scientific sense - from observation. It's a trivial consequence of this that the other worlds do not exist in the scientific sense. You can postulate them into existence, but that's an *additional* assumption. As I have pointed out before, saying that they don't exist is likewise an additional assumption that scientists shouldn't make. The bottom line is, you can believe in these worlds the same way that you can believe in God.”

I have some sympathy with this, but I think I can imagine the Everettian response, which is to say that in science we infer all kinds of things that we can’t observe directly, because of their indirect effects that we can observe. The idea then is that the Many Worlds are inescapably implicit in the Schrödinger equation, and so we are compelled to accept them if we observe that the Schrödinger equation works. The only way we’d not be obliged to accept them is if we had some theory that erases them from the equation. There are various arguments to be had about that line of reasoning, but I think perhaps the most compelling is that there are no other worlds explicitly in any wavefunction ever written. They are simply an interpretation laid on top. Another, equally tenable, interpretation is that the wavefunction enumerates possible outcomes of measurement, and is silent about ontology. In this regard, I totally agree with Sabine: nothing compels us to believe in Many Worlds, and it is not clear how anything could ever compel us.

In fact, Chad Orzel suggests that the right way to look at the MWI might be as a mathematical formalism that makes no claims about reality consisting of multiple worlds – a kind of quantum book-keeping exercise, a bit like the path integrals of QED. I’m not quite sure what then is gained by looking at it this way relative to the standard quantum formalism – or indeed how it then differs at all – but I could probably accept that view. Certainly, there are situations where one interpretational model can be more useful than others. However, we have to recognize that many advocates of Many Worlds will have none of that sort of thing; they insist on multiple separate universes, multiple copies of “you” and all the rest of it – because their arguments positively require all that.

Here, then, is the key point: you are not obliged to accept the “other worlds” of the MWI, but I believe you are obliged to reject its claims to economy of postulates. Anything can look simple and elegant if you sweep all the complications under the rug.

Thursday, September 05, 2019

Physics and Imagination

This essay appears in Entangle: Physics and the Artistic Imagination, a book edited by Ariane Koek and produced for an exhibition of the same name in Umeå, Sweden.

______________________________________________________________________

It would seem perverse, almost rude, not to begin a discussion of imagination in physics with Einstein’s famous quote on the topic, voiced during a newspaper interview with the writer George Viereck in 1929:
“I'm enough of an artist to draw freely on my imagination, which I think is more important than knowledge. Knowledge is limited. Imagination encircles the world.”

For a fridge-magnet inspirational quote to celebrate the value of imagination, you need look no further. But context, as so often with Einstein, is everything. He said this after talking about the 1919 expedition led by the British physicist Arthur Eddington to observe the sky during a total solar eclipse off the coast of Africa. Those observations verified the prediction of Einstein’s theory of general relativity that starlight would be bent by the gravitational field of a massive body like the sun. Einstein told Viereck: “I would have been surprised if I had been wrong.” Viereck – a fascinating figure in his own right, who had previously interviewed (and showed some sympathy for) Adolf Hitler and wrote a psychological and gay-inflected Wildean vampire novel in 1907 – responded to that supremely confident statement by asking: “Then you trust more to your imagination than to your knowledge?”

You could say that Einstein’s reply was a qualified affirmative. And this seems very peculiar, doesn’t it, for a “man of science”?

The story dovetails with Einstein’s other well-known response to the eclipse experiment. Asked by an assistant (some say a journalist) how he would have felt if the observations had failed to confirm his theory, he is said to have responded “Then I would feel sorry for the dear Lord. The theory is correct.”

Compare that with the statement of another celebrated aphoristic physicist, the American Richard Feynman:
“It doesn't matter how beautiful your theory is, it doesn't matter how smart you are. If it doesn't agree with experiment, it's wrong.”

Who is right? Einstein trusting to imagination, intuition and artistry, or Feynman to the brutal judgement of empiricism? If we’re talking about scientific methodology, Feynman is right in spirit but nonetheless displays the limitations of the physicist’s common “naïve realist” position about science, which assumes that nature delivers uncomplicated, transparent answers when we put to it questions about our physical theories. Yet Einstein’s general relativity was a theory so profoundly motivated and so conceptually satisfying, despite the mind-boggling shift it demanded in conceptions of space and time, that it could not be lightly tossed on the scrapheap of beautiful ideas destroyed by ugly facts.

So the sensible way to have handled a discrepancy with observed “facts” like those collected by Eddington in his observations of the positions of stars during an eclipse would have been to wonder if the observations were reliable. Indeed, Eddington was later accused of cherry-picking those facts to confirm the theory, perhaps motivated by his Quaker desire to bring about international reconciliation after the First World War had triggered the ostracising of Germany. (It seems those charges were unfounded.) Nature doesn’t lie, but experimentalists can blunder.

*

There’s a deeper reason to valorize Einstein’s claim about imagination in physics. What I feel he is really saying is that imagination precedes knowledge, and indeed establishes the precondition for it. You might say that when the shape of imagination sufficiently fits the world, knowledge results.

We have never needed more reminding of this. In his unfinished magnum opus Novum Organum the seventeenth-century English philosopher Francis Bacon presented knowledge as the product obtained when raw facts – observations about the world – are fed into a kind of science machine (what we might now call an algorithm) and ground into their essence. It was an almost mechanical process: you first collect all the facts you can, and then refine and distil them into general laws and principles about the way the world works. Bacon never completed his account of how this knowledge-extraction process was meant to work, but at any rate no one in science has ever successfully used such a thing, or even knows what it could comprise.

Yet Bacon’s vision threatens to return in an age of Big Data – especially in the life sciences, where the availability of information about, say, genome sequences or correlations between genes and traits has outstripped our ability to create theoretical frameworks to make sense of it. There’s a feeling afoot not only that data is intrinsically good but that knowledge has no option but to fall out of it, once the mass of information about the world is large enough.

Physicists have received advance warning of the limitations of that belief. They have their own knowledge machines: sophisticated telescopes and particle detectors, say, and most prominently the Large Hadron Collider and other particle colliders capable of generating eye-watering quantities of data about the interactions between the fundamental constituents of the world. But they already know how little all this data will help without new ideas: without imagination.

For example? There are some good reasons to believe that if physics is going to penetrate still further into the deep laws of nature, it needs a theoretical idea called supersymmetry. So far, all we know about the particles and forces of nature is described by a framework called the Standard Model, which contains all the ingredients seemingly needed to explain everything seen in experiments in particle physics to date. But we know that there’s more to the universe than this, for many reasons. For one thing, the current theory of gravity – Einstein’s general relativity – is incompatible with the theory of quantum mechanics used to describe atoms and their fundamental particles. Supersymmetry – a putative connection between two currently distinct classes of particle – looks like a promising next step to a deeper physics. Yet so far, the LHC’s high-energy collisions have offered no sign that it’s true.

What’s more, there’s nothing in the Standard Model that seems to account for the “dark matter” that astrophysicists need to invoke to explain what they see in the cosmos. This mysterious substance is believed to pervade the universe, invisibly, being felt by ordinary matter only through its gravitational influence. Without something like dark matter, it is hard to make sense of the observed forms and motions of galaxies: how they rotate without shedding stars like a water sprinkler. The reasons to believe in dark matter – and moreover to believe it exceeds the mass of ordinary visible matter by a factor of about five – are very strong. Yet countless efforts to spot what it consists of have failed to offer any clues. Huge quantities of data constrain the choices, but no evidence supports any of the theories proposed to explain dark matter.
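
The rotation argument can be made concrete with a standard back-of-the-envelope sketch (a textbook illustration, not tied to any particular survey). A star on a roughly circular orbit of radius $r$ about a galaxy’s centre must satisfy

$$v(r) = \sqrt{\frac{G\,M(r)}{r}},$$

where $M(r)$ is the mass enclosed within the orbit. If the visible stars and gas were all there is, $M(r)$ would stop growing beyond the luminous disc and $v$ would fall off as $1/\sqrt{r}$; instead, measured rotation curves stay roughly flat out to large radii, which forces $M(r)$ to keep growing in proportion to $r$ – mass that emits no light we can detect.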

These are – there is no avoiding the issue – failures of imagination. Supersymmetry and dark matter are wholly imagined theories or entities, but the collective imagination of physicists has not yet made them vivid enough to be revealed or disproved. It is possible that this is because they are imaginary in the more literary sense: they exist only in our minds. And they are not alone; dark energy (which causes the universe to expand at an increasing rate) and string theory (one candidate for a theory that would unite gravity and quantum mechanics) are other components of the physicist’s imaginarium waiting to be verified and explained or to be dismissed as unicorns, as the ether, as the philosopher’s stone.

A single observation – one experiment revealing a discrepancy with a definite theoretical prediction, or one sighting of a new kind of particle – could change the situation. Maybe it will. But it is equally possible that we will need ultimately to concede defeat, and to extend the imagination of physics into new territory: for example to accept, as some are already arguing, that what we call “dark matter” is a symptom of another physical principle (a modification to the theory of gravity, say) and not a true substance.

There’s nothing embarrassing or damning in all this. It’s not that physics itself is failing. The situation is just business as usual in science: to have mysteries awaiting explanation, even ones of this magnitude, is a sign of health, not sickness. For individual physicists whose reputations hang (or seem to) on the validity of a particular idea, that’s scant comfort. But for the rest of us it’s nothing short of exhilarating to see such deep and broad questions remaining open.

*

The real point is that imagination in physics is what the paths to the future, to new knowledge, are built from. Actual knowledge – things we can accept as “true”, in the sense that they offer tried and tested ways of predicting how the world behaves – has been assembled into an edifice as wonderful and as robust as the Gothic cathedrals of stone, the medieval representations of the physical and spiritual universe. But at the point where knowledge runs out, only imagination can take us further. I think this is what Einstein was driving at.

The invitation is often to suppose that this imagination operates only at the borders of physical theory: at, you might say, the cliff-face of physics that tends to dominate its public image, where we find exotica like string theory, black holes, cosmology and the Higgs boson. But physics, perhaps more than any other science, has a subtle, fractal-like texture in which gaps in knowledge appear everywhere, at all scales. Imagination was needed to start to understand that strange state of matter made of grains: powders and sand, part fluid and part solid. It is currently blossoming in a field known as topological materials, in which the electrical and magnetic properties are controlled by the abstract mathematical shapes that describe the way electrons are distributed, with twists akin to those in the famous one-sided Möbius strip. It was imagination that prompted physicists and engineers to make structures capable of acting as ‘invisibility shields’ that manipulate and guide light in hitherto inconceivable ways. In all these cases, as in science more broadly, the role of the imagination is not so much to guide us towards answers as to formulate interesting and fruitful new questions.

What does this imagination consist of? We’d do well to give that question more attention. I would suggest that it is, among other things, a way of seeing possibilities: a rehearsal of potential worlds. That’s what justifies Einstein’s comparison to the work of the artist: imagination, as Shakespeare put it, “bodies forth the forms of things unknown.” The scientist’s theory, as much as the poet’s pen, “turns them to shapes and gives to airy nothing a local habitation and a name.” That name could be “general relativity” – why not?

What’s the source, though? Many ideas in fundamental physics grow from what might seem the rather arid soil of mathematics. Supersymmetry and string theory are predicated in particular on the conviction that the deepest principles of the physical world are governed by symmetry. What this word means at the level of fundamental theory might seem less apparent to the outsider than what it implies in, say, the shape of a Grecian urn or the pattern of wallpaper, but at root it is not so very different: symmetry is about an equivalence of parts and their ability to be transformed one into another, as a left hand becomes a right through the mirror reflection of the looking glass.

Well, it might seem arid, this mathematics. But imagination is as vital here as it is in art. What mathematicians value most in their colleagues is not an ability to churn out airtight proofs of abstract theorems but a kind of creativity that perceives links between disparate ideas, an almost metaphorical way of making connections in which intuition is the architect and proof can come later. Both mathematicians and theoretical physicists commonly speak of having a sense that they are right about an idea long before they can prove it; that proof is “just the engineering” needed to persuade others that the idea will hold up.

Let’s be cautious, though, about making “engineering” the prosaic, plodding part of science. The common perception is that theorists do the dreaming and experimentalists just build the apparatus for putting dreams to the test. That’s just wrong. For one thing, it’s typically experiment that drives theory, not the other way around: it’s only when we have new instruments for examining the world that we discover gaps in our understanding, demanding explanation. What’s more, experiment too is fuelled by imagination. No one tries to see something unprecedented – farther out into space (which means, because light’s speed is finite, farther back in time), or into the world of single atoms, or into the spectrum of radiation outside the band of light our eyes can register – unless they have conjured up images of what might be there. Sure, you need some existing theory to guide your experimental goals, to show potentially fruitful directions for your gaze; but no one sails into uncharted territory if they think all they’ll find is more of the same, or nothing at all. “If you can’t imagine something marvellous, you are not going to find it”, says physics Nobel laureate Duncan Haldane. “The barrier to discovering what can be done is actually imagination.” And the power and artistry of the experimenter’s imagination comes not just from dreaming of what there is to be found in terra incognita, but also from devising a means to travel there.

When I speak of dreams, I don’t just mean it metaphorically, nor just in the sense of waking reverie. To judge from the testimony of scientists themselves, dreams can function as sources of inspiration. True, we should be a little wary of that; the notion of receiving insight in a dream became a romantic trope in the nineteenth century, and careful historical analysis often reveals some hard and very deliberate graft, as well as a very gradual process of understanding, behind scientific advances that were recast retrospectively as dream-revelations. But it happens. Several contemporary physicists have attested to insights that came to them in dreams, as the conscious mind that has been long pondering a problem loosens its bonds on the margins of sleep and admits a little more of the illogic on which imagination thrives.

All the same, we shouldn’t think that the physicist’s imagination always works in the abstract, in the realm of pure thought. Very often, it takes visual form: finding the right symbolic representation of a problem, such as Feynman’s famous “diagrams” for studying questions in the field of quantum electrodynamics (in essence, the theory of how light and matter interact), can unlock the mind in ways that more abstract algebraic mathematics or calculus can’t. Pen and paper can be the fuel of the imagination. As Cambridge physicist Michael Cates (incumbent of the chair previously held by Stephen Hawking and Isaac Newton) has said, “I need a piece of paper in front of me and I’m pushing symbols around on the page… so there’s this interaction between processing in your head and moving symbols around.” Never underestimate the traditional blackboard as a tactile, erasable aid to the imagination. The productivity of such aids is no surprise. Ask a child to think of a story, and it’s no easy matter. Give them a doll’s house full of figurines, and they’re away.

*

Yet whether it is theoretical or experimental, this imagination in science (as in art) is not idle fantasy. It is a condensation of experience: it takes what you know and plays with it. I do mean “plays”: imagination is nothing if not ludic. But it is also the very stuff of thought. One interpretation of cognition, in the context of artificial intelligence, is that it is largely about figuring out the possible consequences of actions we might make in the world: an “inner rehearsal” of imaginary future scenarios. Imagination in science extends that process beyond the self to the world: given that we know this, mightn’t things also be arranged like that?

It’s much more than a guess, then, and as Shakespeare hints, has almost the power of an invocation. Truly, the scientific imagination can invoke into being something that was not there before. Isaac Newton was cautious about his “force of gravity”, knowing that he risked (and indeed incurred from his arch-rival Gottfried Leibniz) accusations of occultism. Yet all the same this “force” became – and remains – a ‘thing’ in physics, even if we can regard it as a figure of speech: a convenient conceptual tool that general relativity invites us to reinterpret as the curvature of spacetime. It’s a process entirely analogous to the way Shakespeare goes on to speak, in A Midsummer Night’s Dream, of how correlation leads us to imagine causation:
Such tricks hath strong imagination,
That if it would but apprehend some joy,
It comprehends some bringer of that joy.

In this way we’re reminded that imagination shares the same etymological root as “magic” – which, in the age just before the time of Isaac Newton, did not necessarily mean superstitious agency but the “hidden forces” by which natural magicians comprehended and claimed to manipulate nature. In that regard Newton wasn’t, as John Maynard Keynes claimed, the “last of the magicians”, in the sense of his having a belief in occult forces (such as gravity, acting invisibly across space). No, if that was Newton’s “magic” then today’s physicists share his conviction, for physics routinely awards imagination exactly this role: conjuring causative agencies – things we might not perceive directly but which manifest through their effects, such as dark matter, dark energy, or the Higgs field – to explain what we see.

Now, though, physics places demands on the imagination as never before. I’m struck by how dark matter and dark energy, say, commandeer known concepts (mass, energy) that may or may not turn out to be appropriate. Even more challenging are efforts to provide some physical picture of quantum mechanics, the kind of physics generally used to describe atoms and fundamental particles. These objects don’t seem to conform to our intuitions derived from the everyday world of rocks and stones, tennis balls and space rockets. They can, for example, sometimes display behaviour we associate not with particles but with waves. They appear to be able to influence one another instantaneously over long distances; they are said to exist “in several states or places at once.”

Yet these descriptions are attempts – often clumsy, sometimes misleading – to make quantum mechanics fit into the forms of our conventional “classical” imagination. Arguments and misperceptions follow, or a disheartening decision to draw a veil over quantum improprieties by calling them “weird”. We can and should do better, but this will require a reshaping, an expansion, of our imaginative faculties. We have to develop a kind of intuition that is not constrained by our daily experience – because if there’s one thing we can be sure about in quantum mechanics, it’s that it demands the possibility of phenomena that lie outside this experience.

To venture into unknown territory, where imagination is at a premium, is a risk. To put it bluntly, your imagination is more likely to lead you astray than toward the truth. It is no magical guarantor of insight. Will you take that risk? Mathematical physicist Jon Keating has put the problem succinctly: “[How can we] encourage people to make them feel more comfortable with the failure that comes with most creative and imaginative ideas?” Unless we get better at that, educationally or institutionally, science will suffer.

And it’s very possible that physicists alone won’t accomplish the feats of imagination needed to crack their hardest problems. They may need to find inspiration from philosophy, art, literature, aesthetics. Imagination doesn’t recognize categories and boundaries – it is a power that aims to encircle the world.

Tuesday, August 13, 2019

Still trying to kill the cat

Some discussion stemming from Erwin Schrödinger’s birthday prompts me to set out briefly why his cat is widely misunderstood and is actually of rather limited value in truly getting to grips with the conundrums of quantum mechanics.

Schrödinger formulated the thought experiment during correspondence with Einstein in which they articulated what they found objectionable in the view of QM formulated by Niels Bohr and his circle (the “Copenhagen interpretation”, which should probably always be given scare quotes since it never corresponded to a unique, clearly adduced position). In that view, one couldn’t speak about the properties of quantum objects until they were measured. Einstein and Schrödinger considered this absurd, and in 1935 Schrödinger enlisted his cat to explain why. Famously, he imagined a situation in which the property of some quantum object, placed in a superposition of states, determines the fate of a cat in a closed box, hidden from the observer until it is opened. In his original exposition he spoke of how, according to Bohr’s view, the wavefunction of the system would, before being observed, “express this by having in it the living and the dead cat (pardon the expression) mixed or smeared out in equal parts.”
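
In modern textbook notation, the state Schrödinger was gesturing at is usually sketched – and it can only ever be a schematic sketch, for reasons taken up below – as

$$|\Psi\rangle = \tfrac{1}{\sqrt{2}}\bigl(|\text{undecayed}\rangle|\text{alive}\rangle + |\text{decayed}\rangle|\text{dead}\rangle\bigr),$$

an entangled state in which the condition of the radioactive atom is correlated with that of the cat, the equal coefficients supplying Schrödinger’s “equal parts”.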

This is (even back then) more careful wording than the thought experiment is usually afforded today, talking specifically about the wavefunction and not about the cat. Even so, a key problem with Schrödinger’s cat, if taken literally as a thought experiment, is that it refers to no well-defined property. In principle, Schrödinger could have talked instead about a macroscopic instrument with a pointer that could indicate one of two states. But he wanted an example that was not simply hard to intuit – a pointer in a superposition of two states, say – but was semantically absurd. “Live” and “dead” are not simply two different states of being, but are mutually exclusive. Then the absurdity is all the more apparent.

But in doing so, Schrödinger undermined his scenario as an actual experiment. There is not even a single classical measurement, let alone a quantum state one can write down, that defines “live” or “dead”. Of course, it is not hard to find out if a cat is alive or dead – but it is very hard to identify a single variable whose measurement will allow you to fix a well-defined instant where the cat goes from live to dead. Certainly, no one has the slightest idea how to write down a wavefunction for a live or dead cat, and it seems unlikely that we could even imagine what they might look like or what would distinguish them.

This is then not, at any rate, an experiment devised (as is often said) to probe the issue of the quantum-classical boundary. Schrödinger gives no indication that he was thinking about that, except for the fact that he wanted a macroscopic example in order to make the absurdity apparent. It’s now clear how hard it would be to think of a way of keeping a cat sufficiently isolated from the environment to avoid (near-instantaneous) decoherence – the process by which “quantumness” generally becomes “classical” – while being able to sustain it in principle in a living state.

Ignoring all this, popular accounts typically take the thought experiment as a literal one rather than as a metaphor. As a rule, they then go on to (1) misunderstand the nature of superpositions as being “in two states at once”, and (2) misrepresent the Copenhagen interpretation as making ontological statements about a quantum system before measurement, and thereby tell us merrily that, if Bohr and colleagues are right, “the cat is both alive and dead at the same time!”
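
For what it’s worth, the first of those misunderstandings is easy to state compactly (a generic illustration, not Schrödinger’s own formulation). A superposition such as

$$|\psi\rangle = \tfrac{1}{\sqrt{2}}\bigl(|\text{alive}\rangle + |\text{dead}\rangle\bigr)$$

is a single, perfectly definite quantum state in its own right. It is not a statistical mixture – “one or the other, we just don’t know which” – and still less “both at once”; what distinguishes it from a mixture is the possibility, in principle, of interference between the two components.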

My suspicion is that, precisely because it is so evocative, Schrödinger’s thought experiment does not merely suffer from these misunderstandings but invites them. And that is why I would be very happy to see it retired.

Of course, there is more discussion of all these things in my book Beyond Weird.