Thursday, March 19, 2015

The Saga of the Sunstones



In the Dark Ages, the Vikings set out in their longships to slaughter, rape, pillage, and conduct sophisticated measurements in optical physics. That, at least, has been the version of horrible history presented recently by some experimental physicists, who have demonstrated that the complex optical properties of the mineral calcite or Iceland spar can be used to deduce the position of the sun – often a crucial indicator of compass directions – on overcast days or after sunset. The idea has prompted visions of Norse raiders and explorers peering into their “sunstones” to find their way on the open sea.

The trouble is that nearly all historians and archaeologists who study ancient navigation methods reject the idea. Some say that at best the fancy new experiments and calculations prove nothing. Historian Alun Salt, who works for UNESCO’s Astronomy and World Heritage Initiative, calls the recent papers “ahistorical” and doubts that the work will have any effect “on any wider research on navigation or Viking history”. Others argue that the sunstone theory was examined and ruled out years ago anyway. “What really surprises me and other Scandinavian scholars about the recent sunstone research is that it is billed as news”, says Martin Rundkvist, a specialist in the archaeology of early medieval Sweden.

This debate doesn’t just bear on the unresolved question of how the Vikings managed to cross the Atlantic and reach Newfoundland without even a compass to guide them. It also goes to the heart of what experimental science can and can’t contribute to an understanding of the past. Is history best left to historians and archaeologists, or can “outsiders” from the natural sciences have a voice too?

What a saga

The sunstone hypothesis certainly isn’t new. It stems largely from a passage in a thirteenth-century manuscript called St Olaf’s Saga, in which the Icelandic hero Sigurd tells King Olaf II Haraldsson of Norway where the sun is on a cloudy day. Olaf checks Sigurd’s claim using a mysterious sólarsteinn or sunstone:
Olaf grabbed a Sunstone, looked at the sky and saw from where the light came, from which he guessed the position of the invisible Sun.

An even more suggestive reference appears in another thirteenth-century record of a Viking saga, called Hrafns Saga, which gives a few more clues about how the stone was used:
the weather was sick and stormy… The King looked about and saw no blue sky… then the King took the Sunstone and held it up, and then he saw where the Sun beamed from the stone.

In 1967 Danish archaeologist Thorkild Ramskou suggested that this sunstone might have been a mineral such as the aluminosilicate cordierite, which is dichroic: as light passes through, rays of different polarization are transmitted by different amounts, depending on the orientation of its crystal planes (and thus its macroscopic facets) relative to the plane of polarization. This makes cordierite capable of transmitting or blocking polarized rays selectively – which is how normal polarizing filters work. (Ramskou also suggested that the mineral calcite, a form of calcium carbonate, would work as a sunstone, based on the fact that calcite is birefringent: rays with different polarizations are refracted to different degrees depending on the orientation with respect to the crystal planes. But that’s not enough, because calcite is completely transparent: changing its orientation makes no difference to how much polarized light passes through. You need dichroism for this idea to work, not birefringence.)

Because sunlight becomes naturally polarized as it is scattered in the atmosphere, if cordierite is held up to sunlight and rotated it turns darker, becoming most opaque when the crystal planes are at right angles to the direction of the sun’s rays. Even if the sun itself is obscured by mist or clouds and its diffuse light arrives from all directions, the most intense of the polarized rays still come straight from the hidden sun. So if a piece of dichroic mineral is held up to the sky and rotated, the pattern of darkening and lightening can be used to deduce, from the orientation of the crystal’s facets (which reveal the orientation of the planes of atoms), the direction of the sun in the horizontal plane, called its azimuth. If you know the time of day, then this angle can be used to calculate where north lies.
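The last step can be made concrete with a minimal sketch (my own illustration, of course, not anything a Viking could have run: the function name and the crude 15°-per-hour solar model are assumptions for the purpose of the example). Given the sun's measured bearing and the local solar time, the bearing of true north follows:

```python
def bearing_of_north(sun_bearing_deg, local_solar_hour):
    """Estimate the bearing of true north from a measured sun bearing.

    Crude model: at local solar noon the sun sits due south (true
    azimuth 180 degrees, in the northern hemisphere), and its azimuth
    advances by roughly 15 degrees per hour. Both bearings are measured
    clockwise from the same arbitrary reference (e.g. the ship's heading).
    """
    sun_true_azimuth = (180.0 + 15.0 * (local_solar_hour - 12.0)) % 360.0
    # North lies at the measured sun bearing minus the sun's true azimuth.
    return (sun_bearing_deg - sun_true_azimuth) % 360.0
```

So a hidden sun found dead ahead (bearing 0°) at three in the afternoon, when its true azimuth is roughly 225°, would put north at about 135° clockwise from the bow. A real navigator would need corrections for latitude and season that this toy model ignores.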

Ramskou pointed out that polarizing materials were once used in a so-called Twilight Compass by Scandinavian air pilots who flew over the north pole. Their ordinary compasses would have been useless then, but the Twilight Compass allowed them to get their bearings from the sun. So maybe the Vikings did the same out on the open sea? Might they have chanced upon this handy property of calcite, found in abundance on Iceland? Perhaps all Viking ships set sail with a sunstone to hand, so that even on overcast or foggy days when the sun wasn’t visible they could still locate it and find their bearings.

The idea has been discussed for years among historians of Viking navigation, but only recently has it been put to the test. In 1994, the astronomer Curt Roslund and the ophthalmologist Claes Beekman of Gothenburg University showed that the pattern of darkening produced by a dichroic mineral in diffuse sunlight is too weak to give a reliable indication of the sun’s location. They added that such a fancy way to find the hidden sun seems unnecessary for navigation anyway, because the sun behind clouds can be located quite accurately with the naked eye, from the bright edges of the cloud tops and the rays that emanate from behind the cloud. The sunstone idea, they said, “has no scientific basis”.

That was merely the opening sally of a seesawing debate. In 2005, Gábor Horváth of the Eötvös Loránd University in Budapest, a specialist in animal vision, and his colleagues tested subjects using photographs of partly cloudy skies in which the sun was obscured, and found that, contrary to Roslund and Beekman’s assumption, the subjects could not deduce the sun’s position with reasonable accuracy. Two years later Horváth and collaborators measured the amount and pattern of polarization of sunlight in cloudy and foggy skies, and concluded that the polarization is adequate for a “polarizer” sunstone to work under cloud, though not necessarily in fog. All this seemed enough to rehabilitate the plausibility of the sunstone hypothesis. But would it work in practice?

Double vision

Optical physicists Guy Ropars and Albert Le Floch at the University of Rennes had been working for decades on light polarization effects in lasers. In the 1990s they came across the sunstone idea and the objections of Roslund and Beekman. While Horváth’s studies seemed to show that locating the sun behind clouds by eye is not as simple as Roslund and Beekman had supposed, Ropars and Le Floch shared their concern that the simple darkening of a dichroic crystal due to polarization effects is too weak to do the job either. The two physicists also pointed out that Ramskou’s suggestion of using birefringent calcite this way won’t work. But, they said, calcite has another property that suggests a quite different way of using it as a sunstone.

When a calcite crystal is oriented so that a polarized ray strikes at right angles to the main facet of the rhombohedral crystal, but at exactly 45 degrees to the crystal’s optical axis – the so-called isotropy point – the light emerging at this orientation is completely depolarized. As a result, it’s possible to find the azimuth of a hidden sun by exploiting the naked eye’s sensitivity to polarized light. When polarized white light falls on the eye’s fovea, we see a pattern in which two yellowish blobs fan out from a central focus against a bluish background. This pattern, called Haidinger’s brushes, is most easily seen by looking at a white sheet of paper illuminated with polarized white light and rotating the polarizing filter. It can also be seen on a patch of blue sky overhead, when the sun is near (or below) the horizon, by rotating one’s head. If a calcite crystal oriented at its isotropy point relative to the sun’s azimuth is placed in the line of sight, the polarization is removed and Haidinger’s brushes vanish. By comparing the two views – moving the crystal rapidly in and out of the line of sight – the researchers found that the sun’s azimuth can be estimated to within five degrees.


Haidinger’s brushes: an exaggerated view.

But it’s a rather cumbersome method: it relies on there being at least a patch of unobstructed sky high overhead, and it would be very tricky on board a pitching ship. There is, however, a better alternative.

Because calcite is birefringent, when a narrow and partially polarized light ray passes through it, the ray is split in two – an effect strikingly evident with laser beams. One ray behaves as it would if it were just travelling through glass, but the other is deviated by an amount that depends on the thickness of the crystal and the angle of incidence. This is the origin of the characteristic double images seen through birefringent materials. And whereas Roslund and Beekman had argued that the changes in brightness of a dichroic substance rotated in dim, partially polarized light are likely to be too faint to distinguish, the contrast between the split-beam intensities as calcite is rotated is much stronger and easier to spot. “The sensitivity of the system is then increased by a factor of about 100”, Ropars explains. At the isotropy point, the two rays have exactly the same brightness, regardless of how polarized the light is. This means that, if we can accurately judge this position of equal brightness, the orientation of the crystal at that point can again be used to figure out the azimuth from which the most intense rays are coming.
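The physics can be caricatured with a textbook Malus-type model (a toy of my own, not the Rennes team’s actual analysis): for light with degree of polarization p, the two beams trade intensity as the crystal is rotated, and match exactly at the 45-degree “isotropy” setting:

```python
import math

def beam_intensities(theta_deg, p, total=1.0):
    # Ordinary and extraordinary beam intensities for partially
    # polarized light (degree of polarization p, 0..1) passing through
    # a birefringent crystal rotated by theta_deg. A standard
    # Malus-type idealization; real crystals add losses and geometry.
    c = math.cos(math.radians(2.0 * theta_deg))
    return 0.5 * total * (1.0 + p * c), 0.5 * total * (1.0 - p * c)

def equalization_angle(p, step=0.1):
    # Scan rotation angles from 0 to 90 degrees for the orientation at
    # which the two beams are equally bright - the navigator's cue.
    best_theta, best_diff = 0.0, float("inf")
    theta = 0.0
    while theta <= 90.0:
        i_o, i_e = beam_intensities(theta, p)
        if abs(i_o - i_e) < best_diff:
            best_theta, best_diff = theta, abs(i_o - i_e)
        theta += step
    return best_theta
```

Note that the equalization angle comes out at 45 degrees however weakly polarized the light is; only the sharpness of the contrast around that point degrades, which is why the method tolerates the dim, partially polarized light of an overcast or twilight sky.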



Double images and split laser beams in calcite, due to birefringence.

The human eye happens to be extremely well attuned to comparing brightness contrasts in fairly low light. The researchers’ tests, using partially polarized light shone through a calcite crystal, showed that under ideal conditions the direction of the light rays could be estimated to within 1 degree, even for overall light intensities as low as those from a sun below the horizon at twilight. The method, they say, will work right up to the point where the first stars appear in the sky.

Showing all this in the lab is one thing, but can it be turned into a navigational instrument? Ropars, Le Floch and their coworkers have already made one. They call it the Viking Sunstone Compass.

It’s a rather beautiful wooden cylinder with a hole in the top, through which light falls from the zenith of the sky onto a calcite crystal attached to a rotating pivot turned by a little handle on the lid. There’s a gap in the side through which the observer looks at the two bright spots projected from the crystal. “You simply rotate the crystal to equalize the intensities of the beams”, says Ropars. A pointer on the lid then indicates the orientation of the crystal and the azimuth of the sun, from which north can be deduced by taking the time of day into account. Ropars says that, even though the Vikings of course lacked good chronometers, they seem to have known about sundials. What’s more, studies have shown that our internal body clocks (circadian rhythms) enable us to estimate the time of day to within about a quarter of an hour.


The Viking Sunstone Compass made by researchers at the University of Rennes. Note the double bright spots in the cavity.

But never mind Vikings – the Rennes team could probably make a mint by marketing these elegant devices as a luxury item for sailors. Ropars says that a US company is now hoping to commercialize the device based on their prototype.

All at sea

When the findings were reported, they spawned a flurry of excited news headlines, many claiming that the mysteries of Viking navigation had finally been solved. It’s not surprising, for the image of brawny Vikings making use of such a brainy method is irresistible. But what, in the end, did the experiments really tell us about history?

There’s nothing in principle that might have prevented the ancient Greeks from developing steam power or microscopes. We are sure that they didn’t because there is absolutely no evidence for it. So an experiment demonstrating that, say, ancient Greek glass-making methods allow one to make the little glass-bead microscope lenses used by Antoni van Leeuwenhoek in the seventeenth century is historically meaningless. What, then, can we conclude about Viking sunstones?

Because the Viking voyages between the ninth and eleventh centuries were so extensive – they sailed to the Caspian Sea, across the Mediterranean to Constantinople, and over the Atlantic to North America – there is a pile of archaeological and historical research on how on earth they did it. The prevailing view is that, in the Dark and Middle Ages, as much sailing as possible was done in sight of land, so that landmarks could guide the way. But of course you can’t cross the Atlantic that way. So if no land was in sight, sailors used environmental signposts: the stars (the Vikings knew how to find north from the Pole Star), the sun and moon, winds and ocean currents. They also relied on the oral reports of previous voyagers to know how long it should take to get to particular places.

What if none of these clues was available? What did they do if becalmed in the open sea on a cloudy day? Well, then they admitted that they were lost – as they put it, hafvilla, “wayward at sea”. The written records indicate that under such circumstances they would convene to discuss the problem, relying on the instincts of the most experienced sailors to set a course.

However, some archaeologists and historians, like Ramskou, have argued that they could also have used navigational instruments. The problem is that there is precious little evidence for it. The Scandinavian coast is dotted with Viking ship finds, some of them wrecks and others buried to hold the dead in graves. But not one has provided any artifacts that could be navigational tools. Nevertheless, the archaeological record is not entirely barren. In 1948 a Viking-age wooden half-disk carved with sun-like serrations was unearthed under the ruins of a monastery at Uunartoq in Greenland. It was interpreted by the archaeologist Carl Sølver as a navigational sundial, an idea endorsed by Ramskou in the 1960s. More recently another apparent wooden sundial was found at the Viking site on the island of Wolin, off the coast of Poland in the Baltic. A rectangular metal object inscribed in Latin, found at Canterbury and tentatively dated to the eleventh century, has also been interpreted as a sundial, while a tenth-century object from Menzlin in Germany might be a nautical weather-vane.



A Viking ship grave at Oseberg in Norway, and the Uunartoq Viking sundial.

So the “instrumental school” of Viking navigation has a few tenuous sources. But no sunstones. That hasn’t previously deterred the theory’s champions. One of them was Leif Karlsen, an amateur historian whose 2003 book Secrets of the Viking Navigators announced his convictions in its subtitle: “How the Vikings used their amazing sunstones and other techniques to cross the open ocean”. One problem with such a bold claim is that the sunstone hypothesis had already been carefully examined in 1975 by the archaeologist Uwe Schnall, who argued that not only is there no evidence for it but there is no clear need either. “Since then, to my knowledge, no research has contradicted this conclusion”, says Willem Mörzer Bruyns, a retired curator of navigation at the Netherlands Maritime Museum in Amsterdam.

In making his case, however, Karlsen presented a new exhibit. In 2002, just as his book was being completed, archaeologists discovered a calcite crystal in the remains of a shipwreck offshore from the Channel Island of Alderney. It has been made misty by centuries of immersion in seawater and abrasion by sand, but it still has the familiar rhombohedral shape. Finally, tangible proof that sailors carried sunstones! Well, not quite. Not only is it totally unknown why the crystal was on board, but the ship is from Elizabethan England, not the Viking age.


The Alderney “sunstone”.

All the same, Ropars and colleagues claim that it supports their theory that these crystals were used for navigation. They point out, for example, that it was found close to a pair of navigational dividers. But, says Bruyns, “navigational instruments were kept in the captain’s and officers’ quarters, where their non-navigational valuables were also stored.” Still, Bruyns is sympathetic to the idea that, rather than being a primary navigational device, the crystal might have been used to correct for compass errors caused by local magnetic disturbances (such as the proximity of iron cannons) – a correction made at that time by observing the sun’s position on the horizon as it rose or set. Ropars points out that birds recalibrate their magnetic sense in a similar way, using the polarization of sunlight at sunrise and sunset. “We’re now looking for possible mentions of sunstones in the historical Navy reports of the 15th and 16th centuries”, he says. But however intriguing that idea is, it has no bearing on a possible use of sunstones for navigation in the pre-compass era. “The Alderney finding is from a completely different period and culture to the Vikings”, Ropars acknowledges.

Finding the right questions

One way to view the latest work on sunstones is that it could at least have ruled out the hypothesis in principle. But don’t historians need a good reason to regard a hypothesis as plausible in the first place, before they worry about whether it is possible in practice? Otherwise there is surely no end to the options one would need to exclude. And there is the difficult issue of the documentary record. Much of what went on a millennium and more ago was never written down, and much of what was written is now lost. All the same, there is a rich literature, at least from the Middle Ages, on the techniques and skills of trades and professions, while early pioneers of optics such as Roger Bacon and Robert Grosseteste in the thirteenth century offer a fairly extensive summary of what was then known on the subject. It’s not easy to see how they would have neglected sunstones if these were widely used in navigation. Ropars notes that the Icelandic sagas are no longer the only textual source for sunstones: the Icelandic medieval historian Arni Einarsson pointed out in 2010 that sunstones are also mentioned in the inventory lists of some Icelandic monasteries in the fourteenth and fifteenth centuries, where they were apparently used as time-keeping aids for prayer sessions. But monks weren’t sailors.

The basic problem, says Salt, is that scientists dabbling in archaeology often try to answer questions that, from the point of view of history and anthropology, no one is asking. This has been a bugbear of the discipline of archaeoastronomy, for example, in which astronomers and others attempt to provide astronomical explanations of historical records of celestial events, such as darkenings of the sky or the appearance of new stars and other portents. Explanations for the Star of Bethlehem have been particularly popular, but here too Salt thinks it is hard to find any example of a historically interesting question being given a compelling answer [see, e.g., J. British Astron. Assoc. 114, 336; 2004]. One of the most celebrated examples, also revolving around optical physics, was the suggestion by the artist David Hockney and the physicist Charles Falco that Renaissance painters such as Jan van Eyck used a camera obscura to achieve their incredible realism. That theory is now generally discounted by art historians.

“‘Could the Vikings have used sunstones?’ is a different question to ‘Did the Vikings use sunstones?’, which is what most historians are interested in,” says Salt. “A paper that tackles a historical problem by pretty much ignoring the historical period your artefact comes from seems to me to be eccentric.” Ropars agrees that “experimental science can exclude historical hypotheses, but isn’t sufficient to validate them.” But he is optimistic about the value of collaborations between scientists and historians or archaeologists, when the historical facts are sufficiently clear for the scientists to develop a plausible model of what might have occurred.

Could it be, though, that we’re looking at the sunstone research from the wrong direction? One of its most attractive outcomes is not an answer to a historical question, but a rich mix of mineralogy, optics and human vision that has inspired the invention of a charming device which, using only methods and materials accessible to the ancient world, enables navigation under adverse conditions. It would be rather lovely if the modern “Viking Sunstone Compass” were to be used to cross the Atlantic in a reconstructed Viking ship, as was first done in 1893. It would prove nothing historically, but it would show how speculations about what might have been can stimulate human ingenuity. And maybe that’s enough.


The reconstructed Viking ship the Sea Stallion sets sail.

Further reading
J. B. Friedman & K. M. Figg (eds), Trade, Travel and Exploration in the Middle Ages: An Encyclopedia, from p. 441. Routledge, London, 2000.

A. Englert & A. Trakadas (eds), Wulfstan’s Voyage, from p.206. Viking Ship Museum, Roskilde, 2009.

G. Horváth et al., Phil. Trans. R. Soc. B 366, 772 (2011).

G. Ropars, G. Gorre, A. Le Floch, J. Enoch & V. Lakshminarayanan, Proc. R. Soc. A 468, 671 (2011).

A. Le Floch, G. Ropars, J. Lucas, S. Wright, T. Davenport, M. Corfield & M. Harrisson, Proc. R. Soc. A 469, 20120651 (2013).

G. Ropars, V. Lakshminarayanan & A. Le Floch, Contemp. Phys. 55, 302 (2014).

____________________________________________________________________

Note: A version of this article appears in New Scientist this week. A pdf of this article is available on my website here.

Wednesday, March 18, 2015

The graphene explosion


I haven’t found any reports of the opening of Cornelia Parker’s new solo show at the Whitworth in Manchester. Did the fireworks go off? Did the detonator work? Here, anyway, is what I wrote for Nature Materials before the event.

_______________________________________________________________________

If all has gone according to plan as this piece went to press, Manchester will have been showered with meteorites. An exhibition at the University of Manchester’s Whitworth art gallery by the artist Cornelia Parker is due to be opened on 13th February with a firework display in which pieces of meteoritic iron will be shot into the sky.

The pyrotechnics won’t be started simply by lighting the blue touchpaper. The conflagration will be triggered by a humidity sensor, switched by the breath of physicist Kostya Novoselov, whose work on graphene at Manchester University with Andre Geim won them both the 2010 physics Nobel prize. The sensor is itself made from graphene, obtained from flakes of graphite taken from drawings by William Blake, J. M. W. Turner, John Constable and Pablo Picasso as well as from a pencil-written letter by Ernest Rutherford, whose pioneering work on atomic structure was conducted at Manchester.

That graphene (oxide) can serve as an ultra-sensitive humidity sensor was reported by Bi et al. [1], and the approach has since been refined to give a very rapid response [2]. Adsorption of water onto the graphene oxide film alters its capacitance, providing a sensing mechanism when the film acts as an insulating layer between two electrodes. These sensors are now being developed by Nokia. The devices used for Parker’s show were provided by Novoselov’s group after the two of them were introduced by the Whitworth’s director Maria Balshaw. Novoselov extracted the graphite samples from artworks owned by the gallery, using tweezers under careful supervision.
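As a back-of-envelope illustration of the sensing principle (a toy parallel-plate model with made-up numbers, not the actual device used in the show), the readout amounts to watching for a jump in capacitance as adsorbed water raises the film’s effective permittivity:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance(eps_r, area_m2=1e-6, gap_m=1e-6):
    # Parallel-plate approximation for the sensing film sandwiched
    # between two electrodes. The area and gap here are illustrative.
    return EPS0 * eps_r * area_m2 / gap_m

def breath_triggers(eps_dry, eps_wet, threshold=1.5):
    # Fire the detonator when a humid reading (e.g. from a breath)
    # exceeds the dry baseline capacitance by 50%. The threshold is
    # an arbitrary choice for this sketch.
    return capacitance(eps_wet) / capacitance(eps_dry) >= threshold
```

In the parallel-plate limit the ratio of capacitances is just the ratio of permittivities, so even this crude trigger captures why water adsorption, which changes the dielectric constant strongly, makes such a sensitive switch.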

“I love the idea of working on a nano level”, Parker has said. “The idea of graphene, something so small, being a catalyst.” She is not simply talking figuratively: doped graphene has indeed been explored as an electrocatalyst for fuel cells [3,4].

Parker has a strong interest in interacting with science and scientists. In 1997 she produced a series of works for Nature examining unexpected objects in a quasi-scientific context [5]. Much of her work focuses on connotations of materiality: the associations arising from what things are made of, and the incongruity of materials repurposed or set out of place. Her installation Thirty Pieces of Silver (1988-9) used an assortment of silver objects such as instruments and cutlery flattened by a steamroller. She has worked with the red crepe paper left over from the manufacture of Remembrance Day poppies, with lead bullets and gold teeth extruded into wire, and with her own blood. Even her most famous work, Cold Dark Matter: An Exploded View (1991) – the reconvened fragments of an exploded shed – was perhaps stimulated as much by the allure of the “matter” as by the cosmological allusion.

“I like the garden shed aspect of scientists”, she has said, “the way they like playing about with materials.” Unusually for an artist, she seems more excited by the messy, ad hoc aspects of practical science – the kind of experimentation for which Rutherford was so renowned – than by grand, abstract ideas. The fact that Novoselov and Geim made some of their graphene samples using Scotch tape to strip away layers from graphite no doubt added to its appeal. Parker also recognizes that materials tell stories. There’s a good chance that both Blake and Rutherford would have used graphite from the plumbago mines of Borrowdale in Cumbria, about 80 miles north of Manchester and the source of the Keswick pencil industry. So even Parker’s graphene might be locally sourced.

1. Bi, H. et al., Sci. Rep. 3, 2714 (2013).
2. Borini, S. et al., ACS Nano 7, 11166-11173 (2013).
3. Geng, D. et al., Energy Environ. Sci. 4, 760-764 (2011).
4. Fei, H. et al., ACS Nano 8, 10837-10843 (2014).
5. Anon., Nature 389, 335, 548, 668 (1997).

Friday, March 06, 2015

Alchemy on the page

Here’s an extended version of my article in Chemistry World on the "Books of Secrets" exhibition currently at the Chemical Heritage Foundation in Philadelphia.

___________________________________________________________________

You think your chemistry textbook can be hard to follow sometimes? Consider what a student of chemistry might have faced in the early seventeenth century:
“Antimony is the true bath of gold. Philosophers call it the examiner and the stilanx. Poets say that in this bath Vulcan washed Phoebus, and purified him from all dirt and imperfection. It is produced from the purest Mercury and Sulphur, under the genus of vitriol, in metallic form and brightness. Some philosophers call it the White Lead of the Wise Men, or simply the Lead…”

This is a small part of the description in The Aurora of the Philosophers, a book attributed to the sixteenth-century Swiss alchemist and physician Paracelsus (1493-1541), for making the “arcanum of Antimony”, apparently a component of the “Red Tincture” or philosopher’s stone, which could transmute base metals into gold. It is, Paracelsus averred, a “very red oil, like the colour of a ruby… with a most fragrant smell and a very sweet taste” (which you could discover at some peril). The book contains very detailed instructions for how to make this stuff – provided that you know what “aquafortis”, “crocus of Mars” and “calcined tutia” are, and that you take care to control the heat of the furnace, in case (the author warns) your glass vessels and perhaps even the furnace itself should shatter.

All this fits the image of the alchemist depicted by Pieter Bruegel the Elder in a print of around 1558, which shows a laboratory in turmoil, littered with paraphernalia and smoky from the fire, where a savant works urgently to make gold while his household descends into disarray all around him. Bruegel’s engraving set the tone for pictures of alchemists at work over the next two centuries or so, in which they were often shown as figures of fun, engaged on a fool’s quest and totally out of touch with the real world.


Pieter Bruegel the Elder, The Alchemist (c.1558)

But that caricature doesn’t quite stand up to scrutiny. For one thing, despite all its arcane language that only fellow adepts would understand, Paracelsus’s experimental procedure is in fact quite carefully recorded: it’s not so different, once you grasp the chemical names and techniques, from something you’d find in textbooks of chemistry four centuries later. The aim – transmutation of metals – might seem misguided from this distance, but there’s nothing so crazy about the methods.

Second, the frenzied experimentation in Bruegel’s picture, in which the deluded alchemist commits his last penny to the crucible, is being directed by a scholar who sits at the back reading a book. (The text is, however, satirical: the scholar points to the words “Alge mist”, a pun on “alchemist” meaning all is failed, and we see the alchemist’s future in the window as he leads his family to the poorhouse.)

Books are ubiquitous in paintings of alchemists, which became a genre in their own right in the seventeenth century. Very often the alchemist is shown consulting a text, and even when he is working the bellows and experimenting himself, a book stands open in front of him. Sometimes it’s the act of reading, rather than experimenting, that supplies the satire: in a painting by the Dutch artist Mattheus van Helmont (no relation, apparently, to the famous chemist Jan Baptista van Helmont), papers tumble from the desk to litter the floor in ridiculous excess. “The use of books and texts in alchemical practice may not be discussed frequently, but it becomes obvious when looking at the actual manuscripts used by alchemists and at the multitude of paintings that depict them”, says Amanda Shields, curator of fine art at the Chemical Heritage Foundation (CHF) in Philadelphia.


After David Teniers the Younger, Alchemist with Book and Crucible (c.1630s)


Mattheus van Helmont, The Alchemist (17th century)

The complex relationship of alchemists to their books is explored in a current exhibition at the CHF called "Books of Secrets: Writing and Reading Alchemy". It was motivated by the Foundation’s recent acquisition of a collection of 12 alchemical manuscripts, mostly from the fifteenth century. They were bought from a dealer after having been auctioned by the Bibliotheca Philosophica Hermetica, a private collection of esoteric books based in Amsterdam and funded by the Dutch businessman Joost Ritman. Among the new acquisitions was one of just six existing complete copies of the highly influential Pretiosa margarita novella (Precious New Pearl) supposedly by the fourteenth-century Italian alchemist Petrus Bonus. The CHF already possessed one of the most substantial collections of paintings of alchemists in the world, mostly from the seventeenth to the nineteenth centuries, and while being keenly aware of the difference between the dates of the books and the paintings, Shields and the CHF’s curator of rare books James Voelkel saw an opportunity to use these two resources to explore what books meant for the alchemists and early chemists: who wrote them, who they were intended for, who actually bought them, and how they were read.

Telling secrets

Of course, there weren’t really any students of chemistry in the early seventeenth century. That discipline didn’t exist for at least another hundred years, and its emergence from alchemy was convoluted and disputed. Arguably the first real textbook of chemistry was the Traicté de la chymie, published by the Frenchman Nicaise Lefebvre in 1660; he would have been identified by the transitional terms chymist or iatrochemist, the latter indicating the use of chemistry in medicine. Alchemy was still very much in the air throughout the seventeenth century: both Robert Boyle and Isaac Newton devoted a great deal of effort to discovering the philosopher’s stone, and neither of them doubted that the transmutation of metals was possible. But it wasn’t by any means all about making gold. In the sixteenth century just about any chemical manipulation – whether to make medicines, pigments and dyes, or simple household substances such as soap – would have been regarded as a kind of alchemy.

This is why the whole notion of an “alchemical literature” is ambiguous. Some writers, such as the late sixteenth-century physician Michael Maier, who directed alchemical experiments in the court of the Holy Roman Emperor Rudolf II in Prague, wrote about the subject in mystical and highly allegorical terms that would have been opaque to a craftsperson. Others, such as the Saxon Georg Bauer (known as Agricola), wrote highly practical manuals such as his treatise on mining and metallurgy, De re metallica (1556). Paracelsus’s works, which became popular in the late sixteenth century (he died in 1541), were a mixture of abstruse “chemical philosophy” and straightforward recipes for making drugs and medicines. And aside from such intellectual writers both inside and outside the universities, during the Renaissance there arose a sometimes lucrative tradition of “how to” manuals known as Kunstbüchlein, which were hotch-potch collections of recipes from all manner of sources, including classical encyclopaedists such as Pliny and ill-reputed medieval books of magic. These often styled themselves as “books of secrets”, which of course made them sound very alluring – but often they were miscellanies more likely to give you a mundane recipe for curing toothache than the secret of how to turn lead into gold.

In other words, “secrets” weren’t necessarily about forbidden knowledge at all. According to historian of science William Eamon of New Mexico State University in Las Cruces, “the term was used to describe both trade secrets, in the sense of being concealed, and also ‘tricks of the trades’, in other words techniques.” Eamon adds that the word “secrets” also “carried a lot of weight owing to the medieval tradition of esoteric knowledge”, which remained prominent in the alchemical tradition of the Renaissance. This glamour meant that the term could be useful for selling books. But how could you allude to secrets while writing them down for all the world to read? Some writers argued that there was virtually a moral imperative to do so. In his introduction to the hugely popular Kunstbüchlein titled simply Secreti (1555), Alessio Piemontese (a pseudonym, probably for the Italian writer Girolamo Ruscelli) told an elaborate and perhaps concocted story of how, by withholding secrets from a physician, he had once been responsible for the death of the physician’s patient.

This tradition of compilations of “secrets” was an old one. The historian of experimental science Lynn Thorndike has suggested that “the most popular book in the Middle Ages” might have been a volume called the Secretum secretorum or “Secret of secrets” (how much more enticing a title could you get?), which has obscure origins probably in the Islamic literature from around the tenth century. It was often attributed to Aristotle, but it’s pretty certain that he never wrote it – as with so many medieval books, the association with a famous name is just a selling point. The book does, however, reflect the Islamic writers’ enthusiasm for Aristotle, and as well as alchemy it includes sections on medicine, astrology, numerology, magic and much else. It was a kind of pocketbook of all that the scholar might want to know – in the words of one historian, a “middle-brow classic for the layman.”

But even if some of these “secrets” seemed hardly worth keeping, alchemy was different – for it really could seem dangerous. If it was possible to make gold, what would that do to the currency and the economy? It was largely this kind of worry, rather than any perception that alchemy was wrong-headed, that gave it a bad reputation. In 1317 Pope John XXII made alchemy illegal and imposed harsh sentences on anyone found guilty of trying to make gold. There was, however, also concern – some of it justified – that alchemists were swindlers who were duping people with fake gold. The image of the alchemist as a trickster who blinded gullible clients with incomprehensible jargon was crystallized in Ben Jonson’s 1610 play The Alchemist, in which his wily charlatan Subtle is a figure of fun. What’s more, alchemy was often associated with religious non-conformism. Paracelsus was unorthodox enough to upset all parties during the Reformation, but he was often linked to the Protestant cause and was sometimes called the “Luther of medicine.” When the French iatrochemists, who adopted Paracelsian ideas, battled with the medical traditionalists in the royal court at the end of the sixteenth century, the dispute was as much about religion – Catholics versus French Protestants (Huguenots) – as it was about medicine.

In view of all this, the genuine alchemist had to tread carefully until at least the seventeenth century. He was vulnerable to suspicion, ridicule and condemnation. That’s one reason why alchemical texts were often written with “intentional obscurity”, according to Voelkel. If you wrote cryptically, you could always argue your way out of accusations that you’d said something heretical or illegal. But the alchemical writers also felt that their knowledge held real power and so should be made unintelligible to lay people. A third motivation will be familiar to anyone who has ever read postmodernist academics: if you wrote too plainly, people might think that what you were saying is trivial, whereas if it was hard to understand then it seems profound and mysterious. Even if the recipes were straightforward, you wouldn’t get far without knowing the “code names” (Decknamen) for chemical substances: that “stinking spirit” is sulphur, and the “grey wolf” or “sordid whore” is stibnite (antimony sulphide), say.

Probably all of these motives for concealment and obfuscation were important to some degree, says Eamon – but he suspects that the major factor in the recondite character of many alchemical books was “to enhance the status and mystery of the work.” Also, he adds, “one shouldn’t underestimate the sheer inertia of tradition: secrecy was a very ancient tradition and always connected with that idea of initiation. Its hold over alchemy was strong even after there was little need for it.” Even Robert Boyle, whose The Sceptical Chymist has often been misinterpreted as a dismissal of all of alchemy rather than just its mystical and cryptic excesses, “employed elaborate coding devices to conceal his recipes”, Eamon says – especially those involved in gold-making. Despite insisting that adepts should be less obscure and cagey, Boyle wasn’t averse to it himself. “He may simply have been protecting his reputation”, says Eamon: he didn’t want to be associated with an art many regarded as foolish. Isaac Newton, whose notebooks attest to extensive alchemical experimentation, was similarly guarded about that work.

The alchemist’s library

Given the diversity of sources, what would an alchemist have had in his library? The answer would depend somewhat on the kind of alchemy (or chymistry) they did, says Eamon. “The more practically inclined alchemists would probably have owned few books,” he says, “and they would probably have been heavy on books on metallurgy such as Agricola’s De re metallica and works such as the Kunstbüchlein.” Alchemists who were more interested in gold-making and the more esoteric mysteries of the art “would have been drawn to works such as those of [the pseudonymous] Basil Valentine, one of the more celebrated chemists of the period, such as The Triumphal Chariot of Antimony.” The medieval texts attributed to the Arabic writer Jabir ibn Hayyan (Latinized to Geber) would also have been popular among this sort of alchemist, Eamon adds.

Alchemists who wrote about distillation, such as the Frenchman John of Rupescissa and authors who wrote under the name of the Spanish philosopher Ramon Llull, were popular in the sixteenth century, especially for alchemists mainly interested in medicine. “Works by Paracelsus and his followers would also be represented in the chymist’s library”, says Eamon. “For many alchemists, books of secrets would also have been quite useful, of which the most popular was Alessio Piemontese’s Secreti.”

The English writer John Evelyn claimed of Robert Boyle that he learnt “more from men, real experiments, & in his laboratory… than from books”. But in fact Boyle had a very large library that included many alchemical works. “Unfortunately the library was dispersed after Boyle’s death and no library catalogue exists,” says Eamon, “but historians have been able to identify several of his books from his notes.” These included, for example, Agricola’s De re metallica and works by Johann Glauber, Paracelsus and Daniel Sennert. Newton’s library is much better catalogued, and included well-used copies of Paracelsus’s On the Transmutation of Metals and an English translation of Novum lumen chymicum by the Moravian Paracelsian alchemist Michael Sendivogius.

A dialogue in the lab

The CHF exhibition shows that such alchemical books weren’t at all treated like sacred texts. While they were still hand-copied these books could cost a fortune, but that didn’t mean they were kept in pristine form. They are well thumbed and evidently much used, sometimes showing signs of a benchtop life just as the later paintings imply. One book, a collection of recipes from Italian and English sources dated around 1470-75, has pages begrimed with what looks like soot. When the conservator used by the CHF, Rebecca Smyrl at the Conservation Center for Art and Historic Artifacts in Philadelphia, offered to remove the offending substance, Voelkel implored her not to, for he figured that this might be the debris from an actual experiment.


Cooked in the furnace: are these soot stains in a fifteenth-century alchemical text the debris from use in the lab?

What’s more, the readers scribbled all over the pages. Since paper itself was expensive, you might as well use the original text as your notebook, and margins were left deliberately generous to accommodate the annotations. In a copy of Christophorus Parisiensis’ Opera from 1557 there is not a square centimeter wasted, and the notes are recorded in a neat hand almost too tiny to read without magnification. Readers didn’t just mine the book for information: they engaged in a dialogue with the author, making corrections or arguing about interpretations. “There was a real conversation going on”, says Erin McLeary, director of the CHF museum. These markings attest that the books were anything but status symbols to be filed away ostentatiously on the shelf. “Reading was a huge part of alchemical practice”, says Voelkel.


The pages of a sixteenth-century alchemical book with marginal notes from a reader.

The CHF’s newly acquired manuscripts are particularly revealing because they date from the moment when print culture was emerging. The printing press lowered the financial and practical barriers to book ownership. “It made alchemical books widely available and relatively affordable”, says Eamon. “You can already see the decline of the notion of books as luxury items in the early sixteenth century.” Printing enabled the Kunstbüchlein artisan’s manuals to become bestsellers in the early sixteenth century: “they were cheaply printed, widely translated, and produced in large numbers”, says Eamon. Alessio Piemontese’s Secreti went through over 100 editions, and its likely author Ruscelli seems to have been something of a hack (the polite term was poligrafo) churning out whatever his publisher demanded. Print culture drove the trend of writing books in vernacular languages rather than Latin (which many potential buyers couldn’t read), and this opening up of new audiences was exploited as much by religious dissenters – Martin Luther was one of the first to spot the possibilities – as by publishers of scientific tracts, such as the Aldine Press of the Venetian humanist Aldus Pius Manutius.

The transition is fascinating to see in the CHF’s books. The early typefaces were designed to look like handwritten text, and some of the abbreviations used by scribes, such as the ampersand (&), were carried over to print – in this case with the origin as a stylized Latin et still evident. Some early printed books left a space at the start of chapters for the ornate initial capital letters to be added by hand. Quite often, the owners decided to save on the expense, so that the chapters begin with a blank.

As time passed and alchemy turned into chymistry and then chemistry, the image of the alchemist recorded by the painters became more tolerant and less satirical. In the hands of one of the most prolific and influential artists of this genre, the Antwerp-born David Teniers the Younger (1610-1690), the alchemist is less Bruegel’s foolish agent of chaos and more a sober laboratory worker. If his floor is still strewn with vessels of brass, glass and clay, that’s simply because it allows Teniers to show off his skill at painting textures. In The Village Chemist (1760) by Justus Juncker, the physician sits calmly taking notes in his well-lit study-workshop; François-Marius Granet’s The Alchemist (early 19th century) shows a sober, monk-like figure in a spacious, sparsely furnished chamber; and Charles Meer Webb’s The Search for the Alchemical Formula (1858) makes the alchemist a romanticized, Gothic savant.

But what are they all doing? Reading (and writing). The text was always there.


François-Marius Granet, The Alchemist (early 19th century)


Charles Meer Webb, The Search for the Alchemical Formula (1858)

Further reading
W. Eamon, Science and the Secrets of Nature (Princeton University Press, 1996).
L. M. Principe & L. DeWitt, Transmutations: Alchemy in Art (Chemical Heritage Foundation, 2002).
L. M. Principe, The Aspiring Adept (Princeton University Press, 2000).

Friday, February 27, 2015

Mitochondria: who mentioned God?

Oh, they used the G word. The Guardian put “playing God” in the headline of my article today on mitochondrial replacement, and now everyone on the comments thread starts ranting about God. I’m not sure God has had much to say in this debate so far, and it’s a shame to bring him in now. But for the sake of the record, I’ll just add here what I said about this phrase in my book Unnatural. I hope that some of the people talking about naturalness and about concepts of the soul in relation to embryos might be able to take a peek at that book too. So here’s the extract:

“Time and again, the warning sounded by the theocon agenda is that by intervening in procreation we are ‘playing God’. Paul Ramsey made artful play of this notion in his 1970 book Fabricated Man, saying that ‘Men ought not to play God before they learn to be men, and after they have learned to be men they will not play God.’ To the extent that ‘playing God’ is simply a modern synonym for the accusation of hubris, this charge against anthropoeia is clearly very ancient. Like evocations of Frankenstein, the phrase ‘playing God’ is now no more than lazy, clichéd – and secular – shorthand, a way of expressing the vague threat that ‘you’ll be sorry’. It is telling that this notion of the man-making man becoming a god was introduced into the Frankenstein story not by Mary Shelley but by Hollywood. For ‘playing God’ was never itself a serious accusation levelled at the anthropoetic technologists of old – one could tempt God, offend him, trespass on his territory, but it would have been heretical seriously to entertain the idea that a person could be a god. As theologian Ted Peters has pointed out,
“The phrase ‘playing God’ has very little cognitive value when looked at from the perspective of a theologian. Its primary role is that of a warning, such as the word ‘stop’. In common parlance it has come to mean just that: stop.”

And yet, Peters adds, ‘although the phrase ‘playing God’ is foreign to theologians and is not likely to appear in a theological glossary, some religious spokespersons employ the idea when referring to genetics.’ It has, in fact, an analogous cognitive role to the word ‘unnatural’: it is a moral judgement that draws strength from hidden reservoirs while relying on these to remain out of sight.”

OK, there you go. Now here’s the pre-edited article.

____________________________________________________________________

It was always going to be a controversial technique. Sure, conceiving babies this way could alleviate suffering, but as a Tory peer warned in the Lords debate, “without safeguards and serious study of safeguards, the new technique could imperil the dignity of the human race, threaten the welfare of children, and destroy the sanctity of family life.” Because it involved the destruction of embryos, the Catholic Church inevitably opposed it. Some scientists warned of the dangers of producing “abnormal babies”, there were comparisons with the thalidomide catastrophe and suggestions that the progeny would be infertile. Might this not be just the beginning of a slippery slope towards a “Frankenstein future” of designer babies?

I’m not talking about mitochondrial replacement and so-called “three person babies”, but about the early days of IVF in the 1970s and 80s, when governments dithered about how to deal with this new reproductive technology. Today, with more than five million people having been conceived by IVF, the term “test-tube baby” seems archaic if not a little perverse (not least because test tubes were never involved). What that debate about assisted conception led to was not the breakup of the family and the birth of babies with deformities, but the formation of the Human Fertilisation and Embryology Authority (HFEA) under the Human Fertilisation and Embryology Act of 1990, providing a clear regulatory framework in the UK for research involving human embryos.

It would be unscientific to argue that, because things turned out fine on that occasion, they will inevitably do so for mitochondrial replacement. No one can be wholly certain what the biological consequences of this technique will be, which is why the HFEA will grant licenses to use it only on the carefully worded condition that they are deemed “not unsafe”. But the parallels in the tone of the debate then and now are a reminder of the deep-rooted fears that technological intervention in procreation seems to awaken.

Scientists supportive of such innovations often complain that the opponents are motivated by ignorance and prejudice. They are right to conclude that public engagement is important – in a poll on artificial insemination in 1969, the proportion of people who approved almost doubled when they were informed about the prospects for treating infertility rather than just being given a technical account. But they shouldn’t suppose that science will banish these misgivings. They resurface every time there is a significant advance in reproductive technology: with pre-implantation genetic diagnosis, with the ICSI variant of IVF and so on. They will undoubtedly do so again.

In all these cases, much of the opposition came from people with a strong religious faith. As one of the versions of mitochondrial replacement involves the destruction of embryos, it was bound to fall foul of Catholic doctrine. But rather little was made of that elsewhere, perhaps an acknowledgement that in terms of UK regulation that battle was lost some time ago. (In Italy and the US, say, it is a very different story.) The Archbishops’ Council of the Church of England, for example, stressed that it was worried about the safety and ethical aspects of the technique: the Bishop of Swindon and the C of E’s national adviser for medical ethics warned of “unknown interactions between the DNA in the mitochondria and the DNA in the nucleus [that] might potentially cause abnormality or be found to influence significant personal qualities or characteristics.” Safety is of course paramount in the decision, but the scientific assessments have naturally given it a great deal of attention already.

Lord Deben, who led opposition to the bill in the Lords, addressed this matter head on by denying that his Catholicism had anything to do with it. “I hope no one will say that I am putting this case for any reason other than the one that I put forward,” he said. We can take it on trust that this is what he believes, while finding it surprising that the clear and compelling responses to some of his concerns offered by scientific peers such as Matt Ridley and Robert Winston left him unmoved.

Can it really be coincidental, though, that many of the peers speaking against the bill are known to have strong religious convictions? Certainly, there are secular voices opposing the technology too, in particular campaigners against genetic manipulations in general such as Marcy Darnovsky of the Center for Genetics and Society, who responded to the ongoing deliberations of the US Food and Drug Administration over mitochondrial transfer not only by flagging up alleged safety issues but also by insisting that we consider babies conceived this way to be “genetically modified”, and warning of “mission creep” and “high-tech eugenics”. “How far will we go in our efforts to engineer humans?” she asked in the New York Times.

Parallels between these objections from religious and secular quarters suggest that they reflect a deeper and largely unarticulated sense of unease. We are unlikely to progress beyond a polarization between technological boosters on one side and conservative Luddites and theologians on the other unless we can get to the core of the matter – which is evidently not scriptural, the Bible being somewhat silent about biotechnological ethics.

Bioethicist Leon Kass, who led the George W. Bush administration’s Council on Bioethics when in 2001 it blocked public funding of most stem-cell research, has argued that instinctive disquiet about some advances in assisted conception and human biotechnology is “the emotional expression of deep wisdom, beyond reason’s power fully to articulate it”: an idea he calls the wisdom of repugnance. “Shallow are the souls”, he says, “that have forgotten how to shudder.” I strongly suspect that, beneath many of the arguments about the safety and legality of mitochondrial replacement lies an instinctive repugnance that is beyond reason’s power to articulate.

The problem, of course, is that what one person recoils from, another sees as a valuable opportunity for human well-being. Yet what are these feelings really about?

Like many of our subconscious fears, they are revealed in the stories we tell. Disquiet at the artificial intervention in procreation goes back a long way: to the tales of Prometheus, of the medieval homunculus and golem, and then to Goethe’s Faust and Shelley’s Victor Frankenstein, E.T.A. Hoffmann’s automaton Olympia, the Hatcheries of Brave New World, modern stories of clones and Ex Machina’s Ava. On the surface these stories seem to interrogate humankind’s hubris in trying to do God’s work; so often they turn out on closer inspection to explore more intimate questions of, say, parenthood and identity. They do the universal job of myth, creating an “other” not as a cautionary warning but in order more safely to examine ourselves. So, for example, when we hear that a man raising a daughter cloned from his wife’s cells (not, I admit, an unproblematic scenario) will be irresistibly attracted to her, we are really hearing about our own horror of incestuous fantasies. Only in Hollywood does Frankenstein’s monster turn bad because he is tainted from the outset by his origins; for Shelley, it is a failure of parenting.

I don’t think it is reading too much into the “three-parent baby” label to see it as a reflection of the same anxieties. Many children already have three effective parents, or more - through step-parents, same-sex relationships, adoption and so forth. When applied to mitochondrial transfer, this term shows how strongly personhood has now become equated with genetics, and indicates to geneticists that they have some work to do to move the public on from the strictly deterministic view of genetics that the early rhetoric of the field unwittingly fostered.

We can feel justifiably proud that the UK has been the first country to grapple with the issues raised by this new technology. It has shown already that embracing reproductive technologies can be the exact opposite of a slippery slope: what IVF led to was not a Brave New World of designer babies, but a clear regulatory framework that is capable of being permissive and casuistic, not bound by outmoded principles. The UK is not alone in declining to prohibit the technique, but it is right to have made that decision actively.

It is also right that that decision canvassed a wide range of opinions. Some scientists have questioned why religious leaders should be granted any special status in pronouncing on ethics. But the most thoughtful of them often turn out to have a subtle and humane moral sensibility of the kind that faith should require. There is a well-developed strand of philosophical thought on the moral authority of nature, and theology is a part of it. But on questions like this, we have a responsibility to examine our own responses as honestly as we can.

Monday, February 23, 2015

Why dogs aren't enough in Many Worlds

I'm very glad some folks are finding this exchange on Many Worlds instructive. That was really all I wanted: to get a proper discussion of these issues going. The tone that Sean Carroll found “snide and aggressive” was intended as polemical: it’s just a rhetorical style, you know? What I certainly wanted to avoid (forgive me if I didn’t) was any name-calling or implications of stupidity, fraud, chicanery etc. (It doesn’t surprise me that some of the responses failed to do the same.) My experience has been that it is necessary to light a fire under the MWI in order to get a response at all. Indeed, even then it is proving very difficult to keep the feedback to the point and not get led astray by red herrings. For example, Sean made a big point of saying:
“The people who object to MWI because of all those unobservable worlds aren’t really objecting to MWI at all; they just don’t like and/or understand quantum mechanics.”
I’m genuinely unsure if this is supposed to be referring to me. Since I said in my article
“Certainly, to say that the world(s) surely can’t be that weird is no objection at all”
then I kind of assume it isn’t – so I’m not sure why he brings the point up. I even went to the trouble of trying explicitly to ward off attempts to dismiss my arguments that way:
“Many Worlders harp on about this complaint precisely because it is so easily dismissed.”
Puzzling.

But what Sean said next seems to get (albeit obliquely) to the heart of the matter:
“Hilbert space is big, regardless of one’s personal feelings on the matter.”

Whatever these arguments are about, they are surely not about what Hilbert space looks like, since Hilbert space is a mathematical construct – that is simply true by definition, and there is no argument about it. The argument is about what ontological status we ascribe to the state vectors that appear in Hilbert space. I do see the MW reasoning here: the reality we currently experience corresponds to a state vector in Hilbert space, and so why do we have any grounds for denying reality to the other states into which it can evolve by smooth unitary transformation? The problem, of course, is that a single state in quantum mechanics can evolve into multiple states. Yet if we are going to exclude any of those from having objective reality, we surely must have some criterion for doing so. Absent that, we have the MWI. I do understand that reasoning.
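That point about a single state evolving into a superposition can be made concrete with a toy two-state system. This sketch is mine, not part of the argument above; numpy and the Hadamard transformation are simply convenient choices for illustration:

```python
import numpy as np

# A state vector in a two-dimensional Hilbert space, initially in basis state |0>.
psi = np.array([1.0, 0.0], dtype=complex)

# A unitary transformation (here the Hadamard matrix) evolves that single
# state smoothly into a superposition of two basis states.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
psi_out = H @ psi

print(np.round(psi_out.real, 3))                 # [0.707 0.707]

# Unitarity preserves the norm, so the formalism itself offers no grounds
# for privileging one component of the superposition over the other.
print(np.isclose(np.linalg.norm(psi_out), 1.0))  # True
```

What ontological status to grant each component of psi_out is, of course, exactly what the argument is about; the mathematics itself is silent on it.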

So it seems that the arguments could be put like this: is it an additional axiom to say “All states in Hilbert space accessible from an initial one that describes our real world are also describing real worlds” – or is it not? To objectors, it is, and a very expensive one at that. To MWers, it is merely what we do for all theories. “Give us one good reason why it shouldn’t apply here”, they say.

It’s a fair point. One objection, which has nothing whatsoever to do with the vastness of Hilbert space, is to say, well, no one has seriously posited such a vast number of multiple and in some sense (initially) “parallel” worlds before, and don’t we in science say that extraordinary claims require extraordinary evidence?* Might we not ask you to work a bit harder in this particular case to establish the relationship between what the formalism says and what exists in physical reality? After all, whether or not we grant all accessible states in Hilbert space a physical reality, we seem to get identical observational consequences. So right now, the only way we can choose between them is philosophically. And we don’t usually regard philosophy as the final arbiter in science.

___________________________________________
*For example, Sean emphasizes that the many worlds are a prediction, not a postulate of the theory. But most other theories (all others?) can also tell us specific things that they do not predict will happen. I’m not clear that the MWI can rule out any particular thing actually coming to pass that is consistent with the laws of physics. The Copenhagen interpretation (just to take an example) can exclude the “prediction” that human life came to an end following a nuclear conflict sparked by the Bay of Pigs incident. Correct me if I am wrong, but the MWI cannot rule out this “prediction”. It cannot rule out the “prediction” that Many Worlders were never bothered by this irritating science writer. Even if MWI does not exactly say “everything happens”, can it tell us there is anything in particular (consistent with the laws of physics) that does not?
____________________________________________

So up to this point, I can appreciate both points of view. What makes me uncomfortable is that the MWers seem so determined to pretend that what they are telling us is actually not so remarkable after all. What’s so surprising, they ask, about the idea that you can instantly duplicate a consciousness, again and again and again? What is frustrating is the blithe insistence that we should believe this, I suspect the most extraordinary claim that science has ever made, on the basis simply of Occam’s (fallible) razor. This is not, do please note, at all the same as worrying about “too many worlds”.

Still, who cares about my discomfort, right? But I wanted to suggest that it’s not just a matter of whether we are prepared to accept this extraordinary possibility. We need to acknowledge that it is rather more complicated than coming to terms with a cute gaggle of sci-fi Doppelgängers. This is not about whether or not people are “all that different from atoms”. It is about whether what people say can be ascribed a coherent meaning. Those responses that have acknowledged this point at all have tended to say “Oh who cares about selfhood and agency? How absurd to expect the theory to deal with unplumbed mysteries like that!” To which I would say that interpretations of quantum theory that don’t have multiple physical worlds don’t even have to think about dealing with them. So perhaps even that Occam’s razor argument is more complicated than you think.

It’s been instructive to see that the MWI is something of a hydra: there are several versions, or at least several views on it. Some say that the “worlds” bit is itself a red herring, a bit of gratuitous sci-fi that we could do without. Others insist that the worlds must be actual: Sean says that people must be copied, and that only makes any kind of sense if the world is copied around them. Some say that invoking problems with personhood is irrelevant since Many Worlds would be true anyway even without people in it. (The inconvenience with this argument is that there are people in it.) Sean, interestingly, says that copying people is not only real but essential, “for deriving the Born rule” in MWI. This is a pointer to his fascinating paper on “self-locating uncertainty”. Here he and Charles Sebens point out that, in the MWI where branch states are rendered distinct and non-interacting by decoherence, the finite time required for an observer to register which branch she is on means that there is a tiny but inescapable interval during which she exists as two identical copies but doesn’t know which one she is. In this case, Carroll and Sebens argue, the rational way to “apportion credence to the different possibilities” is to use the Born rule, which allows us to calculate from the wavefunction the likelihood of finding a particular result when we make a measurement. This, they say, is why probability seems to come into the situation at all, given that the MWI says that everything that can happen does happen with 100% probability.
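For readers unfamiliar with it, the Born rule itself is a one-line computation: given the complex amplitudes c_i of a normalized state over the possible measurement outcomes, the probability of outcome i is |c_i|². A minimal sketch of my own (the amplitudes are invented purely for illustration):

```python
import numpy as np

# Hypothetical amplitudes of a normalized state over two measurement outcomes.
c = np.array([0.6, 0.8j])

# Born rule: the probability of outcome i is the squared modulus |c_i|^2.
probs = np.abs(c) ** 2
print(probs)                              # [0.36 0.64]

# The probabilities of a normalized state sum to one.
print(np.isclose(probs.sum(), 1.0))       # True
```

The Sebens–Carroll argument is about why this particular function of the amplitudes, rather than some other, is the rational way to apportion credence.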

This sounds completely bizarre: a rule of quantum physics works because of us? But I think I can see how it makes sense. The universe doesn’t care about the Born rule: it’s not forever calculating “probabilities”. Rather, the Born rule is only needed in our mathematical theory of quantum phenomena – and this argument offers an explanation of why it works when it is put there. Now, there is a bit of heavy pulling still to do in order to get from a “rational way to make predictions while we are caught in that brief instant after the universe has split but before we have been able to determine which branch we are in” to a component of the theory that we use routinely even while we are not agreed that this situation arises in the first place. I’m still not clear how that bit works. Neither is it fully clear to me how we are ever really in that limbo between the universe splitting and us knowing which branch we took, given that, in one view of the Many Worlds at least, the universe has split countless times again during that interval. Maybe the answer would be that all those subsequent splits produce versions that are identical with respect to the initial “experiment”, unless they involve processes that interact with the “experiment” and so are part of it anyway. I don’t know.

I do think I can see the answer to my question to Sean (not meant flippantly) of whether it has to be humans who split in order to get the Born rule, and not merely dogs. The answer, I think, is that dogs won’t do because dogs don’t do quantum mechanics. What seems weird is that we’re then left with an aspect of quantum theory that, in this argument, is the way it is not because of some fundamental underlying physical reason so much as because we asked the question in the first place. It feels a bit like Einstein’s moon: was the Born rule true before we invented quantum theory? Or to put it another way, how is consciousness having this agency without appearing explicitly anywhere in the theory? I’m not advancing these as critiques, just saying it seems odd. I’m happy to believe that, within the MWI, the logic of this derivation of the Born rule is sound.

But doesn’t that mean that deriving the Born rule, a longstanding problem in QM, is evidence for the MWI? Sadly not. There are purported derivations within the other interpretations too. None is universally accepted.

The wider point is that, if this is Sean’s reason for insisting we include dividing people in MWI, then the questions about identity raised in my article stand. You know, perhaps they really are trivial? But no one seems to want to say why. This refusal to confront the apparent logical absurdities and contradictions of a theory which predicts that “everything” really happens is curious. It feels as though the MWers find something improper about it – as though this is not quite the respectable business for a physicist who should be contemplating rates of decoherence and the emergence of pointer states and so on. But if you insist on a theory like this, you’re stuck with all its implications – unless, that is, you have some means of “disappearing worlds” that scramble the ability to make meaningful statements about anything.

Saturday, February 21, 2015

Many Worlds: can we make a deal?

OK, picking up from my last post, I think I see a way whereby we can leave this. Advocates of the Many Worlds Interpretation will agree that it does not pretend to say anything about humans and stuff, and that expecting it to do so is as absurd as expecting someone to write down and solve the Schrödinger equation for a football game. They will agree that all those popular (and sometimes technical) books and articles telling us about our alternative quantum selves and Many-Worlds morality and so forth are just the wilder speculative fringes of the theory that struggle with problems of logical coherence. They agree that statements like DeWitt’s that “every quantum transition taking place on every star, in every galaxy, in every remote corner of the universe is splitting our local world on earth into myriads of copies” aren’t actually what the theory says at all. They acknowledge a bit more clearly that the Alices and Bobs in their papers are just representations of devices that can make an observation (yes, I know this is all they have ever been intended as anyway). They agree that when they say “The world is described by a quantum state”, they are using “world” in quite a special sense that makes no particular claims about our place(s) or even our existence(s) in it*. They admit that if one tries to broaden this sense of “world”, some difficult conundrums arise. They admit that the mathematical and ontological statuses of these “worlds” are not the same thing, and that the difference is not resolved by saying that the “worlds” are “really” there in Hilbert space, waiting to be realized.

Then – then – I’m happy to say, sure, the Many Worlds Interpretation, which yes indeed we might better relabel the Everettian Interpretation (shall we begin now?), is a coherent way to think about quantum theory. Possibly even a default way, though I shall want to seek advice on that.

Is that a deal?

*I submit that most physicists and chemists, if they write down the Schrödinger equation for, say, a molecular orbital, are not thinking that they are actually writing down the equation for a “world” but with some bits omitted. One might respond “Well, they should, unless they are content to be ‘shut up and calculate’ scientists”. But I would submit that they are just being good scientists in recognizing the boundaries of the system their equations describe and are not trying to make claims about things they don’t know about or understand.

Friday, February 20, 2015

The latest on the huge number of unobservable worlds

OK, I get the point. Sean Carroll really doesn’t care about problems of the ontology of personhood in the Many Worlds Interpretation. I figured that, as a physicist, these would not be at the forefront of his mind, which is fair enough. But philosophically they are valid questions – which is why David Lewis thought a fair bit about them in his theory of modal realism. It seems to me that a supposedly scientific theory that walks up and says “Sorry, but you are not you – I can’t say what it is you are, but it’s not what you think you are” is obliged to take questions afterwards. I wrote my article in Aeon to try to get those questions, so determinedly overlooked in many expositions of Many Worlds (though clearly acknowledged, if not really addressed, by one of its thoughtful proponents, Lev Vaidman), on the table.

But no. We’re not having that, apparently. Sean Carroll’s response doesn’t even mention them. Perhaps he feels as Chad Orzel does: “Who cares? All that stuff is just a collection of foggily defined emergent phenomena arising from vast numbers of simple quantum systems. Absent a concrete definition, and most importantly a solid idea of how you would measure any of these things, any argument about theories of mind and selfhood and all that stuff is inescapably incoherent.” I’m sort of hoping that isn’t the case. I’m hoping that when Carroll writes of an experiment on a spin superposition being measured by Alice, “There's a version of Alice who saw up and a version who saw down”, he doesn’t really think we can treat Alice – I mean real-world Alices, not the placeholder for a measuring device – like a CCD camera. It’s the business of physics to simplify, but we know what Einstein said about that.

All he picks up on is the objection that I explicitly call minor in comparison: the matter of testing the MWI. His response baffles me:
"The MWI does not postulate a huge number of unobservable worlds, misleading name notwithstanding. (One reason many of us like to call it “Everettian Quantum Mechanics” instead of “Many-Worlds.”) Now, MWI certainly does predict the existence of a huge number of unobservable worlds. But it doesn’t postulate them. It derives them, from what it does postulate."

(I don’t quite get the discomfort with the “Many Worlds” label. It seems to me that it is a reasonable name for a theory that “predicts the existence of a huge number of unobservable worlds.” Still, call it what you will.)

I’m missing something here. By and large, scientific theories make predictions, and then we do experiments to see if those predictions are right. MWI predicts “a huge number of worlds”, but apparently it is unreasonable to ask if we might examine that prediction in the laboratory.

But, Carroll says, “You don’t hold it against a theory if it makes some predictions that can’t be tested. Every theory does that. You don’t object to general relativity because you can’t be absolutely sure that Einstein’s equation was holding true at some particular event a billion light years away.” The latter is a non sequitur: accepting a prediction that can’t be tested is not the same as accepting the possibility of exceptions. And you might reasonably say that there is a difference between accepting a theory even if you can’t get experimentally at what it implies in some obscure corner of parameter space and accepting a theory that “predicts a huge number of unobservable worlds”, some populated by other versions of you doing unobservable things. But OK, might we then have just one prediction that we can test, please?

I was dissatisfied with Carroll’s earlier suggestion that you can test MWI just by finding a system that violates the Schrödinger equation or the principle of superposition, because, as I pointed out, it is not a unique interpretation of quantum theory in that regard. His response? “So what?” Alternatives to MWI, he says, have to add to its postulates (or change them), and so they too should predict something we can test. And some do. I understand that Carroll thinks the MWI is uniquely exempt from having to defend its interpretation in particular in the experimental arena, because its axioms are the minimal ones. The point I wanted to raise in my article, though, was that the wider implications of the MWI make it less minimal than its advocates claim. If a “minimal” physical theory predicted something that seemed nonsensical about how cells work, but a more complex theory with an experimentally unsupported postulate took away that problem, would we be right to assert that the minimal theory must be right until there was some evidence for that other postulate? Of course, there may be a good argument for why trashing any coherent notion of self and identity and agency is not a problem. I’d love to hear it. I’d rather it wasn’t just ignored.

“Those worlds happen automatically” – sure, I see that. They are a prediction – sure, I see that. But this point-blank refusal to think any more about them? I don’t get that. Perhaps if Many Worlders were to stop, just stop, trying to tell us anything about how those many unobservable worlds are peopled, to stop invoking copies of Alice as placeholders for quantum measurements, to stop talking about quantum brothers, to say simply that they don’t really have a clue what their interpretation can mean for our notions of identity, then I would rest easier. And so would many, many other physicists. That, I think, would make them a lot happier than being told they don’t understand quantum theory or that they are being silly.

I’m concerned that this sounds like a shot at Sean Carroll. I really don’t want that. Not only is he a lot smarter than me, but he writes so damned well on such intensely interesting stuff. I’m not saying that just to flatter him. I just wanted to get these things discussed.

Many Worlds - a longer view

Here is the pre-edited version of my article for Aeon on the Many Worlds Interpretation of quantum theory. I’m putting it here not because it is any better than the published version (Aeon’s editing was as excellent and improving as ever), but because it gives me a bit more room to go into some of the issues.

In my article I stood up for philosophy. But that doesn’t mean philosophers necessarily get it right either. In the ensuing discussion I have been directed to a talk by philosopher of science David Wallace. Here he criticizes the Copenhagen view that theories are there to make predictions, not to tell us how the world works. He gets a laugh from his audience for suggesting that, if this were so, scientists would have been forced to ask for funding for the LHC not because of what we’d learn from it but so that we could test the predictions made for it.

This is wrong on so many levels. Contrasting “finding out about the world” against “testing predictions of theories” is a totally false opposition. We obviously test predictions of theories to find out if they do a good job of helping us to explain and understand the world. The hope is that the theories, which are obviously idealizations, will get better and better at predicting the fine details of what we see around us, and thereby enable us to tell ever more complete and satisfying stories about why things are this way (and, of course, to allow us to do some useful stuff for “the relief of man’s estate”). So there is a sense in which the justification for the LHC derided by Wallace is in fact completely the right one, although that would have been a very poor way of putting it. Almost no one in science (give or take the [very] odd Nobel laureate who capitalizes Truth like some religious crank) talks about “truth” – they recognize that our theories are simply meant to be good working descriptions of what we see, with predictive value. That makes them “true” not in some eternal Platonic sense but as ways of explaining the world that have more validity than the alternatives. No one considers Newtonian mechanics to be “untrue” because of general relativity. So in this regard, Wallace’s attack on the Copenhagen view is trivial. (I don’t doubt that he could put the case better – it’s just that he didn’t do so here.)

What I really object to is the idea, which Wallace repeats, that Many Worlds is simply “what the theory tells you”. To my mind, a theory tells you something if it predicts the corresponding states – say, the electrical current flowing through a circuit, or the reaction rate of an enzymatic process. Wallace asserts that quantum theory “predicts” a you seeing a live Schrödinger’s cat and a you seeing a dead one. I say, show me the equation where those “yous” appear (along with the universes they are in). The best the MWers can do is to say, well, let’s just denote those things as Ψ(live cat) and Ψ(dead cat), with Ψ representing the corresponding universes. Oh please.
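For what it’s worth, that shorthand amounts to something like the following (my own schematic, with the amplitudes made explicit):

```latex
% Schematic "universal" state after the cat experiment:
|\Psi\rangle = c_{1}\,|\text{live cat}\rangle\,|\text{you seeing a live cat}\rangle
             + c_{2}\,|\text{dead cat}\rangle\,|\text{you seeing a dead cat}\rangle
% The "yous" and their universes appear only as labels on the kets --
% nothing in the formalism itself specifies what those labels contain.
```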

Some objectors to my article have been keen to insist that the MWI really isn’t that bizarre: that the other “yous” don’t do peculiar things but are pretty much just like the you-you. I can see how some, indeed many, of them would be. But there is nothing to exclude those that are not, unless you do so by hand: “Oh, the mind doesn’t work that way, they are still rational beings.” What extraordinary confidence this shows in our ability to understand the rules governing human behaviour and consciousness in more parallel worlds than we can possibly imagine: as if the very laws of physics will make sure we behave properly. Collapsing the wavefunction seems a fairly minor sleight of hand (and moreover one we can actually continue to investigate) compared to that. The truth is that we know nothing about the full range of possibilities that the MWI insists on, and nor can we ever do so.

One of the comments underneath my article – and others will doubtless repeat this – makes the remark that Many Worlds is not really about “many universes branching off” at all. Well, I guess you could choose to believe Anonymous Pete instead of Brian Greene and Max Tegmark, if you wish. Or you could follow his link to Sean Carroll’s article, which is one of the examples I cite in my piece of why MWers simply evade the “self” issue altogether.

But you know, my real motivation for writing my article is not to try to bury the MWI (the day I start imagining I am capable of such things, intellectually or otherwise, is the day to put me out to grass), but to provoke its supporters into actually addressing these issues rather than blithely ignoring them while bleating about the (undoubted) problems with the alternatives. Who knows if it will work.

_____________________________________________________________________

In 2011, participants at a conference on the placid shore of Lake Traunsee in Austria were polled on what the conference was about. You might imagine that this question would have been settled before the meeting was convened – but since the subject was quantum theory, it’s not surprising that there was still much uncertainty. The conference was called “Quantum Physics and the Nature of Reality”, and it grappled with what the theory actually means. The poll, completed by 33 of the participating physicists, mathematicians and philosophers, posed a range of unresolved questions, one of which was “What is your favourite interpretation of quantum mechanics?”

The mere question speaks volumes. Isn’t science supposed to be decided by experiment and observation, free from personal preferences? But experiments in quantum physics have been obstinately silent on what it means. All we can do is develop hunches, intuitions and, yes, favourite ideas.

Which interpretations did these experts favour? There were no fewer than 11 answers to choose from (as well as “other” and “none”). The most popular (42%) was the view put forward by Niels Bohr, Werner Heisenberg and their colleagues in the early days of quantum theory, now known as the Copenhagen Interpretation. In third place (18%) was the Many Worlds Interpretation (MWI).

You might not have heard of most of the alternatives, such as Quantum Bayesianism, Relational Quantum Mechanics, and Objective Collapse (which is not, as you might suppose, saying “what the hell”). Maybe you’ve not heard of the Copenhagen Interpretation either. But the MWI is the one with all the glamour and publicity. Why? Because it tells us that we have multiple selves, living other lives in other universes, quite possibly doing all the things that we dream of but will never achieve (or never dare). Who could resist that idea?

Yet you should. You should resist it not because it is unlikely to be true, or even because, since no one knows how to test it, the idea is not truly scientific at all. Those are valid criticisms, but the main reason you should resist it is that it is not a coherent idea, philosophically or logically. There could be no better contender for Wolfgang Pauli’s famous put-down: it is not even wrong.

Or to put it another way: the MWI is a triumph of canny marketing. That’s not some wicked ploy: no one stands to gain from its success. Rather, its adherents are like giddy lovers, blinded to the flaws beneath the superficial allure.

The measurement problem

To understand how this could happen, we need to see why, more than a hundred years after quantum theory was first conceived, experts are still gathering to debate what it means. Despite such apparently shaky foundations, it is extraordinarily successful. In fact you’d be hard pushed to find a more successful scientific theory. It can predict all kinds of phenomena with amazing precision, from the colours of grass and sky to the transparency of glass, the way enzymes work and how the sun shines.

This is because quantum mechanics, the mathematical formulation of the theory, is largely a technique: a set of procedures for calculating what properties substances have based on the positions and energies of their constituent subatomic particles. The calculations are hard, and for anything more complicated than a hydrogen atom it’s necessary to make simplifications and approximations. But we can do that very reliably. The vast majority of physicists, chemists and engineers who use quantum theory today don’t need to go to conferences on the “nature of reality” – they can do their job perfectly well if, in the famous words of physicist David Mermin, they “shut up and calculate”, and don’t think too hard about what the equations mean.

It’s true that the equations seem to insist on some strange things. They imply that very small entities like atoms and subatomic particles can be in several places at the same time. A single electron can seem to pass through two holes at once, interfering with its own motion as if it was a wave. What’s more, we can’t know everything about a particle at the same time: Heisenberg’s uncertainty principle forbids such perfect knowledge. And two particles can seem to affect one another instantly across immense tracts of space, in apparent (but not actual) violation of Einstein’s theory of special relativity.

But quantum scientists just accept such things. What really divides opinion is that quantum theory seems to do away with the notion, central to science from its beginnings, of an objective reality that we can study “from the outside”, as it were. Quantum mechanics insists that we can’t make a measurement without influencing what we measure. This isn’t a problem of acute sensitivity; it’s more fundamental than that. The most widespread form of quantum maths, devised by Erwin Schrödinger in the 1920s, describes a quantum entity using an abstract concept called a wavefunction. The wavefunction expresses all that can be known about the object. But a wavefunction doesn’t tell you what properties the object has; rather, it enumerates all the possible properties it could have, along with their relative probabilities.
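In symbols (a textbook statement, not specific to any one interpretation): if a measurable quantity has possible outcomes a₁, a₂, and so on, the wavefunction can be written as a weighted sum over them, and the Born rule converts those weights into probabilities.

```latex
% The wavefunction as a superposition of possible outcomes:
|\psi\rangle = \sum_{i} c_{i}\,|a_{i}\rangle ,
\qquad \sum_{i} |c_{i}|^{2} = 1
% Born rule: the probability of finding outcome a_i on measurement
P(a_{i}) = |c_{i}|^{2}
```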

Which of these possibilities is real? Is an electron here or there? Is Schrödinger’s cat alive or dead? We can find out by looking – but quantum mechanics seems to be telling us that the very act of looking forces the universe to make that decision, at random. Before we looked, there were only probabilities.

The Copenhagen Interpretation insists that that’s all there is to it. To ask what state a quantum entity is in before we looked is meaningless. That was what provoked Einstein to complain about God playing dice. He couldn’t abandon the belief that quantum objects, like larger ones we can see and touch, have well defined properties at all times, even if we don’t know what they are. We believe that a cricket ball is red even if we don’t look at it; surely electrons should be no different? This “measurement problem” is at the root of the arguments.

Avoiding the collapse

The way the problem is conventionally expressed is to say that measurement – which really means any interaction of a particle with another system that could be used to deduce its state – “collapses” the wavefunction, extracting a single outcome from the range of probabilities that the wavefunction encodes. But quantum mechanics offers no prescription for how this collapse occurs; it has to be put in by hand. That’s highly unsatisfactory.
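Schematically, in conventional textbook terms, collapse is the abrupt replacement of the superposition by a single one of its terms:

```latex
% Before measurement: a superposition of possible outcomes
|\psi\rangle = \sum_{i} c_{i}\,|a_{i}\rangle
% After a measurement yielding outcome a_k (probability |c_k|^2):
|\psi\rangle \;\longrightarrow\; |a_{k}\rangle
% The Schrodinger equation supplies no dynamics for this jump;
% it is imposed as a separate postulate.
```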

There are various ways of looking at this. A Copenhagenist view might be simply to accept that wavefunction collapse is an additional ingredient of the theory, which we don’t understand. Another view is to suppose that wavefunction collapse isn’t just a mathematical sleight-of-hand but an actual, physical process, a little like radioactive decay of an atom, which could in principle be observed if only we had an experimental technique fast and sensitive enough. That’s the Objective Collapse interpretation, and among its advocates is Roger Penrose, who suspects that the collapse process might involve gravity.

Proponents of the Many Worlds Interpretation are oddly reluctant to admit that their preferred view is simply another option. They often like to insist that There Is No Alternative – that the MWI is the only way of taking quantum theory seriously. It’s surprising, then, that in fact Many Worlders don’t even take their own view seriously enough.

That view was presented in the 1957 doctoral thesis of the American physicist Hugh Everett. He asked why, instead of fretting about the cumbersome nature of wavefunction collapse, we don’t just do away with it. What if this collapse is just an illusion, and all the possibilities announced in the wavefunction have a physical reality? Perhaps when we make a measurement we only see one of those realities, yet the others have a separate existence too.

An existence where? This is where the many worlds come in. Everett himself never used that term, but his proposal was championed in the 1970s by the physicist Bryce DeWitt, who argued that the alternative outcomes of the experiment must exist in a parallel reality: another world. You measure the path of an electron, and in this world it seems to go this way, but in another world it went that way.

That requires a parallel, identical apparatus for the electron to traverse. More, it requires a parallel you to measure it. Once begun, this process of fabrication has no end: you have to build an entire parallel universe around that one electron, identical in all respects except where the electron went. You avoid the complication of wavefunction collapse, but at the expense of making another universe. The theory doesn’t exactly predict the other universe in the way that scientific theories usually make predictions. It’s just a deduction from the hypothesis that the other electron path is real too.

This picture really gets extravagant when you appreciate what a measurement is. In one view, any interaction between one quantum entity and another – a photon of light bouncing off an atom – can produce alternative outcomes, and so demands parallel universes. As DeWitt put it, “every quantum transition taking place on every star, in every galaxy, in every remote corner of the universe is splitting our local world on earth into myriads of copies”.

Recall that this profusion is deemed necessary only because we don’t yet understand wavefunction collapse. It’s a way of avoiding the mathematical ungainliness of that lacuna. “If you prefer a simple and purely mathematical theory, then you – like me – are stuck with the many-worlds interpretation,” claims MIT physicist Max Tegmark, one of the most prominent MWI popularizers. That would be easier to swallow if the “mathematical simplicity” were not so cheaply bought. The corollary of Everett’s proposal is that there is in fact just a single wavefunction for the entire universe. The “simple maths” comes from representing this universal wavefunction as a symbol Ψ: allegedly a complete description of everything that is or ever was, including the stuff we don’t yet understand. You might sense some issues being swept under the carpet here.

What about us?

But let’s stick with it. What are these parallel worlds like? This hinges on what exactly the “experiments” that produce or differentiate them are. So you’d think that the Many Worlders would take care to get that straight. But they’re oddly evasive, or maybe just relaxed, about it. Even one of the theory’s most thoughtful supporters, Russian-Israeli physicist Lev Vaidman, seems to dodge the issue in his entry on the MWI in the Stanford Encyclopedia of Philosophy:

“Quantum experiments take place everywhere and very often, not just in physics laboratories: even the irregular blinking of an old fluorescent bulb is a quantum experiment.”

Vaidman stresses that every world has to be formally accessible from the others: it has to be derived from one of the alternatives encoded in the wavefunction of one of the particles. You could say that the universes are in this sense all connected, like stations on the London Underground. So what does this exclude? Nobody knows, and there is no obvious way of finding out.

I put the question directly to Lev: what exactly counts as an experiment? An event qualifies, he replied, “if it leads to more than one ‘story’”. He added: “If you toss a coin from your pocket, does it split the world? Say you see tails – is there a parallel world with heads?” Well, that was certainly my question. But I was kind of hoping for an answer.

Most popularizers of the MWI are less reticent. In the “multiverse” of the Many Worlds view, says Tegmark, “all possible states exist at every instant”. One can argue about whether that’s quite the same as DeWitt’s version, but either way the result seems to accord with the popular view that everything that is physically possible is realized in one of the parallel universes.

The real problem, however, is that Many Worlders don’t seem keen to think about what this means. No, that’s too kind. They love to think about what it means – but only insofar as it lets them tell us wonderful, lurid and beguiling stories. The MWI seduces us by multiplying our selves beyond measure, giving us fantasy lives in which there is no obvious limit to what we can do. “The act of making a decision”, says Tegmark – a decision here counting as an experiment – “causes a person to split into multiple copies.”

That must be a pretty big deal, right? Not for theoretical physicist Sean Carroll of the California Institute of Technology, whose article “Why the Many-Worlds formulation of quantum mechanics is probably correct” on his popular blog Preposterous Universe makes no mention of these alter egos. Oh, they are there in the background all right – the “copies” of the human observer of a quantum event are casually mentioned in the midst of the 40-page paper by Carroll that his blog cites. But they are nothing compared with the relief of not having to fret about wavefunction collapse. It’s as though the burning question about the existence of ghosts is whether they observe the normal laws of mechanics, rather than whether they would radically change our view of our own existence.

But if some Many Worlders are remarkably determined to avert their eyes, others delight in this multiplicity of self – though only, it seems, insofar as it lets them spin those beguiling tales of fantasy lives in which there is no obvious limit to what we can do, because indeed in some world we’ve already done it.

Most MWI popularizers think they are blowing our minds with this stuff, whereas in fact they are flattering them. They delve into the implications for personhood just far enough to lull us with the uncanniness of the centuries-old Doppelgänger trope, and then flit off again. The result sounds transgressively exciting while familiar enough to be persuasive.

Identity crisis

In what sense are those other copies actually “us”? Brian Greene, another prominent MW advocate, tells us gleefully that “each copy is you.” In other words, you just need to broaden your mind beyond your parochial idea of what “you” means. Each of these individuals has its own consciousness, and so each believes he or she is “you” – but the real “you” is their sum total. Vaidman puts the issue more carefully: all the copies of himself are “Lev Vaidman”, but there’s only one that he can call “me”.

““I” is defined at a particular time by a complete (classical) description of the state of my body and of my brain”, he explains. “At the present moment there are many different “Levs” in different worlds, but it is meaningless to say that now there is another “I”.” Yet it is also scientifically and, I think, logically meaningless to say that there is an “I” at all in his definition, given that we must assume that any “I” is generating copies faster than the speed of thought. A “complete description” of the state of his body and brain never exists.

What’s more, this half-baked stitching together of quantum wavefunctions and the notion of mind leads to a reductio ad absurdum. It makes Lev Vaidman a terrible liar. He is actually a very decent fellow and I don’t want to impugn him, but by his own admission it seems virtually inevitable that “Lev Vaidman” has in other worlds denounced the MWI as a ridiculous fantasy, and has won a Nobel prize for showing, in the face of prevailing opinion, that it is false. (If these scenarios strike you as silly or frivolous, you’re getting the point.) “Lev Vaidman” is probably also a felon, for there is no prescription in the MWI for ruling out a world in which he has killed every physicist who believes in the MWI, or alternatively, every physicist who doesn’t. “OK, those Levs exist – but you should believe me, not them!” he might reply – except that this very belief denies the riposte any meaning.

The difficulties don’t end there. It is extraordinary how attached the MWI advocates are to themselves, as if all the Many Worlds simply have “copies” leading other lives. Vaidman’s neat categorization of “I” and “Lev” works because it sticks to the tidy conceit that the grown-up “I” is being split into ever more “copies” that do different things thereafter. (Not all MWI descriptions will call this copying of selves “splitting” – they say that the copies existed all along – but that doesn’t alter the point.)

That isn’t, however, what the MWI is really about – it’s just a sci-fi scenario derived from it. As Tegmark explains, the MWI is really about all possible states existing at every instant. Some of these, it’s true, must contain essentially indistinguishable Maxes doing and seeing different things. Tegmark waxes lyrical about these: “I feel a strong kinship with parallel Maxes, even though I never get to meet them. They share my values, my feelings, my memories – they’re closer to me than brothers.”

He doesn’t trouble his mind about the many, many more almost-Maxes, near-copies with perhaps a gene or two mutated – not to mention the not-much-like Maxes, and so on into a continuum of utterly different beings. Why not? Because you can’t make neat ontological statements about them, or embrace them as brothers. They spoil the story, the rotters. They turn it into a story that doesn’t make sense, that can’t even be told. So they become the mad relatives in the attic. The conceit of “multiple selves” isn’t at all what the MWI, taken at face value, is proposing. On the contrary, it is dismantling the whole notion of selfhood – it is denying any real meaning of “you” at all.

Is that really so different from what we keep hearing from neuroscientists and psychologists – that our comforting notions of selfhood are all just an illusion concocted by the brain to allow us to function? I think it is. There is a gulf between a useful but fragile cognitive construct based on measurable sensory phenomena, and a claim to dissolve all personhood and autonomy because it makes the maths neater. In the Borgesian library of Many Worlds, it seems there can be no fact of the matter about what is or isn’t you, and what you did or didn’t do.

State of mind

Compared with these problems, the difficulty of testing the MWI experimentally (which would seem a requirement of it being truly scientific) is a small matter. “It’s trivial to falsify [MWI]”, boasts Carroll: “just do an experiment that violates the Schrödinger equation or the principle of superposition, which are the only things the theory assumes.” But most other interpretations of quantum theory assume them (at least) too – so an experiment like that would rule them all out, and say nothing about the special status of the MWI. No, we’d quite like to see some evidence for those other universes that this particular interpretation uniquely predicts. That’s just what the hypothesis forbids, you say? What a nuisance.

Might this all simply be a habit of a certain sort of mind? The MWI has a striking parallel in analytic philosophy that goes by the name of modal realism. Ever since Gottfried Leibniz argued that the problem of good and evil can be resolved by postulating that ours is the best of all possible worlds, the notion of “possible worlds” has supplied philosophers with a scheme for debating the issue of the necessity or contingency of truths. The American philosopher David Lewis pushed this line of thought to its limits by asserting, in the position called modal realism, that all worlds that are possible have a genuine physical existence, albeit isolated causally and spatiotemporally from ours. On what grounds? Largely on the basis that there is no logical reason to deny their existence, but also because accepting this leads to an economy of axioms: you don’t have to explain away their non-existence.

Many philosophers regard this as legerdemain, but the similarities with the MWI of quantum theory are clear: the proposition stems not from any empirical motive but simply because it allegedly simplifies matters (after all, it takes only four words to say “everything possible is real”, right?). Tegmark’s so-called Ultimate Ensemble theory – a many-worlds picture not explicitly predicated on quantum principles but still including them – has been interpreted as a mathematical expression of modal realism, since it proposes that all mathematical entities that can be calculated in principle (that is, which are possible in the sense of being “computable”) must be real. Lewis’s modal realism does, however, at least have the virtue that he thought in some detail about the issues of personal identity it raises.

If I call these ideas fantasies, it is not to deride or dismiss them but to keep in view the fact that beneath their apparel of scientific equations or symbolic logic they are acts of imagination, of “just supposing”. Who can object to imagination? Not me. But when taken to the extreme, parallel universes become a kind of nihilism: if you believe everything then you believe nothing. The MWI allows – perhaps insists – not just on our having cosily familial “quantum brothers” but on worlds where gods, magic and miracles exist and where science is inevitably (if rarely) violated by chance breakdowns of the usual statistical regularities of physics.

Certainly, to say that the world(s) surely can’t be that weird is no objection at all; Many Worlders harp on about this complaint precisely because it is so easily dismissed. MWI doesn’t, though, imply that things really are weirder than we thought; it denies us any way of saying anything, because it entails saying (and doing) everything else too, while at the same time removing the “we” who says it. This does not demand broad-mindedness, but rather a blind acceptance of ontological incoherence.

That its supporters refuse to engage in any depth with the questions the MWI poses about the ontology and autonomy of self is lamentable. But this is (speaking as an ex-physicist) very much a physicist’s blind spot: a failure to recognize, or perhaps to care, that problems arising at a level beyond that of the fundamental, abstract theory can be anything more than a minor inconvenience.

If the MWI were supported by some sound science, we would have to deal with it – and to do so with more seriousness than the merry invention of Doppelgängers to measure both quantum states of a photon. But it is not. It is grounded in a half-baked philosophical argument about a preference to simplify the axioms. Until Many Worlders can take seriously the philosophical implications of their vision, it’s not clear why their colleagues, or the rest of us, should demur from the judgement of the philosopher of science Robert Crease that the MWI is “one of the most implausible and unrealistic ideas in the history of science” [see The Quantum Moment, 2014]. To pretend that the only conceptual challenge for a theory that allows everything conceivable to happen (or at best fails to provide any prescription for precluding the possibilities) is to accommodate Sliding Doors scenarios shows a puzzling lacuna in the formidable minds of its advocates. Perhaps they should stop trying to tell us that philosophy is dead.