It was fun to write this piece for Nautilus on who would have made some of the great discoveries in science if their actual discoverers had not lived. And very nice to see it is provoking discussion, as I’d hoped – there is nothing definitive in my suggestions. Here are two more case histories, for which there was not room in the final article.
____________________________________________________________________________
Fullerenes – Wolfgang Krätschmer and Donald Huffman
In 1985, British spectroscopist Harry Kroto visited physical chemists Richard Smalley and Robert Curl at Rice University in Houston, Texas, to see if their machine for making clusters of atoms could produce some of the exotic carbon molecules Kroto thought might be formed in space. Their experiments led to the discovery of hollow, spherical molecules called C60 or buckminsterfullerene, and of a whole family of related hollow-shell carbon molecules called fullerenes. They were awarded the 1996 Nobel prize in chemistry for the work.
Fullerenes had been seen before 1985; they just hadn’t been recognized as such. They can in fact be formed in ordinary candle flames, but the most systematic experiments were conducted in 1982-3 by experimental physicist Wolfgang Krätschmer at the Max Planck Institute for Nuclear Physics in Heidelberg, Germany. Krätschmer had teamed up with physicist Donald Huffman of the University of Arizona, for they both were, like Kroto, interested in the constituents of interstellar space.
Huffman studied dust grains scattered through the cosmos from which stars may form. He and Krätschmer began collaborating in the 1970s while Huffman was on sabbatical in Stuttgart, and initially they looked at tiny particles of silicate minerals. But Huffman believed that some of the absorption of starlight by grains in the interstellar medium could be due to tiny particles of something like soot in the mix: basically, flakes of graphite-like carbon.
In 1982 he visited Krätschmer to carry out experiments in which they heated graphite rods in a vacuum and measured the light absorbed by the sooty debris. They made and saw C60, which absorbs ultraviolet light at a particular wavelength. But they didn’t realize what it was, and decided their apparatus was just making unintelligible carbon “junk”.
It wasn’t until the duo saw the paper by Kroto and colleagues in 1985 that the penny dropped. But even without that, the interest of astronomers in interstellar dust would probably have drawn scrutiny back to those experiments in Heidelberg, and the truth would have emerged. As it was, the graphite-vaporizing equipment of Krätschmer and Huffman offered a way to mass-produce fullerenes more cheaply and simply than the Rice cluster machine. Once this was understood in 1990, fullerene research exploded worldwide.
Continental drift – Roberto Mantovani, or…
There are discoveries for which the time seems right, and others for which it’s just the opposite. For one reason or another they are rejected by the prevailing scientific opinion, offering us the retrospective, appealingly tragic tale of the lone maverick who was spurned, only to be vindicated much later, perhaps posthumously. That’s pretty much how it was for Alfred Wegener’s theory of continental drift. In 1912, Wegener, a German meteorologist (so what did he know about geology?), proposed that the Earth’s surface was not fixed, but that the continental land masses wander over time into different configurations, and were in the distant past disposed far from where they stand today. To doubt the evident solidity of the planetary surface seemed absurd, and it wasn’t until the discovery of seafloor spreading – the formation of fresh ocean crust by volcanic activity – in the 1960s that continental drift became the paradigm for geology.
In such circumstances, it seems rather unlikely that anyone else would have come up with Wegener’s unorthodox idea in his own era. But they did. Not just one individual but several others imagined something like a theory of plate tectonics in the early twentieth century.
The most immediate sign of continental drift on the world map is the suspiciously close fit of the east coast of South America with the west coast of Africa. But that line of argument, advanced by American geologist Frank Bursley Taylor in 1908, seems almost too simplistic. Taylor got other things right too, such as the way the collision of continents pushes up mountain ranges. But his claim that the movements were caused by the close approach of the moon when it was suddenly captured by the Earth in the Cretaceous period was rather too baroque for his contemporaries.
In 1911, an amateur American geologist named Howard Baker also proposed that the continents are fragments of a single supercontinent that was torn apart. His mechanism was even more bizarre than Taylor’s: the moon was once a part of the Earth that got ripped off by its rapid spinning, and the continents moved to fill the gap.
In comparison, the theory of Italian geologist (and violinist) Roberto Mantovani, first published in 1889 and developed over the next three decades, was rather easier to swallow. He too argued that the continents were originally a single landmass that was pulled apart thanks to an expansion of the Earth driven by volcanic activity. Wegener acknowledged some “astonishingly close” correspondences between Mantovani’s reconstruction and his own.
All of these ideas contain tantalizing truths: breakup of an ancient supercontinent (now called Pangea), opening of ocean basins, mountain building and volcanism as the driving force. (Even Baker’s idea that the moon was once a part of the Earth is now widely believed, albeit for totally different reasons.) But like a reconstruction of Pangea from today’s map, the parts didn’t fit without gaps, and no one, including Wegener, could find a plausible mechanism for the continental movements. If we didn’t have Wegener, then Mantovani, or even Taylor or Baker, could step into the same foundational narrative of the neglected savant. All intuited some element of the truth, and their stories show that there’s often an element of arbitrariness in what counts as a discovery and who gets the credit.
Saturday, November 26, 2016
The Return by Hisham Matar: why it's a special book
These were my comments on Hisham Matar’s book The Return for the Baillie Gifford Prize award event on 15 November. The prize, for which I was a judge, was awarded to Hisham’s close friend Philippe Sands for his extraordinary book East West Street.
___________________________________________________________________________
When we produced our shortlist, and indeed our longlist, I felt pleased with and proud of it. But as my acquaintance with the shortlisted books has deepened, and perhaps particularly in the light of the political climate into which they emerge, I have felt something more than that. I’ve become passionate about them.
But it was passion that I felt about Hisham Matar’s book from the first reading. It tells of his quest to find out what happened to his father Jaballa in Libya during the Qaddafi dictatorship, after Jaballa was imprisoned in the notorious Abu Salim jail for his principled opposition to the regime. The Return of the title is Hisham’s return to Libya in 2012, 33 years after his family was exiled, when the Qaddafi regime had been overthrown. That was during what we now know to be a tragically brief period of grace before a descent into social and economic chaos created by the power vacuum.
Yes, the subject sounds difficult and bleak, but please believe me that this book is not that, not only that. It is wise and funny, alive to absurdity, to beauty and to friendship, as well as to terror and cruelty. Several times it was said in our judging meetings that Hisham’s book has a novelistic quality.
If this story were not factual, I would expect to see The Return on the Man Booker shortlist, and novelists could learn a great deal from Hisham’s impeccable handling of every scene, each of which unfolds at just the rate and in just the order it should, with precisely the words it needs and no more.
But calling the book novelistic could sound like a double-edged comment, as if to imply that perhaps the truth is sometimes held hostage to a nice turn of phrase. That is absolutely not the case. It feels hard to do justice to the brilliant construction of the book, the masterful handling of plot, suspense and intrigue, without seeming to reduce the magnitude of the subject to the dimensions of a thriller. But these aspects are really a mark of the achievement here, because even as they make the book a totally engrossing read, not once do they obscure the moral and artistic integrity of what Hisham has created.
Of course, he is an acclaimed novelist himself, but here he shows that there are qualities in literature far more significant than the apparent division between fact and fiction.
But it is factual. That is a sad and terrible thing, but it also makes The Return a sort of gift, an honouring of the history and suffering of individuals and a country.
There is something in it that brings to my mind Primo Levi’s testament If This is a Man. Like that book, this one can’t use art to expunge the awful, inhuman events that motivated it. But, in its quiet dignity, it shows us why we persist, and in the end, I think, why we prevail, in spite of them.
Sunday, October 16, 2016
Did the Qin emperor need Western help? I don't think so.
Did the First Emperor of China import sculptors from classical Greece to help build the Terracotta Army? That’s the intriguing hypothesis explored in an entertaining BBC documentary called The Greatest Tomb on Earth, presented by Dan Snow, Alice Roberts and Albert Lin. (See also here.)
If it were true, it would revolutionize our view of the early history of China. It’s widely assumed that there was no significant, direct contact between China and the West until the time of Marco Polo (although you would not have guessed from this programme that diffusion of artifacts along trade routes happened much earlier, certainly in Roman times around the first century AD).
But I didn’t buy the story for a moment. It turned out to be a classic example of building up a case from an accumulation of weak, speculative pieces of evidence and then implying that somehow they add up to more than the sum of their parts. Look at each piece of evidence alone, and there’s virtually nothing there. But repeat often enough that they fit together into a convincing story and people might start to believe you.
Archaeologist Albert Lin adduced evidence of an ancient road that connected the ancient capital at present-day Xi’an, near the site of the mausoleum of the Qin emperor Qin Shi Huangdi, to the West, perhaps via Alexander’s empire in India. Well, at least, it was claimed that “there was probably a road reaching [from the tomb] at least to Lintao” on the borders of the Qin Empire. But what Lin actually found was a short section of undated track – it looked maybe a kilometre or so long – heading northwest through farmland within the confines of the tomb complex in Shaanxi. Lintao is almost 400 km away. Later in the programme Dan Snow claimed that on this basis “We have evidence of an ancient road network that could have brought Westerners to China”. No, we really don’t. (And why do we need to find an ancient physical road anyway, given that it does seem clear that trade was happening all the way from the Mediterranean region to China at least in Roman times?)
Another strand of evidence was the notion that large-scale, lifelike figurines suddenly appeared in the Qin tomb, looking somewhat like those of classical Greece, when nothing like this had been seen before in China. How else could this artistic leap have been made, if not with the assistance of Greek sculptors imported by the emperor? That, at least, was the case argued by Lukas Nickel of the University of Vienna, based solely on asserted coincidences of artistic styles. We were offered no indication of how the Qin emperor – who, until he became ruler of “all” of China, extending more or less to present-day Sichuan, was king of the state of Qin in the Wei valley – somehow knew that there were barbarians nigh on 2,000 miles further west across the Tibetan plateau who had advanced sculptural skills.
There were some puzzles, to be sure. To make some of their bronze castings, the Qin metalworkers seem to have used something like the so-called “lost-wax technique”, with reinforcing rods, of which examples are known in ancient Egypt. “It’s clear this process is too complex to stumble on by accident”, said Snow. But obviously it was stumbled on by accident – how else was it ever invented anywhere? Given the known metallurgical skills of the ancient Chinese – bronze casting began in the Shang era, a millennium and a half before the Qin dynasty, and some of the Shang artifacts are exquisite – how can we know what they had achieved by the third century BC? Besides, I was left unsure what was so exciting about seeing a lost-wax method in the Qin artifacts, given that we already know the technique was being used in China by the 6th century BC. Still, Snow concluded that “We now have strong evidence of Western metalworkers in China in the third century BC”. No, we don’t.
Then a skull from the mausoleum site, apparently of a sacrificed concubine of the emperor, was said to look unlike a typically East Asian skull. Like, perhaps, the more Caucasoid skull types of the minority peoples of what is today Xinjiang? That’s consistent with the data – the skull is certainly not Western in its proportions, Alice Roberts said. On the basis of these data it could have come from further afield too – but there’s absolutely no reason to suppose it did. Still, we were left with the hint that the emperor might have employed workers brought in from far outside the borders of his empire. There was no support for that idea.
We were also introduced to an apparently recent paper reporting evidence of DNA of Western lineage in people from Xinjiang. Quite apart from the fact that this says nothing about the import of Western artistic techniques in China during the Qin dynasty, it was very odd to see it offered as a new discovery. The notion that there were people of Western, Caucasoid origin in Xinjiang long, long ago has been discussed for decades, ever since the discovery in the early twentieth century of mummified bodies of distinctly non-Chinese – indeed, virtually Celtic – appearance, with blond to red hair and “Europoid” body shapes in the Tarim basin of Xinjiang. The existence of a proto-European or Indo-European culture in this region from around 1800 BC has been particularly promoted since the 1990s by American sinologist Victor Mair. DNA testing from the early 2000s confirmed that the mummies seem to have had at least a partly European origin.
What is particularly odd about the neglect of the Tarim mummies in the context of this programme is that Mair and others have even suggested that this Indo-European culture may have brought Western metallurgical technology from west to east long before the Qin era, by the usual processes of cultural diffusion. They think that the bronze technology of the Shang era might have been stimulated this way. Others say that ironworking might have been transmitted via this culture around the tenth century BC, when it first appears in Xinjiang (see V. C. Pigott, The Archaeometallurgy of the Asian Old World, 1999).
I enjoyed the programme a lot. It identifies some interesting questions. But the idea of West-East cultural influence in the ancient world is not at all as new as was implied, and to my eye the evidence for direct import of Western “expertise” by Qin Shi Huangdi to make his army for the afterlife is extremely flimsy at this point. It would make a great story, but right now a story is all it is.
Incidentally, several folks on Twitter spoke about the popular idea that the Qin emperor’s mausoleum contains lakes of mercury. You can read more about that particular issue here.
Thursday, October 06, 2016
Making paint work: Vik Muniz's Metachromes
This is the catalogue essay to accompany the exhibition Metachromes by Brazilian artist Vik Muniz at Ben Brown Fine Arts in London, 6 October to 12 November.
____________________________________________________________
Why did so many artists abandon painting over the course of the twentieth century? There is no point looking for a single answer, but among the ones we might consider is that painters lost their trust in paint. It’s something rarely talked about, this relationship of painters to paint – or at least, it is rarely talked about except by painters themselves, to whom it is paramount. Paint represents the graft and the craft of painting, and for that very reason it is all too often neglected by art critics and historians, who have tended to regard it merely as a somewhat messy means to a sublime end. But many leading artists since Matisse have been making art not with paint but about paint, and in the process displaying their uneasy relationship with it.
No one put this better than Frank Stella: “I tried to keep the paint as good as it is in the can.” Two things leap out here, as British artist David Batchelor suggests in his book Chromophobia. First, for Stella paint comes in cans, not in tubes (it is an industrial mass product). Second, it looks good in the can. Indeed, perhaps it looks better in the can than it will once you start trying to apply it. The challenge of a blank canvas is familiar: it demands that the painter find something to fill up that blankness, something that will have been worth the effort. Blankness means it’s up to you. But paint in a can is a challenge of a different order. Here it is, already sensual, beautiful and pure – qualities that the artist might hope to retain in the finished work, but the paint sitting in the can says ‘you think you can do better than this?’ Probably not.
Paint had become too perfect. Anyone who has tried to make paint the way a Renaissance master (or more probably, his apprentices) would have done will know that it emerges as unpromising stuff: sticky, gritty, oily. It was the artist’s task to wrestle beauty from this raw earth, which must have seemed a noble and mysterious thing. In the Middle Ages there was barely time even to take note of the paint: blended from pigments and egg yolk, it dried in minutes, so you had better get to work and not sit there admiring it in the dish. But industrialization changed all that. Pigment was machine-ground with the power of horses or steam until the powder was fine and smooth. It was mixed with oils and additives in great vats like those one can still see in the factories of artists’ suppliers such as Winsor and Newton: an almost obscene orgy of viscous colour. Cheaper pigments and new binding media led to the production of colour by the can, made not for daubing onto canvas but for brushing in flat swathes over walls and ceilings. These were no longer the rust-reds and dirty yellows of Victorian décor, but deep pinks, azure, viridian, the colours of sunsets and forests and named for them too.
That makes it sound as though artists were spoilt for choice, and in a sense they were: the range of colours expanded enormously, and most of this rainbow was cheap. But not all the colours were reliable: they might fade or discolour within weeks or years. Instability of paint is a problem as old as painting. But in the past painters knew their materials: they knew what colours they could mix and which they should not, which are prone to ageing and which withstand time. From the early nineteenth century, however, painters became ever less familiar with what was in their materials. These were substances made in chemicals factories, and not even the paint vendors understood them. Even if the technical experts (then called colourmen) guaranteed them for five years, how would they look in fifty? At first, paint manufacturers had little idea about such matters either, and they did not seem to care very much. Disastrous errors of judgement were made at least until the 1960s, as anyone who has seen what became of Mark Rothko’s Harvard murals will attest: valued at $100,000 when completed in 1962, they were in too embarrassing a state to remain on display by 1979.
But it was not just this lack of technical understanding that led painters to distrust paint. Every medium had its message, and the message of oil paint was now deemed a bourgeois one, to be disowned by any self-respecting artistic radical. It “smacked of garrets and starving artists”, according to British artist John Hoyland. Any sign of a brushstroke spoke of painterly traditionalism, and was to be avoided at all costs. The impassive matt finish of acrylics was the thing: it gave the artist a neutral colour field to play with, unencumbered (so they liked to think) by history. For some, this embracing of new paint media arose out of economic necessity: commercial paints bound in synthetic resins were cheaper, especially if you planned (as many did) to work on a colossal scale. For others, new media offered new styles: Jackson Pollock needed a “liquid, flowing kind of paint”, Stella seized on metallic radiator paints to step beyond the prismatic rainbow. But paints made from plastics also spoke of modernity. Nitrocellulose enamel spray paints are used on cars and toasters, so why not, as Richard Hamilton decided, use them for paintings of cars and toasters? “It’s meant to be a car”, he said, “so I thought it was appropriate to use car colour.”
The idea, then, was that the artist would no longer try to hide the materials in the manner of a nineteenth-century French academician like Ingres, but was constantly referring to them, reminding the viewer that the picture is made from stuff. That’s true even of the flat anonymity of the household paints used by an artist like Patrick Caulfield, which at first scarcely seem to assert their identity as ‘paint’ at all: they’re saying ‘this is only a surface coated with colour, you know’ – or as Caulfield puts it, “I’m not Rembrandt.” The paint is not pretending to be anything else.
Part of the pleasure of Vik Muniz’s works is that they often do pretend to be something else, but so transparently that you notice and relish the medium even more. “Oh, those are diamonds! That’s chocolate, that’s trash, those are flowers.” His Metachrome series is particularly rich in allusion, because the works confront this issue of the material of painting in ways that highlight several of the problems of paint, which have vexed and in the end sometimes inspired painters. They leave the medium – pastel sticks – literally embedded in the work, and not as accidental remnants but as constructive elements. On the one hand this creates a Brechtian sense of ‘here’s how it was done’, a denial of illusion. These become not just images of something, but works about creating art. It reminds us of the disarming honesty of French painter and sculptor Jean Dubuffet’s remark: “There is no such thing as colour, only coloured materials.” By reconstructing the paintings of famous artists with the pigment-saturated tools still visible, Muniz demystifies the original objects. And art itself then becomes more humble, but also more valuable: not an idea, nor a commodity, nor an icon, but a product of human craft and ingenuity.
This wouldn’t count for so much if the aesthetic element weren’t also respected. The solidity and chromatic depth of these pieces of coloured material are richly satisfying. We can enjoy them as substance, source and subject. They seem to invite us to pluck them up, and to start making marks ourselves.
Points of colour
One thing brought to mind by Metachromes is the specks and lumps of colour and crystal seen in cross-sectional micrographs that art conservators use routinely to study the multilayered techniques of the Old Masters. Those images reveal that the colour may be surprisingly dispersed: sometimes the grains are as sparse as raisins in a fruit bun, so that it seems a wonder the paint layer does not look patchy or translucent. (The effect may be cumulative: in Titian or van Eyck the bold colours come from the painstaking application of layer after layer.) But what these micrographs also reveal is colour unmixed: greens broken down into blues and yellows, flesh tones a harlequin jumble of hues mixed with white and black. They remind us that the rich hues of the Old Masters are an optical illusion conjured from a very limited palette: they had only a few greens to play with, their blues were sparser still. The illusion is sustained by scale: the flecks are so small that the eye can’t distinguish them unaided, and they blend into uniformity. When this optical mixture produces so great a perceptual shift as yellow and blue to green, the effect seems like alchemy. We get accustomed to this method of making green in the nursery, but still it seems odd to be confronted by such stark evidence that our eyes are deceiving us, that there is only yellow and blue at the root of it all.
Colour mixing is genuinely perplexing. Isaac Newton explained it in 1665, but the explanation made no sense to artists. Yellow, he said, comes from mixing red and green. Add blue and you get white. This was clearly not the way paints behaved.
Newton’s experiment is commonly misunderstood. He did not show for the first time that sunlight could be split into the rainbow spectrum; that had been known since time immemorial, for all you needed was a block of clear glass or crystal. Some suspected, however, that this colouration of sunlight might be the result of a transformation performed by the prism itself. Newton showed that if one of the coloured rays of the spectrum – red, say – is passed through a second prism, it emerges unchanged: these are, he said, “uncompounded colours”, irreducible to anything else. And if the entire spectrum is squeezed back through a focusing lens, it reconstitutes the original white ray. Colour, then, comes from plucking this rainbow, extracting some rays and reflecting others. Black objects swallow them all, white objects reject them.
According to Newton, the colours we see are in the light that conveys them. Goethe (whose antipathy to Newton I have never really fathomed) spoke for many when he said that thanks to Newton “the theory of colour has been forced to enter a realm where it does not belong, to appear before the judgement seat of the mathematician.” Dyers, he said, were the first to perceive the inadequacy of Newton’s theory, because “phenomena forcefully confront the true practitioner, the producer of goods, every day. He experiences the application of his ideas as profit or loss.” And so he knew much better than to waste valuable dyes by attempting to make yellow from red and green.
Goethe’s largely misconceived theory of colour continues to exert a strange appeal, but what he did not appreciate is that there are two kinds of colour mixing. One works for dyes and pigments, and it involves the removal of prismatic wedges from sunlight: take away the reds and violets, and you are left with green. This is subtractive mixing. The other works for light rays, and it involves a gradual reconstitution of the spectrum from its component rays. The retina tickled with red and green light, for example, reports back to the brain in just the same way as it does when struck by pure yellow light. This is additive mixing. The Scottish physicist James Clerk Maxwell explained this in 1855, and his discoveries quickly filtered down in popularized forms to painters, some of whom dreamed of finding within them a prescription for filling their canvases with sunlight.
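The difference is easy to state in computational terms. Here is a minimal sketch in Python, with crude (R, G, B) triples standing in for full reflectance spectra – the numbers are invented for illustration, not a model of any real pigment or light:

```python
# Two kinds of colour mixing, in caricature.
# Colours are (R, G, B) triples in [0, 1].

def additive_mix(light1, light2):
    # Additive: superposed light sources; intensities add (clipped at 1).
    return tuple(min(1.0, a + b) for a, b in zip(light1, light2))

def subtractive_mix(paint1, paint2):
    # Subtractive: each pigment absorbs its share of the light;
    # roughly, the reflectances multiply channel by channel.
    return tuple(a * b for a, b in zip(paint1, paint2))

red_light    = (1.0, 0.0, 0.0)
green_light  = (0.0, 1.0, 0.0)
yellow_paint = (1.0, 0.9, 0.1)   # reflects red and green, absorbs blue
blue_paint   = (0.1, 0.5, 1.0)   # reflects blue, leaks a little green

print(additive_mix(red_light, green_light))       # (1.0, 1.0, 0.0): yellow
print(subtractive_mix(yellow_paint, blue_paint))  # (0.1, 0.45, 0.1): green
```

Red and green light add to yellow, just as Newton said; yellow and blue pigments, each subtracting its own wedge of the spectrum, leave mostly green behind.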
Additive mixing can be achieved in various ways. In television screens, trios of light-emitting pixels in the additive primaries – crudely speaking, red, green and blue-violet – are juxtaposed at too small a scale for the eye to resolve from a normal viewing distance, and their light is blended on the retina. Maxwell showed that he could achieve the same thing with pigments by painting segments of disks and spinning them at great speed, something that the polymathic Englishman Thomas Young had also done at the beginning of the nineteenth century. The Impressionists took away the message that all visual experience is constructed from spectral colours, so that these were the only ones they should use – no more ochres or umbers. They tried to banish black from the palette, and while white remained indispensable, their whitewashed walls and snow were broken up into bright primaries. When Claude Monet needed browns and blacks to depict the smoky train station at Saint-Lazare, he mixed them from primaries, although you would never guess it.
Paul Signac and Georges Seurat went further. They hoped to do away with subtractive mixing entirely, seeing that it almost inevitably degraded the brightness of the colours. Instead, their pointillist style, with small dots of pure pigments placed alongside each other like meadow flowers scattered among grass, was intended to let the colours mix optically, on the retina rather than on the canvas, when viewed from the right distance. These Neo-Impressionists believed that this would give their works greater luminosity.
Curiously, Newton himself had described something akin to this. He said that a mixture of dry pigment containing yellow orpiment, bright purple, light green and blue looked at a distance of several paces like brilliant white. The scattering of coloured grains in Muniz’s pictures hints at the same thing. There are places where the colour seems uncertain, treacherous, on the verge of shifting: it depends on where you’re standing, and might change as you step back.
But for the Neo-Impressionists, pointillist optical mixing did not really work. Partly this was because they had only a hazy grasp of the new colour theory, corrupted by an old notion about the yellow-orange colour of sunlight. Partly they were inconsistent in applying the technique, varying the size of their tache strokes. Signac admitted his disappointment with the experiment: “red and green dots make an aggregate which is grey and colourless” – although this pearly sheen is now an aspect of what we enjoy in the atmosphere of their peinture optique. As it drifted from its scientific origins, pointillism became just another mannerism, a style rather than an experiment. It’s a style that Muniz plays with here, the strokes and marks of the artist sometimes playfully but effectively substituted by the pastel sticks themselves. They are Titian’s grains of colour writ large, confessing their illusionism – but still performing it anyway, if we let them.
Work with dirt
“Sometimes we want to know how things are made. Sometimes we don’t,” says Muniz. Metachromes forces us to confront the fact that painting is made from what Philip Guston called ‘coloured dirt’. It is not smoothed flat or artfully raised into ridges by brush or knife: the dirt is simply there, heaped and glued onto the surface, and provoking us to wonder what this stuff actually is.
Many art historians and critics don’t care much for that. They will talk of ‘cobalt’ as a shorthand for a strong blue, as though they think this is the hue of that silvery metal itself – and never mind the fact that the blue cobalt-based pigment of van Dyck had very little to do with the cobalt blue of van Gogh – the latter a nineteenth-century innovation that the artist called a “divine colour”.
Paint disguises this grainy minerality. Seeing it restored in Muniz’s images, you can’t help but wonder about the chemistry. Colours like this are rare in the earth, for even gorgeous gems such as sapphire and ruby turn pale and disappointing when finely ground. When geology alone supplied the palette, reds were rusty and yellows were tawny, both of them ochres. Malachite, a copper ore, gave a pleasant bluish green, but not the vibrant green that light-harvesting chlorophyll brings to grass and the leaves of flowers. Of purple there was almost nothing – a rare manganese mineral, if you were lucky; or dyestuffs that blanched in the light. For orange you took your life in your hands with realgar, the highly toxic sulfide of arsenic.
Blue alone is well served by nature: it could be extracted, at immense cost and labour, from lapis lazuli, a mineral mined in Afghanistan, and brought across the seas – or as the name has it, ultra marina. Mere grinding wasn’t enough to turn lapis into this midnight blue: the blue component (called lazurite) had to be extracted from the impurities, which otherwise made the powder greyish. This was done by mixing the powder with wax into a dough and kneading it repeatedly in water to flush out the blue. The best ultramarine cost more than its weight in gold, and was reserved only for the most venerated of subjects; skies made do with cheaper blues, unless you were Giotto. When Johannes Itten, the mercurial colour theorist of the Bauhaus, insists that blue connotes meekness and profundity in medieval images of the Virgin, he forgets that to the medieval artist symbolism embraced the material too: the ultramarine robes of the mother of Christ honour her through their vast expense.
Wetness transforms
Today ultramarine is produced by the tonne – not from lapis lazuli, which is still very costly, but as an industrial chemical made in a furnace from soda, sand, alumina and sulfur. This was a triumph of nineteenth-century chemistry. Some of the leading chemists of that age were assigned the task of finding good synthetic substitutes for ultramarine, and they succeeded with cobalt blue; but it did not match the real thing. In 1824 the Society for the Encouragement of National Industry in France offered a prize of 6000 francs to anyone who could devise a way of making ultramarine synthetically. Its elemental constituents had been deduced in 1806, but the curious thing is that, unlike most pigments, ultramarine does not derive its colour from the presence of a particular metal in its crystal lattice (iron, copper, lead, mercury, chromium, cobalt and zinc are among the usual sources). Here, sulfur is the unlikely origin of the rich blue, and to understand it properly you need quantum chemistry.
The prize drew plenty of charlatans, but within four years it was claimed by the colour-maker Jean-Baptiste Guimet from Toulouse – a claim that was challenged, with justification but without success, by a German chemist at Tübingen. ‘French’ ultramarine offered Giotto’s glories for a fraction of the cost, although at first artists could not bring themselves to believe that it could be as good as the natural material.
The ultramarine you will buy in the tube today is made with this synthetic product, and it is probably better than the gritty grindings Titian used. But the dry pigment is another thing altogether. It seems to emit a glow just beyond the visible range; it has a depth and velvety lustre that the liquid binder can only diminish. It is a colour to gaze on for long moments. Here is Frank Stella’s dilemma redoubled: if you think the paint looks good in the can, you should see the pigment before it becomes paint.
That was what bothered Yves Klein in the 1950s. “What clarity and lustre, what ancient brilliance”, he said of raw pigments. But the touch of a binding medium is fatal to this texture: “The affective magic of the colour had vanished. Each grain of powder seemed to have been extinguished individually by the glue or whatever material was supposed to fix it to the other grains as well as to the support.” What goes wrong? The way light bounces off the pigment particles is modified by the medium, even if it is perfectly transparent, because light entering it cannot but be refracted, the rays bent as they are in water. This effect of the medium depends on its refractive index – a scientific measure, if you like, of its ray-bending power. So the same pigments may look different in different media: vermilion mixed with egg yolk is a rich orange-scarlet, but when Renaissance painters began mixing it with oils the result was less impressive, and soon they turned to other reds, to the crimsons and magentas of red lakes.
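A back-of-envelope calculation gives a sense of how much the medium matters. The fraction of light turned back at each grain surface (at normal incidence) follows the Fresnel formula, where $n_p$ and $n_m$ are the refractive indices of pigment and medium – the figures below are rounded, illustrative values, taking ultramarine’s index as roughly 1.5:

$$R = \left(\frac{n_p - n_m}{n_p + n_m}\right)^2$$

For a grain of index $n_p \approx 1.5$ surrounded by air ($n_m = 1$), $R = (0.5/2.5)^2 = 0.04$; sink the same grain in a transparent resin of $n_m \approx 1.45$ and $R$ falls to $(0.05/2.95)^2 \approx 0.0003$, roughly a hundredfold drop. The velvety sparkle of the dry powder is the sum of myriads of these tiny reflections, which is why even a perfectly clear binder can all but extinguish it.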
By the 1950s, painters already had several alternative binders to oil at their disposal – nitrocellulose, acrylics, alkyds, most of them petrochemical-based resins. Was there a binder, Klein wondered, that would fix the pigment particles in place without destroying their lustre? This was a chemical matter, for which the artist needed technical assistance. He got it from his architect friend Bernadette Allain, and most importantly from a Parisian manufacturer of paint, Édouard Adam. In 1955 they found that a resin produced by the chemicals company Rhône-Poulenc, called Rhodopas M60A, thinned with ethanol and ethyl acetate, had the desired effect. In Klein’s words, “it allowed total freedom to the specks of pigment such as they are found in powder form, perhaps combined with each other but nevertheless autonomous.”
Klein used this binder with the pigment that best deserved it: ultramarine. He premiered his brilliant blue sculpture-canvases in Milan in 1957 with an exhibition called ‘Proclamation of the Blue Epoch’. This blue became his trademark: coating blocks, impregnating sponges, covering twigs and body casts. It was International Klein Blue, registered to preserve its integrity in 1960, two years before the artist’s untimely death.
Another solution was to simply refuse to degrade the pure pigment with any kind of binder. In his Ex Voto for the Shrine of St Rita (1961), Klein encases ultramarine powder along with a synthetic rose pigment and gold leaf in clear plastic boxes. In Metachromes, Muniz offers a homage to Klein’s pigment triptych, this celebration of raw, synthetic “coloured dirt”.
Like Klein’s works, Metachromes poses the question: when does the material “leave the can” and become a work of art? It’s not a question that needs an answer. It’s there to remind us that “what is made” should require us to consider too “how it is made” and “what it is made of” – that we are not merely Homo sapiens but Homo faber, and that it is because we are both that we survive.
Wednesday, October 05, 2016
Music with national characteristics
In the wake of the most ugly nationalism we have seen from a British government for some decades, here’s my oh-so-topical column on music cognition for the Italian magazine Sapere.
____________________________________________________________________
The nationalistic flag-waving at the Last Night of the Proms, the climax of an annual series of summer concerts in London, has always left me cold, and this year more than ever – though sadly, the threatened influx of European Union flags in protest at the British vote to exit the EU failed to materialize. In the current context, the obligatory and patriotic “Pomp and Circumstance” march by Edward Elgar seemed more painful than ever.
Is it, though, just familiarity (especially the “Land of Hope and Glory” section) that makes this piece seem so distinctly English? Apparently not. For it turns out that national characteristics are objectively imprinted on some classical compositions from the late nineteenth and early twentieth centuries, when Elgar, as well as other audibly “British” composers such as Gustav Holst and Ralph Vaughan Williams, were working.
Musical nationalism was big during this era, and it’s not difficult to sense that also in the stylings of Ravel and Debussy (France), Smetana and Janáček (present-day Czech Republic) and Granados and Albéniz (Spain). Some of these composers infused their works with the local culture by using folk melodies, but there’s more to the “national” feel than that.
Neuroscientist Aniruddh Patel and his coworkers have analysed themes from English and French composers of this era, specifically excluding those based on folk melodies, to compare their rhythms and the rise and fall of their pitch with the corresponding properties of spoken English and French.
The two languages have distinct patterns of stress, for example: English words tend to be stressed on the first syllable, French on the last (“Philip” versus “Philippe”). And there’s more variation in both vowel duration and pitch in English than in French. These differences can be precisely quantified, and Patel and colleagues found that they are mirrored in the respective countries’ music of that time. How far this is true of other nations isn’t yet clear, nor might it apply in earlier times when much of European music was in thrall to the innovations of the Italians.
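For the rhythmic part of that comparison, the yardstick used in this line of work is the normalized pairwise variability index (nPVI), which measures how much each duration in a sequence – vowels in speech, note values in a melody – differs from its neighbour. Here is a minimal sketch in Python (the example sequences are invented, purely for illustration):

```python
def npvi(durations):
    """Normalized pairwise variability index of a sequence of durations.
    0 means perfectly even; higher values mean more contrast between
    successive durations."""
    if len(durations) < 2:
        raise ValueError("need at least two durations")
    terms = [abs(a - b) / ((a + b) / 2)
             for a, b in zip(durations, durations[1:])]
    return 100 * sum(terms) / len(terms)

print(npvi([1, 1, 1, 1]))          # 0.0: metronomically even
print(npvi([1.5, 0.5, 1.5, 0.5]))  # 100.0: strongly alternating
```

Measured this way, English vowel durations vary more than French ones, and Patel and colleagues found the same ordering in the instrumental themes of English and French composers.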
Still, this is one of the ways in which music and language are closely related. It rather undermines Janáček’s conviction that music is universal: that without words you can’t tell a Czech folk song from an English or French one. It seems you really can, and for the same reason that you can distinguish the languages. Personally I welcome these local flavourings in music as much as I do in cuisine – although perhaps it also explains why Elgar strikes me as so much English stodge.
Thursday, July 28, 2016
Why, in politics, science is about more than science
This is a longer version of a comment in the latest issue of Research Fortnight. It took a while, and inevitably events have somewhat moved on. But I'd argue that the same basic issue remains: scientists need to look at (and worry about) the broader issues, not just the immediate ramifications for funding and employment (extremely important though those are). They shouldn't regard this as a done deal that they now just have to make the best of, still less accept the rhetoric of "the people have spoken". There is too much at stake. With that in mind, I was pleased to lend a bit of help in getting together Nature's collection of reflections on lessons to be learned.
_______________________________________________________________
I cannot believe that even those 12-14 percent of British scientists who – if Nature’s poll is reliable – voted for the UK to leave the European Union in the referendum on 23 June could have doubted that there would be some short-term harm to science. I suspect that they will also not have questioned the predictions, now amply confirmed, from economists that the markets would suffer a rather catastrophic shock. Those scientists could, after all, hardly do their job without some faith in expert opinion, even if that is how some politicians apparently now do theirs.
One must assume that the “pro-Leave” scientists calculated that this short-term pain would produce long-term gain – if not necessarily for British science, then at least for other important aspects of how the nation functions. I feel rather confident in asserting that none will have welcomed the current upsurge in racism and xenophobia unleashed in Britain. It’s scarcely less hard to imagine that they will have questioned the value of openness and freedom of movement to the kind of culture that enables science to thrive. And I would be astonished if any foresaw the farcical chaos and irresponsibility that has ensued since the referendum among the leaders of the Leave campaign, and the climate of total political, economic and social uncertainty it has engendered.
Perhaps there was some calculus in all of this that made sense to them. It should go without saying that they had every right to act on that calculation.
To the 80 percent or so of scientists who preferred the UK to remain in the EU, however, this maths is probably rather mysterious. Perhaps there is some rational case for imagining that science, and more implausibly still the economy, would weather the storm and emerge the better for it. Certainly, EU involvement in British science was not an unalloyed benefit, just as it was not for British industry, agriculture or public policy. So drastic a surgical solution to the problems seems reckless, but the case for it is not self-evidently crazy.
Yet while I am prepared, even eager, to think that Leave voters among scientists made their decision for reasons quite unconnected to the racist, deceitful, anti-intellectual tenor of the campaign that swayed public opinion, it is baffling to me how anyone could have persuaded themselves that such a vote would not strengthen that platform and those who stood on it. I am ready to doubt that it added anything to the convictions of Leave scientists that their position was endorsed by Nigel Farage, Marine Le Pen, Donald Trump, Vladimir Putin, Rupert Murdoch and the English Defence League. I’m genuinely puzzled how they found considerations that overrode any qualms about keeping such company. I would be interested to know the answer.
Still, the predicted damage has been done, and continues to be done, and will be done for some time to come. That many of the major promises of the Leave campaign have now evaporated, several of its claims are (and always were) transparently false, and its main ringleader Boris Johnson has absconded leaves us entitled to call the campaign nothing short of a hoax. What we can’t know is whether the hoax won the day or whether the revolt was inevitable in any case, given the degree of understandable political disaffection in the nation.
Either way, British science has been made a political football. It isn’t reasonable to expect that the fate of UK research should have featured in the deliberations of people struggling to make a living in communities long abandoned by the political class. Yes, of course the economic strength of a nation depends vitally on the vigour of its scientific and technological capability, but it is the job of politicians to know that. That science simply played no part in the referendum debate is neither surprising nor a meaningful cause for lament. The problem, of course, is precisely that referenda can have major implications for such issues without the remotest prospect of their being taken into consideration by the vast majority of the population – which is what makes them so dangerous and unwise.
Most European scientists won’t need to be told this by their British colleagues. I’m sure my experience so far – of pretty much universal sympathy at the harm wrought by political games, and of determination to work together to safeguard the European scientific enterprise – is the common one. This won’t – or shouldn’t – be just an exercise in damage limitation, but also an opportunity for science to set an example of international cooperation and trust in the face of rising nationalism and bigotry. The good will is immensely heartening. Many European scientists seem to recognize that what has been revealed in the UK – a distrust of the European project coupled to fears about immigration – is and has for some time been a Europe-wide concern.
What is to be done? British scientists have called for guarantees both that the science budget will be untouched and that the security of European scientific staff and students at UK universities will be affirmed. You’d expect no less – though sadly, you could also have rightly predicted that no such assurances would be forthcoming while the government is leaderless and in utter turmoil. In the meantime, some non-British researchers at UK universities are already contemplating seeking jobs abroad where access to EU funding is undiminished, and recruitment of overseas researchers to British universities has been immediately placed at risk. The UK’s major scientific bodies and universities will have to be relentless and strident with demands for clear and urgent commitments here.
That, however, is not enough. The prospect now of British governance finally being taken in hand by serious politicians, with whom scientists might ultimately imagine negotiating terms and conditions in a systematic, rational way, is reassuring. But it also risks blinding us to the fact that the situation arose in the first place from an unprecedented and pathological paroxysm. Even those scientists who voted Leave must surely concede that the day was carried not because of any reasoned arguments that could be brought to the table but in spite of them. However valid the howl of fury behind it, the referendum outcome did not emerge from a political debate in any meaningful sense of the word. Facts and informed expert opinions were bypassed by demagogues and press barons able to manipulate public opinion in a way that is now going to hit the disenfranchised and deprived sectors of the population – those most in favour of leaving the EU – the hardest. That is why some commentators have justifiably called the referendum “the end of politics”.
To say as much is of course easily portrayed as the condescending conviction of educated elites that the masses don’t know what is good for them. Frankly, there is too much at stake to accept that kind of disingenuousness. What is truly condescending is to pretend that people who were lied to, relentlessly fed false promises and fake scares, and egged on by fickle opportunists have been given the opportunity to make a real choice.
I would fully understand if those who have carefully considered the options and decided that on balance Britain should not be a part of the EU were to consider it a profound perversion of democracy now to declare that they have made the “wrong” decision and to ignore it. The question is how far we can turn a blind eye to abuses of the system. The Leave vote was swayed by far more than the usual share of electioneering lies, by key promises that never had any prospect of being fulfilled, and by leaders who had no real expectation of or plans for victory – foremost among them Johnson, who is now seen to have lacked any real commitment to the cause in the first place. There is no precedent for this in British politics in living memory, and the British parliament needs to think very carefully before pretending it is just politics as normal. When even a leading and widely respected Conservative peer (Michael Heseltine) calls Johnson’s actions “contemptible”, you know something serious is afoot. How dishonourable does the political process need to get before it becomes improper to honour it?
The campaign by Johnson and his one-time ally (now his Brutus) Michael Gove – who dismissed economic experts by comparing himself with Einstein – was an astonishingly audacious con trick. Arguably the most troubling aspect is that, in its cheap populism and shameless contempt for facts, it has set a precedent for a potentially even more disastrous “post-political” deception on the other side of the Atlantic.
This much now seems incontrovertible. Scientists of all persuasions should be deeply concerned about these trends, because they threaten the values on which a reasoning society is based. Historians and others, including the former head of the Church of England Rowan Williams, have noted parallels in both the Brexit and Trump campaigns with the manipulation of disaffection, nationalism and prejudice in Germany in 1933. (The use of imagery in the UK Leave campaign with clear echoes of Nazi racist propaganda was one of the more visible and explicit of those comparisons.) On that earlier occasion, scientists and most other academics decided that their priority had to be simply to “safeguard German science”, rather than to recognize any obligation to broader principles of social justice and informed debate. The result was that they were rendered impotent and easily controlled by their new rulers.
The fact that the UK is evidently not about to become a far-right nation is not the point. It is imperative to restore a proper political process before that prospect is even on the horizon. We must hope it does not become the point in Austria either, though the Brexit vote now casts a shadow on the impending re-run of the election narrowly lost by the far-right Freedom Party. The anti-EU far right in France, Denmark and the Netherlands is emboldened and sharpening its knives.
There is no painless solution. To blindly pursue the Brexit route will be unquestionably harmful to Britain in the short term – it has been already. To simply ignore it not only would take more political courage than we have any realistic hope of witnessing, but would also almost certainly provoke severe social unrest and leave Britain more divided and disaffected than ever – potentially an effective recruiting climate for extremist groups.
A second referendum might be the least worst option. It doesn’t make the issue any less unsuited to a referendum in the first place, but the British population would at least have been served fair warning of what to expect. (The editor of the immensely influential pro-Leave tabloid The Sun has already confessed to “buyer’s remorse.”) But most importantly of all, the social problems that have thrown British politics further off the rails than it has ever been need to be tackled as an urgent priority. And yet no politician in government has really alluded to this issue at all in the past few days – the disaffected communities look likely to be left to sink or swim in the economic climate their anger has created.
In any event, scientific institutions need to look beyond their own interests and see this as a crisis, not just of funding and personnel, but of due political process and informed, evidence-driven decision making. We owe at least that much to history.
_______________________________________________________________
I cannot believe that even those 12-14 percent of British scientists who – if Nature’s poll is reliable – voted for the UK to leave the European Union in the referendum on 23 June could have doubted that there would be some short-term harm to science. I suspect that they will also not have questioned the predictions, now amply confirmed, from economists that the markets would suffer a rather catastrophic shock. Those scientists could, after all, hardly do their job without some faith in expert opinion, even if that is how some politicians apparently now do theirs.
One must assume that the “pro-Leave” scientists calculated that this short-term pain would produce long-term gain – if not necessarily for British science, then at least for other important aspects of how the nation functions. I feel rather confident in asserting that none will have welcomed the current upsurge in racism and xenophobia unleashed in Britain. It’s scarcely less hard to imagine that they will have questioned the value of openness and freedom of movement to the kind of culture that enables science to thrive. And I would be astonished if any foresaw the farcical chaos and irresponsibility that has ensued since the referendum among the leaders of the Leave campaign, and the climate of total political, economic and social uncertainty it has engendered.
Perhaps there was some calculus in all of this that made sense to them. It should go without saying that they had every right to act on that calculation.
To the 80 percent or so of scientists who preferred the UK to remain in the EU, however, this maths is probably rather mysterious. Perhaps there is some rational case for imagining that science, and more implausibly still the economy, would weather the storm and emerge the better for it. Certainly, EU involvement in British science was not an unalloyed benefit, just as it was not for British industry, agriculture or public policy. So drastic a surgical solution to the problems seems reckless, but the case for it is not self-evidently crazy.
Yet while I am prepared, even eager, to think that Leave voters among scientists made their decision for reasons quite unconnected to the racist, deceitful, anti-intellectual tenor of the campaign that swayed public opinion, it baffling to me how anyone could have persuaded themselves that such a vote would not strengthen that platform and those who stood on it. I am ready to doubt that it added anything to the convictions of Leave scientists that their position was endorsed by Nigel Farage, Marine Le Pen, Donald Trump, Vladimir Putin, Rupert Murdoch and the English Defence League. I’m genuinely puzzled how they found considerations that overrode any qualms about keeping such company. I would be interested to know the answer.
Still, the predicted damage has been done, and continues to be done, and will be done for some time to come. That many of the major promises of the Leave campaign have now evaporated, several of its claims are (and always were) transparently false, and its main ringleader Boris Johnson has absconded leaves us entitled to call the campaign nothing short of a hoax. What we can’t know is whether the hoax won the day or whether the revolt was inevitable in any case, given the degree of understandable political disaffection in the nation.
Either way, British science has been made a political football. It isn’t reasonable to expect that the fate of UK research should have featured in the deliberations of people struggling to make a living in communities long abandoned by the political class. Yes, of course the economic strength of a nation depends vitally on the vigour of its scientific and technological capability, but it is the job of politicians to know that. That science simply played no part in the referendum debate is neither surprising nor a meaningful cause for lament. The problem, of course, is that this is precisely what makes referenda with major implications for such issues so dangerous and unwise: for the vast majority of the population there was never the remotest prospect of those implications being taken into consideration.
Most European scientists won’t need to be told this by their British colleagues. I’m sure my experience so far – of pretty much universal sympathy at the harm wrought by political games, and of determination to work together to safeguard the European scientific enterprise – is the common one. This won’t – or shouldn’t – be just an exercise in damage limitation, but also an opportunity for science to set an example of international cooperation and trust in the face of rising nationalism and bigotry. The good will is immensely heartening. Many European scientists seem to recognize that what has been revealed in the UK – a distrust of the European project coupled to fears about immigration – is and has for some time been a Europe-wide concern.
What is to be done? British scientists have called for guarantees both that the science budget will be untouched and that the security of European scientific staff and students at UK universities will be affirmed. You’d expect no less – though sadly, you could also have rightly predicted that no such assurances would be forthcoming while the government is leaderless and in utter turmoil. In the meantime, some non-British researchers at UK universities are already contemplating seeking jobs abroad where access to EU funding is undiminished, and recruitment of overseas researchers to British universities has been immediately placed at risk. The UK’s major scientific bodies and universities will have to be relentless and strident with demands for clear and urgent commitments here.
That, however, is not enough. The prospect now of British governance finally being taken in hand by serious politicians, with whom scientists might ultimately imagine negotiating terms and conditions in a systematic, rational way, is reassuring. But it also risks blinding us to the fact that the situation arose in the first place from an unprecedented and pathological paroxysm. Even those scientists who voted Leave must surely concede that the day was carried not because of any reasoned arguments that could be brought to the table but in spite of them. However valid the howl of fury behind it, the referendum outcome did not emerge from a political debate in any meaningful sense of the word. Facts and informed expert opinion were bypassed by demagogues and press barons able to manipulate public opinion in a way that is now going to hit the disenfranchised and deprived sectors of the population – those most in favour of leaving the EU – the hardest. That is why some commentators have justifiably called the referendum “the end of politics”.
To say as much is of course easily portrayed as the condescending conviction of educated elites that the masses don’t know what is good for them. Frankly, there is too much at stake to accept that kind of disingenuousness. What is truly condescending is to pretend that people who were lied to, relentlessly fed false promises and fake scares, and egged on by fickle opportunists have been given the opportunity to make a real choice.
I would fully understand if those who have carefully considered the options and decided that on balance Britain should not be a part of the EU were to consider it a profound perversion of democracy now to declare that they have made the “wrong” decision and to ignore it. The question is how far we can turn a blind eye to abuses of the system. The Leave vote was swayed by far more than the usual share of electioneering lies, by key promises that never had any prospect of being fulfilled, and by leaders who had no real expectation of or plans for victory – foremost among them Johnson, who is now seen to have lacked any real commitment to the cause in the first place. There is no precedent for this in British politics in living memory, and the British parliament needs to think very carefully before pretending it is just politics as normal. When even a leading and widely respected Conservative peer (Michael Heseltine) calls Johnson’s actions “contemptible”, you know something serious is afoot. How dishonourable does the political process need to get before it becomes improper to honour it?
The campaign by Johnson and his one-time ally (now his Brutus) Michael Gove – who dismissed economic experts by comparing himself with Einstein – was an astonishingly audacious con trick. Arguably the most troubling aspect is that, in its cheap populism and shameless contempt for facts, it has set a precedent for a potentially even more disastrous “post-political” deception on the other side of the Atlantic.
This much now seems incontrovertible. Scientists of all persuasions should be deeply concerned about these trends, because they threaten the values on which a reasoning society is based. Historians and others, including the former Archbishop of Canterbury Rowan Williams, have noted parallels in both the Brexit and Trump campaigns with the manipulation of disaffection, nationalism and prejudice in Germany in 1933. (The use of imagery in the UK Leave campaign with clear echoes of Nazi racist propaganda was one of the more visible and explicit of those comparisons.) On that earlier occasion, scientists and most other academics decided that their priority had to be simply to “safeguard German science”, rather than to recognize any obligation to broader principles of social justice and informed debate. The result was that they were rendered impotent and easily controlled by their new rulers.
The fact that the UK is evidently not about to become a far-right nation is not the point. It is imperative to restore a proper political process before that prospect is even on the horizon. We must hope it does not become the point in Austria either, although the Brexit vote now casts a shadow on the impending re-run of the election narrowly lost by the far-right Freedom Party. The anti-EU far right in France, Denmark and the Netherlands is emboldened and sharpening its knives.
There is no painless solution. To blindly pursue the Brexit route will be unquestionably harmful to Britain in the short term – it has been already. To simply ignore it not only would take more political courage than we have any realistic hope of witnessing, but would also almost certainly provoke severe social unrest and leave Britain more divided and disaffected than ever – potentially an effective recruiting climate for extremist groups.
A second referendum might be the least worst option. It doesn’t make the issue any less unsuited to a referendum in the first place, but the British population would at least have been served fair warning of what to expect. (The editor of the immensely influential pro-Leave tabloid The Sun has already confessed to “buyer’s remorse.”) But most importantly of all, the social problems that have thrown British politics further off the rails than it has ever been need to be tackled as an urgent priority. And yet no politician in government has really alluded to this issue at all in the past few days – the disaffected communities look likely to be left to sink or swim in the economic climate their anger has created.
In any event, scientific institutions need to look beyond their own interests and see this as a crisis, not just of funding and personnel, but of due political process and informed, evidence-driven decision making. We owe at least that much to history.
Tuesday, June 21, 2016
Michael Gove is no Einstein
Is anyone any longer in any serious doubt that the leaders of the Brexit campaign feel they can just come out with whatever fact-free, delirious twaddle jumps into their head and expect us to swallow it?
It’s getting quite surreal now. Michael Gove, faced with a question on LBC Radio about what to make of all the top economists who have warned of the dire consequences of the UK leaving the European Union, decided that there was a parallel here with the way Einstein was treated in Germany in the 1930s.
“We have to be careful about historical comparisons”, said Gove, “but Albert Einstein during the 1930s was denounced by the German authorities for being wrong and his theories were denounced and one of the reasons of course he was denounced was because he was Jewish. They got 100 German scientists in the pay of the government to say that he was wrong and Einstein said, ‘Look, if I was wrong, one would have been enough’.”
So, did you get that? Michael Gove is Einstein, and the economists who have decided that Brexit would be economically bad are like Nazis in the pay of the government.
Except that he is simply peddling half-truths and fictions. Gove clearly thinks these “100 scientists” were put up to it by the Nazi authorities. But the infamous book A Hundred Authors Against Einstein was published in 1931, before the Nazis came to power and while Germany was still ruled by the Weimar government – which Einstein supported.
And as the title suggests, they weren’t “100 scientists”. They were a ragbag of academics and other “intellectuals” of various stamps, among which there was only one real physicist, an insignificant (and retired) figure called Karl Strehl. They had no expertise, and evidently had not the faintest idea what to make of relativity. The book wasn’t taken in the slightest bit seriously by the German scientific community, and the vast majority of leading physicists in Germany supported Einstein’s ideas. Of course A Hundred Authors (most of the authors were present in name only – just a few expressed their views in the book) was motivated in considerable part by anti-Semitism, as well as by objections to Einstein’s internationalism. How that is supposed, in Gove’s mind, to bear on the reasons for the economists’ position on Brexit is anyone’s guess. Do they reach conclusions different to his because they are similarly bigoted in some fashion? The parallel is as meaningless as it is fatuous. Gove faced a very serious question here, and he had nothing to say beyond falsehoods and bluster. If Brexit wins, we can expect a lot more of the same.
Sunday, June 12, 2016
Best of both worlds in quantum computing
Here's an expanded version of my news story for Nature on Google's new quantum computer. It's a somewhat complicated story, so a bit more explanation might be useful.
____________________________________
Combining the best of two leading approaches might be the way to make a full-scale multipurpose quantum computer.
A universal quantum computer, which can solve any computational problem, has been a goal of research on quantum computing since its origins three decades ago. A team in California has now made an experimental prototype of such a device. It uses nine solid-state quantum bits (qubits), which can be configured to solve a wide range of problems, and has the potential to be scaled up to larger systems.
The new device was made by Rami Barends and coworkers at Google’s research laboratories in Santa Barbara, collaborating with the group of physicist John Martinis at the University of California at Santa Barbara and with a team at the University of the Basque Country in Bilbao, Spain.
“It’s terrific work in many respects, and is filled with valuable lessons for the quantum computing community”, says Daniel Lidar, a quantum-computing expert at the University of Southern California in Los Angeles.
The Google circuit combines some of the advantages of the two main approaches to quantum computing so far. One is to build the computer’s circuits from qubits in particular arrangements geared to an algorithm for solving a specific problem. This is analogous to a tailor-made digital circuit in a conventional microprocessor made from classical bits. Much of the theory of quantum computing is based on this digital approach, which includes methods for the all-important problem of error correction to avoid errors accumulating and derailing a calculation. But so far practical implementations have been possible only with a handful of qubits.
The other approach is called adiabatic quantum computing (AQC). Here, instead of encoding an algorithm in a series of digital-logic operations between qubits, the computer encodes the problem of interest in the states of a pool of qubits, gradually evolving and adjusting the interactions between them to “shape” their collective quantum state. In principle just about any problem can be encoded into the same group of qubits.
This is an analog rather than a digital approach, and is limited by the effects of random noise, which introduces errors that can’t be corrected as systematically as in digital circuits. What’s more, there’s no guarantee that all problems can be solved efficiently this way, says Barends.
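To make the adiabatic idea concrete, here is a toy numerical sketch – my own illustration, not the Google team’s code – in which an invented two-bit cost function is encoded in a problem Hamiltonian, and the ground state is tracked as the interpolation from a simple “mixing” Hamiltonian proceeds:

```python
# Toy sketch of adiabatic quantum computing (AQC), written for this
# post - not the Google team's code. A tiny optimization problem
# ("find the two-bit string with lowest cost") is encoded in a problem
# Hamiltonian, and we watch the ground state as the interpolation
# H(s) = (1 - s) * H_mix + s * H_prob runs from s = 0 to s = 1.
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])   # Pauli-X matrix
I2 = np.eye(2)

# Mixing Hamiltonian: its ground state is an equal superposition of
# all bitstrings, which is easy to prepare.
H_mix = -(np.kron(X, I2) + np.kron(I2, X))

# Problem Hamiltonian: diagonal, with the (invented) cost of each
# bitstring 00, 01, 10, 11 on the diagonal. The answer here is "10".
H_prob = np.diag([3.0, 2.0, 0.0, 1.0])

for s in np.linspace(0.0, 1.0, 5):
    H = (1.0 - s) * H_mix + s * H_prob
    evals, evecs = np.linalg.eigh(H)      # exact diagonalization
    gap = evals[1] - evals[0]             # energy gap above the ground state
    probs = np.abs(evecs[:, 0]) ** 2      # ground-state probabilities
    print(f"s = {s:.2f}  gap = {gap:.3f}  P = {np.round(probs, 3)}")

# At s = 1 the ground state sits entirely on index 2, i.e. "10", the
# lowest-cost bitstring. A real device must sweep s slowly compared
# with the timescale set by the minimum gap - hence "adiabatic".
```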
While most research on quantum computing uses the digital approach, adiabatic quantum computing has furnished the first commercial devices, made by D-Wave Systems in Burnaby, Canada, for about $15 million apiece. Google owns a D-Wave device, but its own researchers are searching for ways to improve the method.
In particular, they wanted to find some way of implementing error correction. Without it, scaling up AQC to more qubits will be difficult, since errors will accumulate more quickly in larger systems. With that in mind, Barends and colleagues decided to combine the AQC method with the digital approach, which has a well developed theory of error correction [1].
“Implementing adiabatic optimization on a universal quantum computer is not a new idea”, explains Andrew Childs of the University of Maryland. “But now the Google group has actually carried this out, which makes for a nice test of their system.”
To do that, the Google team uses a row of nine qubits, fashioned from cross-shaped films of aluminium about 400 micrometres across from tip to tip, deposited on a sapphire surface. The aluminium becomes superconducting when cooled to 1.1 kelvin, in which state its electrical resistance falls to zero. (The Google team actually operates the device at just 0.02 K to reduce the thermal noise.) This is state-of-the-art technology for qubits, Lidar says.
Superconductivity is a quantum-mechanical effect, and a bit of information – a 1 or 0 – can be encoded in different states of the superconducting current. Crucially, these quantum bits can be placed in superposition states, simultaneously encoding a 1 and 0 – the key to the power of quantum computing.
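Here is a minimal sketch of that basic building block – again my own illustration, nothing to do with the hardware’s control code: a single qubit represented as a two-component state vector, put into an equal superposition by a Hadamard gate.

```python
# Illustrative only: one qubit as a two-component state vector, put
# into superposition by a Hadamard gate - the elementary move of
# gate-model quantum computing.
import numpy as np

ket0 = np.array([1.0, 0.0])                            # the state |0>
hadamard = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)

psi = hadamard @ ket0        # apply the gate: equal superposition
probs = np.abs(psi) ** 2     # Born rule: measurement probabilities

print(psi)     # [0.707 0.707] - equal amplitudes on 0 and 1
print(probs)   # [0.5 0.5] - a measurement gives 0 or 1 equally often
```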
The interactions between neighbouring qubits are controlled by linking them via logic gates. Using these gates, the nine qubits can be steered step by step into a state that encodes the solutions to a problem. As a demonstration, the researchers let their array simulate a system of coupled magnetic “spins”, like a row of magnetic atoms – a problem well explored in condensed-matter physics. They can then interrogate the states of the qubits to determine the lowest-energy state of the spins they represent.
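As a rough stand-in for that kind of problem (the couplings and field strength here are invented, and I use three spins rather than the device’s nine), one can build a small Ising-chain Hamiltonian from Pauli matrices and find its lowest-energy state by brute-force diagonalization:

```python
# A rough stand-in for the demonstration problem: three coupled
# "spins" in a transverse-field Ising chain. Couplings and field are
# invented for illustration; the real device has nine qubits. We build
# the Hamiltonian from Pauli matrices and diagonalize it exactly.
import numpy as np
from functools import reduce

Z = np.diag([1.0, -1.0])                   # Pauli-Z
X = np.array([[0.0, 1.0], [1.0, 0.0]])     # Pauli-X
I2 = np.eye(2)

def embed(op, site, n=3):
    """Single-spin operator `op` acting on `site` of an n-spin chain."""
    return reduce(np.kron, [op if k == site else I2 for k in range(n)])

J, h = 1.0, 0.5    # coupling strength and transverse field (made up)

H = sum(-J * embed(Z, k) @ embed(Z, k + 1) for k in range(2))  # ZZ couplings
H = H + sum(-h * embed(X, k) for k in range(3))                # transverse field

evals, evecs = np.linalg.eigh(H)
print("lowest energy:", evals[0])
dominant = np.argmax(np.abs(evecs[:, 0]) ** 2)
print("dominant spin configuration:", format(dominant, "03b"))
```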
That’s a fairly simple problem to solve on a classical computer too. But the researchers show that their device is also able to handle so-called “non-stoquastic” problems, which aren’t tractable on classical computers. These include simulations of the interactions between many electrons, needed to make exact calculations in quantum chemistry. The ability to simulate molecules and materials at the quantum level could be one of the most valuable applications of quantum computing.
A great advantage of this new approach is that it allows for the incorporation of quantum error correction, says Lidar. Although the researchers didn’t demonstrate that in this work, the Google team has previously shown how error correction might be achieved on their nine-qubit device [2].
“Quantum error correction is needed to allow for addressing really large problems, otherwise with each qubit and coupler you add a source of noise”, says Barends’ co-author Alireza Shabani at Google. “With error correction, our approach becomes a general-purpose algorithm that is in principle scalable to an arbitrarily large quantum computer.”
The Google device is still very much a prototype. “With early small-scale devices like this one, it’s not yet possible to tackle problems that cannot be solved on traditional classical hardware”, says Lidar.
But “in a couple of years it may be possible to work with devices having more than 40 qubits”, he adds. “At that point it will become possible to simulate quantum dynamics that is inaccessible on classical hardware, which will mark the advent of ‘quantum supremacy’.”
1. Barends, R. et al., Nature doi:10.1038/nature17658 (2016).
2. Kelly, J. et al., Nature 519, 66-69 (2015).
Tuesday, May 31, 2016
Is music brain food?
The latest issue of the Italian science magazine Sapere is all about food. So this seemed a fitting theme for my column on music cognition.
___________________________________________________
“If music be the food of love, play on, give me excess of it”, says Duke Orsino in Shakespeare’s Twelfth Night. The nineteenth-century German music critic Eduard Hanslick wasn’t impressed by that sentiment. It doesn’t matter what music it is, the Duke implies; I just want a load of it, like a big slice of cheesecake, to make me feel good.
But after all, mightn’t music be simply cheesecake for the ears? That is what the cognitive scientist Steven Pinker suggested in his book How the Mind Works. Music, he proposed, is simply a parasite that exploits auditory and cognitive processes which evolved for other reasons, just as cheesecake exploits a primal urge to grab fats and sugars. As he put it, “Music appears to be a pure pleasure technology, a cocktail of recreational drugs that we ingest through the ear to stimulate a mass of pleasure circuits at once.”
After all, Pinker went on, “Compared with language, vision, social reasoning, and physical know-how, music could vanish from our species and the rest of our lifestyle would be virtually unchanged.”
These claims provoked outrage. Imagine comparing Bach’s B minor Mass to an Ecstasy pill! And by suggesting that music could vanish from our species, Pinker didn’t appear to mind much if it did. So his remarks were read as a challenge to prove that music has a fundamental evolutionary value, that it has somehow helped us to survive as a species. It seemed as though the very dignity and value of music itself were at stake.
Pinker might be wrong, of course. Indeed, recent research suggests that there might be neurons in our auditory cortex dedicated solely to music, suggesting that sensitivity to music could be a specific evolutionary adaptation, not a byproduct of other adaptive traits. But whether or not that’s so is rather beside the point. Music is an inevitable product of human intelligence, regardless of whether it’s genetically hard-wired. The human mind naturally possesses the mental apparatus needed for musicality, and will make use of these tools whether we intend it or not. Music isn’t something we do by choice – it’s ingrained in our auditory, cognitive, memory and motor functions, and is implicit in the way we construct a sonic landscape from the noises we hear.
So music couldn’t vanish from our species without fundamentally changing our brains. The sixth-century philosopher Boethius seemed to understand this already: music, he said, “is so naturally united with us that we cannot be free from it even if we so desired.” Cheesecake, on the other hand – I can take it or leave it.
Wednesday, May 25, 2016
Still selfish after all these years?
The 40th anniversary of the publication of Richard Dawkins’ The Selfish Gene is a cause for celebration, as I’ve said.
This anniversary has also reawakened the debate about the book’s title. Do we still think genes are “selfish”? Siddhartha Mukherjee's The Gene makes no mention of the idea, while talking about pretty much everything else. It’s no surprise that Dawkins sticks to his guns, of course. He justifies it in this fashion:
"If you ask what is this adaptation good for, why does the animal do this – have a red crest, or whatever it is - the answer is always, for the good of the genes that made it. That is the central message of The Selfish Gene and that remains true, and reinforced."
This is a statement crafted to brook no dissent. But it says nothing about the selfishness of genes. It says that adaptations are, well, adaptive, in that they help the organism survive and pass on its genes. For a gene to be metaphorically selfish, it must surely promote its survival at the expense of other genes.
I’m not going to rehearse again the argument that the “selfish gene” promotes the misconception – which I suspect is now very common – that different genes, not different alleles of the same gene, compete with one another. (In the comment to my blog post above, Matt Ridley points out that there can be exceptions, but at such a stretch as to prove the rule. Still, as Matt says, we're basically on the same page.) The fact is that genes can only propagate with the help of other genes. John Maynard Smith recognized this in the 1970s, and so did Dawkins. He chose the wrong title, and the wrong metaphor, and wrote a superb book about them.
I find it curious that there’s such strong opposition to that fact. For example, I’m struck by how, when the selfish-gene trope is questioned, defenders will often point to rare circumstances in which genes really do seem to be “selfish” – which is to say, where propagation of a gene might be deleterious to the success of an organism (and thus to its other genes). It is hard to overstate how bizarre this argument is. It justifies a metaphor designed to explain the genetic basis of evolutionary adaptation by pointing to a situation in which genetic selection is non-adaptive. You might equally then say that, when genes are truly selfish, natural selection doesn’t “work”.
What is meant to be implied in such arguments is that this selfishness is always there lurking in the character of genes, but that it is usually masked and only bursts free in exceptional circumstances. That, of course, underlines the peril of such an anthropomorphic metaphor in the first place. The notion that genes have any “true” character is absurd. Genetic evolution is a hugely complex process – far more complex than Dawkins could have known in 1976. And complex processes are rarely served well by simple, reductionistic metaphors.
Think of it this way. There are situations in which Darwinian natural selection favours the emergence of sub-optimal fitness (for example, here). This is no big surprise, and certainly doesn’t throw into doubt the fundamental truth of Darwin’s idea. However, we could then, in the spirit of the above, argue that the real character of natural selection is to favour the less-than-fittest, but this is usually masked by the emergence of optimal fitness.
There is an old guard of evolutionary theorists, battle-scarred from bouts with creationism and intelligent design, who are never going to accept this, and who will never see why the selfish gene has become a hindrance to understanding. They can be recognized from the emotive hysteria of their responses to any such suggestion – you will find them clearly identified in David Dobbs’ excellent response to criticisms of his Aeon article on the subject. It is a shame that they have fallen into such a polarized attitude. As the other responses to David’s piece attest, the argument has moved on.
Monday, May 09, 2016
SATs are harder than you think
How’s your classical mechanics? Mine’s a bit crap. That’s why I’m having trouble working out the following question.
You have a cylinder that rotates around a horizontal axis, like the sort used to pull up buckets from wells. Around the cylinder is wrapped a rope attached to a weight. As the weight falls and the rope unwinds, you measure the time it takes to descend a certain distance.
Now you increase the mass of the cylinder – say, it’s made from iron, not wood (but of the same size). Does the weight fall more slowly? At risk of embarrassment, I’ll say that I think it does. The torque on the cylinder is the same in both cases, but what changes is the cylinder’s moment of inertia, and thereby (via torque = moment of inertia times angular acceleration) the angular acceleration. So the weight takes longer to descend the same distance when attached to the iron cylinder because the angular acceleration is less.
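Working it through more carefully seems to confirm the conclusion, though with one correction to my reasoning: the rope tension, and hence the torque, is not in fact the same in the two cases, because the tension depends on the acceleration. Balancing forces on the weight and torques on the cylinder gives an acceleration a = g/(1 + I/mr²), where I is the cylinder’s moment of inertia and r its radius; for a uniform solid cylinder the radius cancels and a = g/(1 + M/2m). A quick sketch of the numbers, with masses made up for illustration:

```python
# Exact rigid-body treatment of the falling weight, neglecting axle
# friction. The masses are made up; the cylinder radius cancels out.
import math

g = 9.81    # gravitational acceleration, m/s^2
m = 1.0     # falling mass, kg
d = 1.0     # drop distance, m

def fall_time(M):
    """Time to fall distance d from a uniform solid cylinder of mass M.
    With I = M r^2 / 2, the acceleration is a = g / (1 + M / (2 m))."""
    a = g / (1.0 + M / (2.0 * m))
    return math.sqrt(2.0 * d / a)

print(f"wooden cylinder (M = 0.5 kg): t = {fall_time(0.5):.2f} s")
print(f"iron cylinder   (M = 5.0 kg): t = {fall_time(5.0):.2f} s")
# The iron cylinder gives the longer descent time, as suspected.
```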
Also, the greater mass of the cylinder means, via Amontons’ law, that the friction with the axle is greater in the latter case.
Am I right? Or do I need (it is quite possible) to go back to my A-level mechanics?
The reason I ask is that I am trying to understand a question in the SATs science test (now dropped, by the way) for Year 6, i.e. 11-year-olds.
You might wonder why 11-year-olds are having to grapple with torques and so forth. So am I. But they come up in this question:
Now, I suspect that the answer the pupils are expected to give is that the bigger piece of card incurs more air resistance. That is true. But it is not the only influence at play, since the card obviously adds to the rotor’s mass. So this is a rather complicated question in mechanics.
You might think I’m overthinking the problem. But I can’t see how it is ever a good idea to choose a question for which a little more knowledge makes the problem harder. Or am I just wrong here about the answer?
Elsewhere in the SATs papers you find difficulties that seem to be the result purely of bad questioning. Take this one, from an English Reading and Comprehension test. Pupils have to read the following passage:
Then they are asked
My (10-year-old) daughter was puzzled by this reference to “burning of rocks in space”. What does it mean to burn rocks in space? For one thing, you can’t do it. I mean, sure, meteorites will get hot and oxidized as they fall through the atmosphere – but not in space. And the frictional heating is not really about burning. “Burning up” is something of a euphemism here, and it does not mean the same thing as “burning”. The intended answer is trivial, of course: “in a flash” just means that the “burning up” happens quickly. But the question is worded in a way that prevents it from quite making sense.
Is anyone checking this stuff, before it is unleashed on unsuspecting and highly stressed pupils and teachers?
Wednesday, April 27, 2016
Where's the soul?
I worry much more than I should about whether embryos have souls. That’s to say, I worry about how those folks who believe that at some stage humans are granted a soul by the grace of God make sense of this question.
But as I discovered while reviewing Henry Greely’s book The End of Sex, Father Tadeusz Pacholczyk – who has a doctorate in neuroscience from Yale and writes for the National Catholic Bioethics Center in Philadelphia – has at least cleared up one thing for me. Whether or not embryos have a soul should, he says, have no bearing on our judgement about the rights and wrongs of using human embryo tissue for research into stem cells, or presumably for research into anything else. He clarifies that Catholic tradition has no unanimous verdict on the precise moment of ensoulment. However, Saint Augustine, rarely consulted for his knowledge of embryology, “seemed to shift his opinion back and forth during his lifetime between immediate and delayed ensoulment”. No wonder; it’s a tough question. Much, much tougher, indeed, than Augustine could ever have imagined, because of course we can’t expect him to have known that only about 12% of fertilized eggs in vivo will develop beyond three months of pregnancy. We had best assume, then, that ensoulment is delayed until some time after that, for otherwise heaven will be overwhelmingly filled with the souls of embryos less than three months old. I don’t think any of the Christian Fathers ever imagined that heaven should be as odd a place as that.
The point, Pacholczyk says, is irrelevant in any case, because a human embryo at any stage is destined for a soul “and should not be cannibalized for stem cell extraction”. (The use of “cannibalize” to denote dismemberment for spare parts applies, by the way, only to machines. For living organisms, it refers to the eating of one’s own species. But heck, it sounds bad, doesn’t it?) We must assume that the creation of embryos for any other purpose than procreation is also prohibited by Catholic teaching. In fact, Pacholczyk says, it is even more immoral to destroy an embryo that had not received an immortal soul (although we don’t, remember, know if anyone actually does this, because we don’t know when ensoulment happens) than to destroy an ensouled embryo – worse than murder! – “because the immortal soul is the principle by which that person could come to an eternal destiny with God in heaven”. That person? Yes, an embryo is always a person – or rather, “the privileged sanctuary of one meant to develop as a human person.”
But evidently, the majority of human embryos are not, as Pacholczyk insists, “meant [by God, one assumes] to develop as a human person” – they don’t get beyond three months. Or has God really made such a hash of human procreation, so that all these embryos destined for personhood keep failing to attain it?
The corollary to all this must be that the Catholic Church disapproves of IVF too, since that generally involves the creation of embryos that are not given the opportunity to grow to personhood. And as the Catholic World Report reminded us in 2012, it does indeed:
“Catholic teaching prohibits in vitro fertilization, maintaining that a child has the right to be conceived in the marital embrace of his parents. Human sexuality has two components, the unitive and procreative; IVF separates these components and makes the procreative its only goal. Pope Paul VI said that there is an “inseparable connection, willed by God, and unable to be broken by man on his own initiative, between the two meanings of the conjugal act: the unitive meaning and the procreative meaning.”
“There are other issues involved. IVF makes the child a commodity produced in a laboratory, and makes doctors, technicians, and even business people part of the conception process. The sperm used is usually obtained by masturbation, which the Church teaches is immoral. The sperm or eggs used may not come from the couple desiring the child; because one of the spouses may be infertile, it may be necessary to use the sperm or eggs from an outsider.”
That phrase, making a child conceived through IVF “a commodity produced in a laboratory”, is one of the most obscene I have ever heard from the church in modern times. God’s love is infinite – but you, Louise Brown (and four million others), are just a commodity produced in a laboratory.
Of course, Catholic countries don’t tend to feel they can be quite this hardline with their citizens, and so they cook up some crude compromise, such as Italy insisting that all embryos created in IVF (a maximum of three) must be implanted. This flouts Catholic teaching, and also flouts the right of people using IVF to the best chance of making it work. Everyone loses.
Actually, there is a form of IVF that the Catholic church will sanction. It is called gamete intra-Fallopian transfer, or (cutely) GIFT. Here’s how I described it in my book Unnatural. The woman’s eggs are collected as in IVF and mixed with sperm in vitro. This mixture is then immediately transferred back to the woman’s Fallopian tubes, so that fertilization can occur inside the body. One claimed benefit of GIFT is that the embryo can begin its earliest development in ‘natural surroundings’ rather than in an ‘artificial environment’. It’s not clear that a developing embryo cares in the slightest about this distinction, and indeed GIFT is more invasive than standard IVF and makes it impossible to select the embryo of best apparent quality from several prepared in vitro. But it’s OK with the church, provided that the sperm is collected using a condom (a perforated, leaky one, mind) in sexual intercourse and not by masturbation – because everything then seems to be happening in its ‘natural’ place, with just a momentary sleight-of-hand involving a Petri dish. This obsession with the ‘proper’ mechanics, notwithstanding the lengths that are necessary here to achieve it, speaks of a deeply strange attitude towards the relation between sex and procreation, not to mention the bizarre and, I should have thought, highly disrespectful notion of a God who watches as if with clipboard in hand (but ready to avert his eyes at the crucial point) to tick off each step when it happens as it ‘ought’.
Generally I want to find ways to respect what people believe. But the Catholic position on IVF is on a par, in its inhumanity, with its position on condom use. If I sound sarcastic about it, please don’t read that as flippancy. It is fury. If these folks could content themselves with expressing their prejudices as blind faith and dogma, I would find it more palatable than if they tried to justify them with idiotic attempts at rational argument. I’m told that “Father Tad... studied in Rome, where he did advanced studies in theology and in bioethics.” I don’t find a shred of ethical reasoning in his comments on embryo research. It is unreason of the most retrograde kind.
But as I discovered while reviewing Henry Greely’s book The End of Sex, Father Tadeusz Pacholcyzk – who has a doctorate in neuroscience from Yale and writes for the National Catholic Bioethics Center in Philadelphia – has at least cleared up one thing for me. Whether or not embryos have a soul should, he says, have no bearing on our judgement about the rights and wrongs of using human embryo tissue for research into stem cells, or presumably for research into anything else. He clarifies that Catholic tradition has no unanimous verdict or tradition on the precise moment of ensoulment. However, Saint Augustine, rarely consulted for his knowledge of embryology, “seemed to shift his opinion back and forth during his lifetime between immediate and delayed ensoulment”. No wonder; it’s a tough question. Much, much tougher, indeed, than Augustine could ever have imagined, because of course we can’t expect him to have known that only about 12% of fertilized eggs in vivo will develop beyond three months of pregnancy. We had best assume, then, that ensoulment is delayed until some time after that, for otherwise heaven will be overwhelmingly filled with souls of embryos less than three months old. I don’t think any of the Christian Fathers ever imagined that heaven should be as odd a place as that.
The point, Pacholcyzk says, is irrelevant in any case, because a human embryo at any stage is destined for a soul “and should not be cannibalized for stem cell extraction”. (The use of “cannibalize” to denote dismemberment for spare parts applies, by the way, only to machines. For living organisms, it refers to the eating of one’s own species. But heck, it sounds bad, doesn’t it?) We must assume that the creation of embryos for any other purpose than procreation is also prohibited by Catholic teaching. In fact, Pacholcyzk says, it is even more immoral to destroy an embryo that had not received an immortal soul (although we don’t, remember, know if anyone actually does this, because we don’t know when ensoulment happens) than to destroy an ensouled embryo – worse than murder! – “because the immortal soul is the principle by which that person could come to an eternal destiny with God in heaven”. That person? Yes, an embryo is always a person – or rather, “the privileged sanctuary of one meant to develop as a human person.”
But evidently, the majority of human embryos are not, as Pacholcyzk insists, “meant [by God, one assumes] to develop as a human person” – they don’t get beyond three months. Or has God really made such a hash of human procreation, so that all these embryos destined for personhood keep failing to attain it?
The corollary to all this must be that the Catholic Church disapproves of IVF too, since that generally involves the creation of embryos that are not given the opportunity to grow to personhood. And as the Catholic World Report reminded us in 2012, it does indeed:
“Catholic teaching prohibits in vitro fertilization, maintaining that a child has the right to be conceived in the marital embrace of his parents. Human sexuality has two components, the unitive and procreative; IVF separates these components and makes the procreative its only goal. Pope Paul VI said that there is an “inseparable connection, willed by God, and unable to be broken by man on his own initiative, between the two meanings of the conjugal act: the unitive meaning and the procreative meaning.”
“There are other issues involved. IVF makes the child a commodity produced in a laboratory, and makes doctors, technicians, and even business people part of the conception process. The sperm used is usually obtained by masturbation, which the Church teaches is immoral. The sperm or eggs used may not come from the couple desiring the child; because one of the spouses may be infertile, it may be necessary to use the sperm or eggs from an outsider.”
That phrase, making a child conceived through IVF “a commodity produced in a laboratory”, is one of the most obscene I have ever heard from the church in modern times. God’s love is infinite – but you, Louise Brown (and four million others), are just a commodity produced in a laboratory.
Of course, Catholic countries don’t tend to feel they can be quite this hardline with their citizens, and so they cook up some crude compromise, such as Italy insisting that all embryos created in IVF (a maximum of three) must be implanted. This flouts Catholic teaching, and also flouts the right of people using IVF to the best chance of making it work. Everyone loses.
Actually, there is a form of IVF that the Catholic church will sanction. It is called gamete intra-Fallopian transfer, or (cutely) GIFT. Here’s how I described it in my book Unnatural. The woman’s eggs are collected as in IVF and mixed with sperm in vitro. This mixture is then immediately transferred back to the woman’s Fallopian tubes, so that fertilization can occur inside the body. One claimed benefit of GIFT is that the embryo can begin its earliest development in ‘natural surroundings’ rather than in an ‘artificial environment’. It’s not clear that a developing embryo cares in the slightest about this distinction, and indeed GIFT both is more invasive than standard IVF and makes it impossible to select the embryo of best apparent quality from several prepared in vitro. But it’s OK with the church, provided that the sperm is collected using a condom (a perforated, leaky one, mind) in sexual intercourse and not by masturbation – because everything then seems to be happening in its ‘natural’ place, with just a momentary sleight-of-hand involving a Petri dish. This obsession with the ‘proper’ mechanics, notwithstanding the lengths that are necessary here to achieve it, speaks of a deeply strange attitude towards the relation between sex and procreation, not to mention the bizarre and, I should have thought, highly disrespectful notion of a God who watches as if with clipboard in hand (but ready to avert his eyes at the crucial point) to tick off each step when it happens as it ‘ought’.
Generally I want to find ways to respect what people believe. But the Catholic position on IVF is on a par, in its inhumanity, with its position on condom use. If I sound sarcastic about it, please don’t read that as flippancy. It is fury. If these folks contented themselves with expressing their prejudices as blind faith and dogma, I would find it more palatable than their trying to justify those prejudices with idiotic attempts at rational argument. I’m told that “Father Tad... studied in Rome, where he did advanced studies in theology and in bioethics.” I don’t find a shred of ethical reasoning in his comments on embryo research. It is unreason of the most retrograde kind.
Wednesday, March 23, 2016
On the attack
One of the easiest ways to bring humour to music is with timbre. It’s cheap (literally) but still funny to play Led Zeppelin’s “Whole Lotta Love” or Richard Strauss’s “Also Sprach Zarathustra” on kazoo, as the Temple City Kazoo Orchestra did in the 1970s. Most things played on kazoo are funny. It just has a comical timbre.
Such performances inadvertently make a serious point about timbre, which is that it can matter more than the notes, a fact easily overlooked when music is considered as notes on paper. Yet musicologists have largely neglected timbre, for the simple reason that we don’t really know what it is. One definition amounts to a negative: if two sound signals differ while being identical in pitch and loudness, the difference is down to timbre.
One feature of timbre is the spectrum of pitches in a note: the amplitudes of the various overtones. These are quite different, for example, for a trumpet and a violin both playing the same note. But our sense of timbre depends also on how this spectrum, and the overall volume, change over time, particularly in the initial “attack” period of the first few fractions of a second. These are acoustic properties, though, and it might be more relevant to ask what the perceptual qualities are by which we distinguish timbre. Some music psychologists claim that these are things like “brightness” and attack, while others argue that we interpret timbre in terms of the physical processes we imagine causing the sound: blowing, plucking, striking and so on. It’s significant too that we often talk of the “colour” of the sound.
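To make that concrete, here is a minimal sketch in Python of what the negative definition leaves behind: two signals with the same fundamental pitch and the same RMS level that nonetheless differ audibly. The overtone weights and attack times below are invented, purely for illustration.

```python
import numpy as np

SR = 44100  # sample rate, Hz

def tone(f0, overtone_amps, attack_s, dur_s=1.0):
    """Additive synthesis: fundamental f0 plus weighted overtones,
    shaped by a simple linear 'attack' envelope."""
    t = np.arange(int(SR * dur_s)) / SR
    sig = sum(a * np.sin(2 * np.pi * f0 * (k + 1) * t)
              for k, a in enumerate(overtone_amps))
    sig *= np.minimum(t / attack_s, 1.0)     # rise over the attack period
    return sig / np.sqrt(np.mean(sig ** 2))  # equalize RMS level ("loudness")

# Same pitch (440 Hz), same RMS loudness, different timbre:
bright = tone(440, [1.0, 0.8, 0.6, 0.5], attack_s=0.005)  # fast attack, strong overtones
mellow = tone(440, [1.0, 0.2, 0.05, 0.0], attack_s=0.15)  # slow attack, weak overtones
```

By the definition above, everything that distinguishes these two signals is, by elimination, timbre.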
Arnold Schoenberg thought it should be possible to write music based on changes of timbre rather than pitch. It’s because we don’t know enough about how the brain organizes timbre that this notion didn’t really work. All the same, Schoenberg and his pupils created a style called Klangfarbenmelodie (sound colour melody), in which melodies were parceled out between instruments of different timbre, producing a mesmeric, shimmering effect. Anton Webern’s arrangement of a part of Bach’s “The Musical Offering” is the most renowned example.
There’s one thing for sure: timbre is central to our appreciation of music, and if we relegate it below more readily definable qualities like pitch and rhythm then we miss out on a huge part of what conditions our emotional response. It would be fair to say that critical opinion on the music of heavy-metal band Motörhead, led by the late bass guitarist Lemmy Kilmister, was divided. But if ever there was a music defined by timbre, this was it.
Thursday, March 17, 2016
The Roman melting pot
Here's my column for the March issue of Nature Materials.
_________________________________________________________
Recycling of materials is generally good for the planet, but it makes life hard for archaeologists. Analysis of ancient materials, for example by studying element or isotope compositions, can provide clues about the provenance of the raw materials and thus about the trade routes and economies of past cultures. But that business becomes complex, even indecipherable, if materials were reused and perhaps reprocessed in piecemeal fashion.
This, however, does seem to have been the way of the world. Extracting metals from ores and minerals from quarries and mines, and making glass and ceramics, were labour-intensive and often costly affairs, so that a great deal of the materials inventory was repurposed. Besides, the knowledge was sometimes lacking to make a particular material from scratch in situ. The glorious cobalt-blue glass in the windows of medieval French churches and cathedrals is often rich in sodium, characteristic of glass from the Mediterranean region. It was probably made from shards imported from the south using techniques that the northern Europeans didn’t possess, and perhaps dating back to Roman or Byzantine times. The twelfth-century monk Theophilus records that the French collected such glass and remelted it to make their windows [1].
In that instance, composition does say something about provenance. But if glass was recycled en masse, the chemical signature of its origin may get scrambled. It’s not surprising that such reuse was very common, for making glass from scratch was hugely burdensome: by one estimate, 100 kg of wood was needed to produce the ash for making 2 kg of glass, and collecting it took a whole day [2].
Just how extensively glass was recycled in large batches in Roman times is made clear in a new study by Jackson and Paynter [3]. Their analysis of glass fragments from a Roman site in York, England, shows that a lot of it came out of “a great big melting pot”: a jumble of recycled items melted together. The fragments can be broadly divided into classes differentiated by their antimony and manganese compositions. Both of these metals were typically added purposely during the Roman glass-making process because they could remove the colour (typically a blue-green tint) imparted by the impurities, such as iron, in the sand or ash [4]. Manganese was known in medieval Europe as “glassmaker’s soap”.
It was this difficulty of manufacture that made colourless glass highly prized – and so particularly likely to be recycled. The results of Jackson and Paynter confirm how common this was. The largest category of glass samples that they analysed – around 40 percent of the total – contained high levels of both Sb and Mn, implying that glass rendered colourless by either additive was sorted from the coloured remainder and then recycled together in the melt.
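Purely as an illustration of the sort of grouping involved, here is a toy sketch in Python. The threshold values are invented for the example, not the compositional cut-offs actually used in ref. [3].

```python
def classify(sb_ppm, mn_ppm, high=1000):
    """Assign a glass fragment to a broad compositional group by its
    antimony (Sb) and manganese (Mn) content (hypothetical thresholds)."""
    if sb_ppm >= high and mn_ppm >= high:
        return "high Sb-Mn: decolourized glasses mixed and recycled together"
    if sb_ppm >= high:
        return "Sb-decolourized"
    if mn_ppm >= high:
        return "Mn-decolourized"
    return "neither: naturally tinted blue-green glass"

# A few made-up fragments (Sb, Mn in parts per million):
for sb, mn in [(2400, 1800), (2100, 90), (60, 1500), (40, 80)]:
    print((sb, mn), "->", classify(sb, mn))
```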
But most of those samples aren’t colourless. That’s because remelting tends to incorporate other impurities, such as aluminium, titanium and iron, from the crucibles, furnaces or blowing irons. The recycled glass may then end up as tinted and undistinguished as that made with only low amounts of Mn. As a result, while it is derived from once highly prized, colourless glass reserved for fine tableware, this high Sb-Mn glass becomes devalued and used for mundane, material-intensive items such as windows and bottles. Eventually it just disappears into the melting pot.
1. Theophilus, On Divers Arts, transl. Hawthorne, J. G. & Smith, C. S. (Dover, New York, 1979).
2. Smedley, J. W., Jackson, C. M. & Booth, C. A., in Ceramics and Civilisation Vol. 8, eds McCray, P. & Kingery, W. D. (American Ceramic Society, 1998).
3. Jackson, C. M. & Paynter, S., Archaeometry 58, 68-95 (2016).
4. Jackson, C. M., Archaeometry 47, 763-780 (2005).
_________________________________________________________
Tuesday, March 01, 2016
Many worlds or many words?
I’ve been rereading Max Tegmark’s 1997 paper on the Many Worlds Interpretation of quantum mechanics, written in response to an informal poll taken that year at a quantum workshop. There, the MWI was the second most popular interpretation adduced by the attendees, after the Copenhagen Interpretation (which is here undefined). What, Tegmark asks, can account for the robust, even increasing, popularity of the MWI even after it has been so heavily criticized?
He gives various possible reasons, among them the idea that the emerging understanding of decoherence in the 1970s and 1980s removed the apparently serious objection “why don’t we perceive superpositions then?” Perhaps that’s true. Tegmark also says that enough experimental evidence of genuine quantum weirdness (quantum nonlocality, molecular superpositions and so on) had accumulated by then that maybe experimentalists (apparently a more skeptical bunch than theorists) were concluding, “hell, why not?” Again, perhaps so. Perhaps they really did think that “weirdness” here justified weirdness “there”. Perhaps they had become more ready to embrace quantum explanations of homeopathy and telepathy too.
But honestly, some of the stuff here. It’s delightful to see Tegmark actually write down for once the state vector for an observer, since I’ve always wondered what that looked like. This particular observer makes a measurement on the spin state of a silver atom, and is happy with an up result but unhappy with a down result. In the former case, her state looks like this: |☺>. The latter case? Oh, you got there before me: |☹>. These two states are then combined as tensor products with the corresponding spin states. These equations are identified by numbers, rather as you do when you’re doing science.
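The construction itself is easy enough to sketch, for what it’s worth. Here are a few lines of numpy doing it; the amplitudes are my own choice, not Tegmark’s.

```python
import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])    # spin states
happy, sad = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # observer states |smile>, |frown>

# After the measurement interaction the observer is correlated with the
# spin: a|up>|smile> + b|down>|frown>. Take a 70:30 weighting, not 50:50.
a, b = np.sqrt(0.7), np.sqrt(0.3)
state = a * np.kron(up, happy) + b * np.kron(down, sad)

print(state)                  # approx. [0.837  0.  0.  0.548]
print(abs(a)**2, abs(b)**2)   # 0.7 0.3 -- the branch weights at issue
```

Whether those branch weights deserve the name “probabilities” is, of course, exactly what is at stake in what follows.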
Well, but what then of the objection that the very notion of probability is problematic when one is dealing with the MWI, given that everything that can happen does happen with certainty? This issue has been much debated, and certainly it is subtle. Subtler, I think, than the resolution Tegmark proposes. Let’s suppose, he says, that the observer is sleeping in bed when the spin measurement is made, and is placed in one or other of two identical rooms depending on the outcome. Yes, I can see you asking in what sense she is then an observer, and invoking Wigner’s friend and so on, but stay with me. You could at least imagine some apparatus designed to do this, right? So then she wakes up and wonders which room she is in. And she can then meaningfully calculate the probabilities – 50% for each. And, says Tegmark, these probabilities “could have been computed in advance of the experiment, used as gambling odds, etc., before the orthodox linguist would allow us to call them probabilities.”
Did you spot the flaw? She went to sleep – perhaps having realized that she’d have a 50% chance of waking up in either room – and then when she woke up she could find out which. But hang on – she? The “she” who went to sleep is not the “she” who woke up in one of the rooms. According to this view of the MWI, that first she is a superposition of the two shes who woke up. All that first she can say is that, with 100% certainty, two future shes will occupy both rooms. At that point, the “probability” that “she” will wake up in room A or room B is a meaningless concept. “She”, or some other observer, could still place a bet on it, though, right, knowing that there will be one outcome or the other? Not really – rational bettors would know that it makes no difference, if the MWI holds true. They’ll win and lose either way, with certainty. I wonder whether Max, who I think truly does believe the MWI, would place a bet.
The point, I think, is that a linguist would be less bothered by the definition of “probability” here than by the definition of the observer. Posing the issue this way involves the usual refusal to admit that we lack any coherent way to relate the experiences of an individual before a quantum event (on which their life history is contingent) to the whole notion of that “same” individual afterwards. Still, we have the maths: |☺> + |☹> (pardon me for not normalizing) becomes |☺> and |☹> afterwards. And in Tegmark’s universe, it’s the maths that counts.
Oh, and I didn’t even ask what happens when the probability of the spin measurements is not 50:50 but 70:30. Another day, perhaps.