I have a piece in the Guardian online about this paper from Richard Muller that is causing so much fuss, though it says nothing new and hasn’t even passed peer review yet (and might not). Actually my piece is not really about the paper itself, which is discussed elsewhere, but about the question of scientists revising their views (or not).
I suspect one could publish a piece on the Guardian’s Comment is Free that read simply “climate change”, and then let them get on with it. There are a few comments below my piece that relate to the article, but they quickly settle down into yet another debate among themselves about whether climate change is real. Hadn’t they all exhausted themselves in the 948 comments following Leo Hickman’s other piece on this issue? But there’s some value in it, not least in sampling the range of views that non-scientist climate sceptics hold. I don’t mean that sarcastically – it seems important to know how all the scepticism justifies itself. Disheartening, sure, but useful.
________________________________________________________________
It’s tempting to infer from the reports of University of California physicist Richard Muller’s conversion that climate sceptics really can change their spots. Analyses by Muller’s Berkeley Earth Surface Temperature project, which have been made publicly available, reveal that the Earth’s land surface is on average 1.5 °C warmer than it was when Mozart was born, and that, as Muller puts it, “humans are almost entirely the cause”. He says that his findings are even stronger than those of the Intergovernmental Panel on Climate Change, which presents the consensus of the climate-science community that most of the warming in the past half century is almost certainly due to human activities. “Call me a converted skeptic”, says Muller in the New York Times.
Full marks for the professor’s scientific integrity, then. But those of us who agree with the conclusions of nearly every serious climate scientist on the planet shouldn’t be too triumphant. Muller was never your usual sceptic, picking and choosing his data to shore up an ideological position. He was sceptical only in the proper scientific sense of withholding judgement until he felt persuaded by the evidence.
Besides, Muller already stated four years ago that he accepted the consensus view – not because everyone else said so, but because he’d conducted his own research. That didn’t stop him from pointing out the (real) flaws with the infamous ‘hockey stick’ graph of temperature change over the past millennium, nor from accusing Al Gore of cherry-picking facts in An Inconvenient Truth.
In one sense, Muller is here acting as a model scientist: demanding strong evidence, damning distortions in any direction, and most of all, exemplifying the Royal Society’s motto Nullius in verba, ‘take no one’s word for it.’ But that’s not necessarily as virtuous as it seems. For one thing, as the Royal Society’s founders discovered, you have to take someone’s word for some things, since you lack the time and knowledge to verify everything yourself. And as one climatologist said, Muller’s findings only “demonstrate once again what scientists have known with some degree of certainty for nearly two decades”. Wasn’t it verging on arrogant to have so doubted his peers’ abilities? There’s a fine line between trusting your own judgement and assuming everyone else is a blinkered incompetent.
All the same, Muller’s self-confessed volte-face is commendably frank. It’s also unusual. In another rare instance, James Lovelock was refreshingly insouciant when he recently admitted that climate change, while serious, might not be quite as apocalyptic as he had previously forecast – precisely the kind of doom-mongering view that fuelled Muller’s scepticism. There’s surely something in Lovelock’s suggestion that being an independent scientist makes it easier to change your mind – the academic system still struggles to accept that getting things wrong occasionally is part of being a scientist.
But the problem is as much constitutional as institutional. Despite their claim that evidence is the arbiter, scientists rarely alter their views in major ways. Sure, they are often surprised by their discoveries, but on fundamental questions they are typically trenchant. The great astronomer Tycho Brahe never accepted the Copernican cosmos, Joseph Priestley never renounced phlogiston, Einstein never fully accepted quantum theory. Most great scientists have carried some obsolete convictions to the grave, which is why Max Planck claimed that science advances one funeral at a time.
This sounds scandalous, but actually it’s useful. Big questions in science are rarely resolved at a stroke by transparent experimental results. So they require vigorous debate, and the opposing views need resolute champions. Richard Dawkins and E. O. Wilson are currently locking horns about the existence of group selection in Darwinian evolution precisely because the answer is so far from obvious. I’d place money on neither of them ever recanting.
The fact is that most scientists seek not to convert themselves but to convert others. That’s fair enough, for it’s those others who can most objectively judge who has the best case.
Could this mean we actually need climate sceptics? Better to say that we need to subject both sides of the debate to rigorous scientific testing. Just as Muller has done.
Friday, July 27, 2012
Political interference
I’ve a mountain of stuff to put up here after a holiday. For starters, here’s the pre-edited version of an editorial for last week’s issue of Nature. I mention here in passing an opinion piece by Charles Lane of the Washington Post, but couldn’t sink my teeth into it as much as I’d have liked. It is breathtaking what passes as political commentary in the right-wing US media. Lane is worried that US social scientists have an unduly high proportion of Democrats. As I say below, that’s true for US academia generally. To Lane, this means there is a risk of political bias (so that social science is dangerous). Needless to say, there is quite a different interpretation that one might place on the fact that a majority of intelligent, educated Americans are liberals.
But the truly stupid part of his argument is that “Politicization was a risk political scientists accepted when they took government funding in the first place.” No one, Lane trumpets, has offered any counter-argument to that, “so I’ll consider that point conceded.” He’d do better to interpret the lack of response as an indication of the asinine nature of the assertion. Basically he is saying that all governments may reserve the right to employ the methods of dictatorship, imposing censorship and restricting academic freedoms. So if Congress acts like the Turkish government, what are those damned academics whining about? This is the thing about US right-wingers that just leaves us Europeans scratching our heads: they seem to believe that government is a necessary evil that should interfere as little as possible, unless that interference is based on right-wing ideology (for example, by tampering with climate research). Perhaps there’s nothing surprising about that view in itself, though; what’s weird is how blind those who hold it are to its inconsistency.
______________________________________________________________
A fundamental question for democracy is what to submit to the democratic process. The laws of physics should presumably be immune. But should public opinion decide which science gets studied, or at least funded? That’s the implication of an amendment to the US National Science Foundation’s 2013 spending bill approved by the House of Representatives in May. Proposed by Republican Jeff Flake, it would prevent the NSF from funding political science, for which it awarded about $11m in grants this year. The Senate may well squash the amendment, but it’s deeply concerning that it got so far. Flake was hoping for bigger cuts to the NSF’s overall budget, but had to settle for an easier target. He indulged in the familiar trick in the US Congress of finding research with apparently obscure or trivial titles and parading it as a waste of taxpayers’ money.
One can do this in any area of science. The particular vulnerability of the social sciences is that, being less cluttered with technical terminology, they seem superficially easier for the lay person to assess. As social scientist Duncan Watts of Microsoft Research in New York has pointed out, “everyone has experience being human, and so the vast majority of findings in social science coincide with something that we have either experienced or can imagine experiencing”. This means the Flakes of this world have little trouble proclaiming such findings obvious or insignificant.
Part of the blame must lie with the practice of labelling the social sciences ‘soft’, which too readily translates as woolly or soft-headed. Because they deal with systems that are highly complex, adaptive and not rigorously rule-bound, the social sciences are among the hardest of disciplines, both methodologically and intellectually. What is more, they suffer because their findings do sometimes seem obvious. Yet equally, the “obvious”, common-sense answer may prove quite false when subjected to scrutiny. There are countless examples, from economics to traffic planning, which is one reason why the social sciences probably unnerve some politicians used to making decisions based not on evidence but on intuition, wishful thinking and an eye on the polls.
What of the critics’ other arguments against publicly funded political science? They say it is more susceptible to political bias; in particular, more social scientists have Democratic leanings than Republican. The latter is true, but equally so for US academics generally. We can argue about why, but why single out political science? The charge of bias, meanwhile, is asserted rather than demonstrated.
And what has political science ever done for us? We don’t know why crime rates rise and fall or the effect of deterrents, we can’t solve the financial crisis or stop civil wars, we can’t agree on the state’s role in systems of justice or taxation. As Washington Post columnist Charles Lane argues, “the larger the social or political issue, the more difficult it is to illuminate definitively through the methods of ‘hard science’.” In part this just restates the fact that political science is among the most difficult of the sciences. To conclude that hard problems are better solved by not studying them is ludicrous. Should we slash the physics budget unless the dark-matter and dark-energy problems are solved? Lane’s statement falls for the very myth it wants to attack: that political science is ruled, like physics, by precise, unique, universal rules. In any case, we have little idea how successful political science has been, for politicians rarely pay much heed to evidence-based advice from the social sciences, unless of course the evidence suits them. And to constrain political scientists with utilitarian bean-counting is largely to miss the point anyway. The likes of John Rawls, Herbert Simon, Robert Axelrod, Kenneth Waltz and Karl Popper have enriched political debate beyond measure.
The general notion that politicians should decide what is or is not worthy of research is perilous. Here, the proper function of democracy is to establish impartial bodies of experts and leave it to them. But Flake’s amendment does more than just traduce a culture of expertise. Among the research he selected for ridicule were studies of gender disparity in politics and models for international analysis of climate change: issues unpopular with right-wingers. In other words, his interference is not just about cost-cutting but has a political agenda. That he and his political allies feel threatened by evidence-based study of politics and society does not speak highly of their confidence in the objective case for their policies. Flake’s amendment is no different in principle to the ideological infringements of academic freedom in Turkey or Iran. It has nothing to do with democracy.
Thursday, July 12, 2012
Name that colour
I don’t read much popular science. That’s not a boast, as if to say that I’m above such things, but a guilty confession – I ought to read more, but am too slow a reader. That I’m missing out is being confirmed for me now as I finally get round to reading Guy Deutscher’s Through the Language Glass, which was shortlisted for the Royal Society Winton Prize last year. I knew this was a book I wanted to read, because it deals in some detail with the linguistics of colour terminology, which I looked into while writing Bright Earth. I was finally moved to get it after writing the piece below for the BBC Future site a month or so ago, and wanting to do more with this very interesting work. Whether I will be able to do that or not remains to be seen, but I’m glad it motivated me to get Deutscher’s book, because it is absolutely splendid. I remember Richard Holmes, chairing the book prize panel, questioning how helpful it really was for a book to advertise itself with Stephen Fry’s quote “Jaw-droppingly wonderful”, but His Fryness is quite correct. There’s another chapter – well, perhaps another section – that I would have added to Bright Earth, had I known some of this stuff: I wasn’t aware that Gladstone (that Gladstone) had postulated that the invention of new dyes and pigments actually stimulated the development of colour terminology itself, since it was only (he said) when people could abstract colours from their manifestations in natural objects that they figured they needed words for them. It’s not at all clear if this is true, but it is an intriguing idea, and not obviously nonsense.
____________________________________________________
The artist Derek Jarman once met a friend on London’s Oxford Street and complimented him on his beautiful yellow coat. His friend replied that he’d bought it in Tokyo, where it wasn’t considered yellow at all, but green.
We don’t always agree about colour. Your red might be my pink or orange. Vietnamese and Korean don’t differentiate blue from green – leaves and sky are both coloured xanh in Vietnam. These overlaps and omissions can seem bizarre if they’re not part of your culture, but aren’t even visible if they are.
But we shouldn’t be too surprised by them. The visible spectrum isn’t like a paint colour chart, neatly separated into blocks of distinct hue, but is a continuum in which each colour blends into the next. Why should we expect to agree on where to set the boundaries, or on which colours are the most fundamental? The yellow band, say, is as wide as the cyan band, so why is yellow considered any more ‘basic’ than cyan?
A new study by physicist Vittorio Loreto at the University of Rome ‘La Sapienza’ and his colleagues argues that this naming and hierarchical ranking of colours isn’t, after all, arbitrary. The researchers say that there is a natural hierarchy of colour terms that arises from the interplay between our innate ability to distinguish one hue from another and the complex cultural negotiation out of which language itself appears.
In essence, their argument pertains to the entire edifice of language: how it is that we come to divide the world into specific categories of object or concept that we can all, within a given culture, agree on. Somehow we arrive at a language that distinguishes ‘cup’, ‘mug’, ‘glass’, ‘bowl’ and so on, without there being well-defined and mutually exclusive ‘natural’ criteria for these terms.
But the researchers have not chosen arbitrarily to focus on colour words. These have long been a focus for linguists, since they offer an ideal multicultural example of how we construct discrete categories from a world that lacks such inherent distinctions. Why don’t we have a hundred basic colour terms like ‘red’, ‘blue’ and so on, given that we can in principle tell apart at least this many hues (think back to those paint charts)? Or why not get by with just four or five colours?
In fact, some cultures do. The Dugum Dani people of New Guinea, for example, have only two colour words, which can best be translated as ‘black’ and ‘white’, or light and dark. A few other pre-literate cultures recognize only three colours: black, white and red. Others have only a handful more.
The curious thing is that these simplified colour schemes are not capricious. For one thing, the named colours tend to match the ‘basic’ colours of more complex chromatic lexicons: red, yellow, blue and so on. What’s more, the colours seem to ‘arrive’ in a culture’s evolving vocabulary in a universal order: first black and white, then red, then green or yellow (followed by the other of this pair), then blue... So there is no known culture that recognizes, say, just red and blue: you don’t tend to ‘get’ blue unless you already have black, white, red, yellow and (perhaps) green.
This universal hierarchy of colour names was first observed [actually Deutscher shows that this wasn’t the first observation, but a rediscovery of an idea proposed in the nineteenth century by the German philologist Lazarus Geiger] by anthropologists Brent Berlin and Paul Kay in 1969, but there has been no explanation for it. This is what Loreto and colleagues now purport to offer. They use a computer model of language evolution in which new words arise as if through a kind of ‘game’ played repeatedly between pairs of individuals in a population: one the speaker, the other the hearer. The speaker might talk about a particular object – a colour say – using a word that the hearer doesn’t already possess. Will the hearer figure out what the speaker is referring to, and if so, will she then adopt the same word herself, jettisoning her own word for that object or recognizing a new sub-category of such objects? It is out of many interactions of this sort, which may or may not act to spread a word, that the population’s shared language arises.
For colour words, this negotiation is biased by our visual perception. We don’t see all parts of the visible spectrum equally: it is easier for us to see small changes in hue (that is, in the wavelength of the light entering our eyes) in some parts than in others. Loreto and colleagues impose this so-called “just noticeable difference function” of colour perception on the inter-agent interactions in their model. That’s what makes it more likely that some bands of the spectrum will begin to emerge as more ‘deserving’ than others of their own colour word. In other words, the population of agents will agree faster on a word associated with some hues than others.
This speed at which a consensus arises about a colour word with an agreed meaning specifies the resulting hierarchy of such words. And the order in which this happens in the computer experiments – red first, then violet, green/yellow, blue, orange and then cyan – is very close to that identified by Berlin and Kay. (Black and white, which aren’t themselves spectral colours, must be assumed at the outset as the crude distinction between dark and light.) Crucially, this sequence can’t be predicted purely from the “just noticeable difference function” – that is, from the physiology of colour vision – but arises only when it is fed into the ‘naming game’.
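For the technically curious, here is a minimal toy sketch in Python of the kind of ‘naming game’ described above. It is my own illustrative simplification, not the category-game model that Loreto’s team actually uses: the hue ‘bins’ and their weights are invented stand-ins for the just-noticeable-difference bias, included only to show how the more readily discriminated hues tend to reach a shared name sooner.

```python
import random
from collections import defaultdict

# Toy naming game over a handful of hue "bins" -- a drastic simplification of the
# category game used by Loreto and colleagues. BIN_WEIGHTS stands in for the
# just-noticeable-difference bias: bins where hue discrimination is finer get
# talked about more often, so agreement on a word for them arrives sooner.
HUE_BINS = ["red", "green-yellow", "blue", "orange", "cyan"]  # illustrative labels only
BIN_WEIGHTS = [5, 4, 3, 2, 1]                                 # assumed bias, not measured data
N_AGENTS = 50

agents = [defaultdict(list) for _ in range(N_AGENTS)]  # per-agent word inventory for each bin
consensus_round = {}                                    # bin index -> round at which all agree

def play_round(t):
    speaker, hearer = random.sample(range(N_AGENTS), 2)
    topic = random.choices(range(len(HUE_BINS)), weights=BIN_WEIGHTS)[0]
    inventory_s, inventory_h = agents[speaker][topic], agents[hearer][topic]
    if not inventory_s:                                 # speaker invents a word if it has none
        inventory_s.append(f"w{topic}_{random.randrange(10**6)}")
    word = random.choice(inventory_s)
    if word in inventory_h:                             # success: both collapse to the winning word
        agents[speaker][topic] = [word]
        agents[hearer][topic] = [word]
    else:                                               # failure: the hearer learns the word
        inventory_h.append(word)
    # record the first round at which every agent holds exactly one, identical word
    if topic not in consensus_round:
        inventories = {tuple(a[topic]) for a in agents}
        if len(inventories) == 1 and len(next(iter(inventories))) == 1:
            consensus_round[topic] = t

for t in range(200_000):
    play_round(t)

for b, t in sorted(consensus_round.items(), key=lambda kv: kv[1]):
    print(f"{HUE_BINS[b]:12s} agreed at round {t}")
```

In repeated runs, the heavily weighted bins tend to settle on a single shared word first, giving an emergent ordering loosely analogous to the Berlin and Kay hierarchy, which is the qualitative point of the real model.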
The match isn’t perfect, however. For one thing, violet doesn’t appear in Berlin and Kay’s hierarchy. Loreto and colleagues explain its emergence in their sequence as an artificial consequence of the way reddish hues crop up at both ends of the visible spectrum. And Berlin and Kay listed brown after blue. But brown isn’t a spectral colour – it’s a kind of dark yellow/orange, and so can be considered a variant shade of orange. Whether or not you accept those explanations for the discrepancies, this model of language evolution looks set to offer a good basis for exploring factors such as cultural differences and contingencies, like those Jarman discovered, and how language gets transmitted between cultures, often mutating in the process.
Paper: V. Loreto, A. Mukherjee & F. Tria, Proc. Natl Acad. Sci. USA doi: 10.1073/pnas.1113347109.
Monday, July 09, 2012
Who's in charge?
When I was asked to write a piece for the Guardian about the GSK scandal, my first thought was that it would be nice to know Richard Sykes’ response to the court decision, given that at least some of what GSK is being punished for happened under his watch. Lacking the time to hunt him down, I hoped someone else might do that subsequently. They have. The result is quite astonishing. As the Observer also reports this weekend, he tells us that “I have not had a chance to read the newspapers and have not a clue as to what is going on.”
Is this a joke? Sykes is a busy man, but we are being asked to believe that a law case that has been dragging on for years, involving extremely serious malpractice and resulting in a $3 bn settlement, against the company of which he was chairman during at least part of the relevant period, has somehow passed him by, so that he’s now in the same position as the rest of us in having to read all about it in the papers – and that he hasn’t quite got round to that yet. If there is any truth in this – and what do I know about how these things work? – that is all the more shocking. I really do struggle to imagine a situation in which Sykes has managed to shut out all knowledge of this case, has not been called upon during its course, and now lacks the motivation or the sense of obligation to get up to speed on it. And even if all this were somehow plausible, could he not even at least come up with the kind of blandishments offered by the current GSK CEO about having now put things right? The company has pleaded guilty, for goodness’ sake, it is not even as though he can refuse to comment on the question of guilt and culpability. So Murdoch knew nothing, Diamond knew nothing, now Sykes knew nothing. Is there actually anyone in charge of the world?
Wednesday, July 04, 2012
The drugs aren't working
The Guardian seems to be keeping me busy at the moment. Here’s a piece published today about the GlaxoSmithKline scandal. It was apparently lightly edited for ‘legal’ reasons, but I’m not sure that Coca-Cola is really libelled here. Mind you, in Britain it’s hard to tell. Perhaps I’m safe so long as I don’t mention chiropractors.
______________________________________________________________
Perhaps the most shocking thing about the latest GlaxoSmithKline drug scandal is that malpractice among our overlords still has the ability to shock at all. Yet despite popular cynicism about doctors being in the pockets of the drug companies, there remains a sense that the people responsible for our healthcare are more principled and less corruptible than expenses-fiddling politicians, predatory bankers, amoral media magnates and venal police.
If this were a junk-food company lying about its noxious products, or a tobacco company pushing ciggies on schoolkids, we’d be outraged but hardly surprised. When a major pharmaceutical company is found to have been up to comparable misdemeanours – bad enough to warrant an astonishing $3 bn fine – it seems more of a betrayal of trust.
This is absurd, of course, but it shows how the healthcare industry benefits from its apparent proximity to the Hippocratic Oath. “Do more, feel better, live longer”, GSK purrs. How can we doubt a company that announces as its priorities “Improving the health and well-being of people around the world”, and “Being open and honest in everything we do”?
Now GSK admits that, in effect, it knowingly risked damaging the health of people around the world, and was secretive and fraudulent in some of what it did. Among other things, it promoted the antidepressant drug Paxil, approved only for adults, to people under 18; it marketed other drugs for non-approved uses; it suppressed scientific studies that didn’t suit (for example over the heart-attack risks of its diabetes drug Avandia), and over-hyped others that did; and it hosted outings for doctors in exotic locations and showered them with perks, knowing that this would boost prescriptions of its drugs.
I’m incensed not because this vindicates a conviction that pharmaceutical companies are staffed by profit-hungry liars and cheats, but precisely because I know that they are not: that so many of their scientists, and doubtless executives and marketers too, are decent folk motivated by the wish to benefit the world. We were bamboozled, but they have been degraded.
And it is precisely because Big Pharma really has benefitted the world, making life a great deal more tolerable and advancing scientific understanding, that the industry has acquired the social capital of public trust that GSK has been busy squandering.
But it’s time we accepted that it is a business like any other, and does not operate on a higher, more altruistic plane than Coca-Cola. It will do whatever it can get away with, whether that means redacting scientific reports, bribing academics and physicians, or pushing into ‘grey’ markets without proper consent or precaution. After all, this has happened countless times before. All the giants – AstraZeneca, Bristol-Myers Squibb, Merck, Eli Lilly – have been investigated for bribery. One of the most notorious episodes of misconduct involved Merck’s anti-inflammatory drug Vioxx, withdrawn in 2004 after the company persistently played down its risk of causing cardiovascular problems. History suggests that GSK CEO Andrew Witty’s assurances that lessons have been learnt are meaningless.
As with the banking scandals, GSK’s downfall is partly a failure of management – those at the top (some of the malpractice predates Witty’s incumbency) weren’t watching. It’s partly a failure of culture: the jollies and bribes came to seem normal, ethically unproblematic, even an entitlement, to both the donors and recipients.
And it’s partly a failure of regulation. The US Food and Drug Administration has seemed at times not just toothless but actually collusive. Meanwhile, some American academics, having enjoyed Big Pharma’s kickbacks for decades, are now shrieking about the Physician Payments Sunshine Act, a part of the ObamaCare package which would make it mandatory for physicians to declare any perks or payments received from drug companies greater than $10, whether as speaker fees, theatre tickets or Hawaiian holidays. The protestors claim they will drown in bureaucracy. In reality they will be forced to reveal how much these things supplement their already healthy income. Harvard physician Thomas Stossel claimed in the Wall Street Journal that the backhanders don’t harm patients. The GSK ruling shows otherwise.
But the problems are still deeper. You don’t have to be an anti-capitalist to admit the inadequacies of relying solely on market forces for our drugs – not least for those that, being urgently needed mostly by poor countries, will never turn a profit. Incentives for Global Health, a non-profit organization at Yale University, has argued the case for a global, public-sector drug development agency, funded for example by a Tobin tax. In the unlikely event that our leaders should dare to demand such genuine recompense for the moral bankruptcy of the financial world, there would be few better uses for it – and freedom from the corrupting influence of the profit margin adds another argument to this already compelling case.
One way or another, some rethinking of how drugs are discovered, developed, sold and used is needed, before the noble art of medicine comes to look more like Mr Wormwood selling a dodgy motor for whatever he can get away with.
Tuesday, July 03, 2012
Introducing Iamus
This story was in yesterday’s Guardian in slightly edited form. It was accompanied by a review of some of Iamus’s music by Tom Service, who was not terribly impressed. It’s a shame that Tom had only Hello World! to review, since that was an early piece by Iamus and so very much a prototype – things have moved on since then. I think his review was quite fair, but I had a sense that, knowing it was made by computer, he was looking out for the “computer-ness” in it. This bears on the final part of my story below, for which there was no room in the Guardian. I think one can detect a certain amount of ‘anti-computer prejudice’ in the Guardian comments thread too, though that is perhaps no stronger than the general ‘anti-modernist’ bias. I’d be interested to see what Tom Service makes of the CD when it appears later this year. I carry no torch for Iamus as a composer, but I must admit that I’m growing fond of it and certainly feel it is a significant achievement. Anyway, there will be more on this soon – I’m writing a different piece on the work for Nature, to appear in August.
_______________________________________________________
As soon as you see the title of Iamus’s composition “Transits – Into an Abyss”, you know it’s going to be challenging, modernist stuff. The strings pile up discords, now spooky, now ominous. But if your tastes run to Bartók, Ligeti and Penderecki, you might like it. At least you have to admit that this bloke knows what he’s doing.
But this bloke doesn’t know anything at all. Iamus is a computer program. Until the London Symphony Orchestra was handed the score, no human had intervened in preparing this music.
“When we tell people that, they think it’s a trick”, says Francisco Vico, leader of the team at the University of Malaga who devised Iamus. “Some say they simply don’t believe us, others say it’s just creepy.” He anticipates that when Iamus’s debut CD is released in September, performed by top-shelf musicians including the LSO, it is going to disturb a lot of folk.
You can get a taste of Iamus’s oeuvre before then, because on 2 July some of Iamus’s compositions will be performed and streamed live from Malaga. The event is being staged to mark the 100th anniversary of the birth of Alan Turing, the man credited with more or less inventing the concept of the computer. It was Turing who devised the test to distinguish humans from artificial intelligence made famous by the opening sequence of Ridley Scott’s Blade Runner. And the performance will itself be a kind of Turing test: you can ask yourself whether you could tell, if you didn’t know, that this music was made by machine.
Iamus composes by mutating very simple starting material in a manner analogous to biological evolution. The evolving compositions each have a kind of musical core, a ‘genome’, which gradually becomes more complex. “Iamus generates an initial population of compositions automatically”, Vico explains, “but their genomes are so simple that they barely develop into a handful of notes, lasting just a few seconds. As evolution proceeds, mutations alter the content and size of this primordial genetic material, and we get longer and more elaborated pieces.” All the researchers specify at the outset is the rough length of the piece and the instruments it will use.
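To make that ‘evo-devo’ picture a little more concrete, here is a toy sketch in Python. It is emphatically not Iamus itself, whose internals I haven’t seen: the ‘genome’ here is just a list of (interval, duration) genes that starts almost empty and mostly grows under mutation, a trivial ‘development’ step unfolds it into notes, and the length-based fitness criterion is a placeholder of my own, since nothing is said about what, if anything, Iamus selects for.

```python
import random

# A toy illustration of the evolutionary scheme described above, not Iamus itself.
# A "genome" is a list of (scale-step interval, duration-in-beats) genes that starts
# almost empty; mutation mostly grows it, and a trivial "development" step unfolds
# it into notes. The length-based fitness is a placeholder of my own invention.

def develop(genome, root=60):
    """Unfold a genome into (MIDI pitch, duration) notes by walking a major scale."""
    scale = [0, 2, 4, 5, 7, 9, 11]
    degree, notes = 0, []
    for interval, duration in genome:
        degree += interval
        octave, step = divmod(degree, len(scale))
        notes.append((root + 12 * octave + scale[step], duration))
    return notes

def mutate(genome):
    g = list(genome)
    if random.random() < 0.6 or not g:           # bias towards growth: pieces get longer
        g.insert(random.randrange(len(g) + 1),
                 (random.randint(-3, 3), random.choice([0.5, 1.0, 2.0])))
    else:                                        # otherwise tweak an existing gene
        i = random.randrange(len(g))
        g[i] = (g[i][0] + random.choice([-1, 1]), g[i][1])
    return g

def fitness(genome, target_beats=16):
    """Placeholder criterion: closeness of the developed piece to a target length."""
    return -abs(sum(duration for _, duration in develop(genome)) - target_beats)

population = [[(0, 1.0)] for _ in range(20)]     # primordial one-note genomes
for generation in range(200):
    offspring = [mutate(g) for g in population]
    population = sorted(population + offspring, key=fitness, reverse=True)[:20]

for pitch, beats in develop(population[0]):      # print the 'fittest' piece as notes
    print(pitch, beats)
```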
“A single genome can encode many melodies”, explains composer Gustavo Díaz-Jerez of the Conservatory of the Basque Country in San Sebastian, who has collaborated with the Malaga team since the outset and is the pianist on the new recordings. “You find this same idea of a genome in the Western musical canon – that’s why the music makes sense.”
The computer doesn’t impose any particular aesthetic. Although most of its serious pieces are in a modern classical style, it can compose in other genres too, and for any set of instruments. The ‘darwinian’ composition process also lends itself to producing new variations of well-known pieces [PB: I’ve been sent some great variants of the Nokia ringtone] or merging two or more existing compositions to produce offspring – musical sex, you might say.
Using computers and algorithms – automated systems of rules – to make music has a long history. The Greek composer Iannis Xenakis did it in the 1960s, and in the following decade two Swedish composers devised an algorithm for creating nursery-rhyme melodies in the style of Swedish composer Alice Tegnér. In the 1980s computer scientist Kemal Ebcioglu created a program that harmonised chorales in the style of Bach.
As artificial intelligence and machine learning became more sophisticated, so did the possibilities for machine music: now computers could infer rules and guidelines from real musical examples, rather than being fed them to begin with. Computer scientist John ‘Al’ Biles devised an algorithm called GenJam that learns to improvise jazz. A trumpeter himself, Biles performs live alongside GenJam under the name the Al Biles Virtual Quintet, but admits that the algorithm is a rather indifferent player. The same is true of GenBebop, devised by cognitive scientists Lee Spector and Adam Alpern, which improvises solos in the style of Charlie Parker by ‘listening’ to him and iterating its own efforts under the ultimately less-than-discerning eye of an automated internal critic.
One of the most persuasive systems was the Continuator, devised by François Pachet at Sony’s Computer Science Laboratory in Paris. In a Turing test where the Continuator traded licks with an improvising human pianist, expert listeners were mostly unable to guess whether it was the human or the computer playing.
But these efforts still haven’t shown that a computer can make tolerable music from scratch. One of the best known attempts is ‘Emily Howell’, a program created by music professor David Cope. Yet Howell’s bland, arpeggiated compositions sound like a technically skilled child trying to ape Beethoven or Bach, or like Michael Nyman on a bad day: fine for elevators but not for the concert hall.
This is why Iamus – named after the mythical son of Apollo who could understand the language of birds – is different. This seems to be the first time that music composed purely by computer has been deemed good enough for top-class performers to play it. Díaz-Jerez admits that the LSO were “a little bit sceptical at the beginning, but were very surprised” by the quality of what they were being asked to play. The soprano Celia Alcedo, he says, “couldn’t believe the expressiveness of some of the lines” she was given to sing.
Lennox Mackenzie, the LSO’s chairman, had mixed feelings about the orchestral pieces. “I felt it was like a wall of sound”, he says. “If you put a colour to it, this music was grey. It went nowhere. It was too dense and massive, no instrument stuck out at any point. But at the end of it, I thought it was quite epic.”
“The other thing that struck me”, Mackenzie adds, “was that it was festooned with expression marks, which just seemed arbitrary and meaningless. My normal inclination is to delve into music and find out what it’s all about. But here I don’t think I’d find anything.” But he’s far from discouraging. “I didn’t feel antipathy towards it. It does have something. They should keep trying, I’d say.”
What is slightly disconcerting is that Iamus can produce this stuff endlessly: thousands of pieces, all valid and musically plausible, all fully notated and ready to play, faultless from a technical point of view, and “many of them great”, according to Díaz-Jerez. Such profligacy feels improper: if it’s that easy, can the music really be any good? Yet Díaz-Jerez thinks that the pieces are often better in some respects than those produced by some avant-garde composers, which might revel in their own internal logic but are virtually impossible to play. And crucially, different people have different favourites – it’s not as though the program just occasionally gets lucky and turns out something good.
How does a performer interpret these pieces, given that there’s no “intention” of the composer to look for? “Suppose I found a score in a library without knowing who wrote it”, says Díaz-Jerez. “I approach these pieces as I would that one – by analysing the score to see how it works.” In that respect, he sees no difference from deducing the structure of an intricate Bach fugue.
You can compare it with computer chess, says philosopher of music Stephen Davies of the University of Auckland in New Zealand. “People said computers wouldn't be able to show the same original thinking, as opposed to crunching random calculations. But now it’s hard to see the difference between people and computers with respect to creativity in chess. Music too is rule-governed in a way that should make it easily simulated.”
However, Iamus might face deeply ingrained prejudice. Brain-scanning studies by neuroscientists Stefan Koelsch and Nikolaus Steinbeis have shown that the same piece of music played to listeners elicits activity in the parts of the brain responsible for ascribing intentions to other humans if they are told that it was composed by a human but not if they are told it was computer-generated. In other words, it matters to our perceptions of expressiveness how we think the music was made. Perhaps Iamus really will need to be marketed as a pseudo-human to be taken seriously.
_______________________________________________________
As soon as you see the title of Iamus’s composition “Transits – Into an Abyss”, you know it’s going to be challenging, modernist stuff. The strings pile up discords, now spooky, now ominous. But if your tastes run to Bartók, Ligeti and Penderecki, you might like it. At least you have to admit that this bloke knows what he’s doing.
But this bloke doesn’t know anything at all. Iamus is a computer programme. Until the London Symphony Orchestra was handed the score, no human had intervened in preparing this music.
“When we tell people that, they think it’s a trick”, says Francisco Vico, leader of the team at the University of Malaga who devised Iamus. “Some say they simply don’t believe us, others say it’s just creepy.” He anticipates that when Iamus’s debut CD is released in September, performed by top-shelf musicians including the LSO, it is going to disturb a lot of folk.
You can get a taste of Iamus’s oeuvre before then, because on 2 July some of its compositions will be performed and streamed live from Malaga. The event is being staged to mark the 100th anniversary of the birth of Alan Turing, the man credited with more or less inventing the concept of the computer. It was Turing who devised the test for telling humans from artificial intelligence that was made famous by the opening sequence of Ridley Scott’s Blade Runner. And the performance will itself be a kind of Turing test: you can ask yourself whether you could tell, if you didn’t know, that this music was made by machine.
Iamus composes by mutating very simple starting material in a manner analogous to biological evolution. The evolving compositions each have a kind of musical core, a ‘genome’, which gradually becomes more complex. “Iamus generates an initial population of compositions automatically”, Vico explains, “but their genomes are so simple that they barely develop into a handful of notes, lasting just a few seconds. As evolution proceeds, mutations alter the content and size of this primordial genetic material, and we get longer and more elaborated pieces.” All the researchers specify at the outset is the rough length of the piece and the instruments it will use.
“A single genome can encode many melodies”, explains composer Gustavo Díaz-Jerez of the Conservatory of the Basque Country in San Sebastian, who has collaborated with the Malaga team since the outset and is the pianist on the new recordings. “You find this same idea of a genome in the Western musical canon – that’s why the music makes sense.”
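To make that idea concrete, here is a minimal toy sketch in Python of how such a developmental-evolutionary loop could work. It is emphatically not Iamus’s actual code: the integer genome, the way it ‘develops’ into notes and the crude fitness score are all invented for illustration.

import random

# Toy illustration of evolutionary, 'developmental' composition.
# Everything here is invented for the sake of the sketch: the genome
# encoding, the development rule and the stand-in fitness measure.

def random_genome():
    # Start from almost nothing: a couple of small integers.
    return [random.randint(0, 11) for _ in range(2)]

def develop(genome):
    # Unfold a genome into a melody (MIDI note numbers). Each gene is
    # read as a pair of interval steps, so a short genome can yield a
    # longer, structured line – a cartoon of 'one genome, many melodies'.
    melody, note = [], 60                      # start on middle C
    for gene in genome:
        for step in (gene % 5, -(gene % 3)):
            note = min(84, max(48, note + step))
            melody.append(note)
    return melody

def mutate(genome):
    # Mutation can alter a gene or enlarge the genome, so pieces grow
    # longer and more elaborate as evolution proceeds.
    child = list(genome)
    if random.random() < 0.5:
        child[random.randrange(len(child))] = random.randint(0, 11)
    else:
        child.insert(random.randrange(len(child) + 1), random.randint(0, 11))
    return child

def fitness(melody):
    # Crude stand-in for whatever constraints the real system enforces:
    # reward stepwise motion and a reasonable length.
    smooth = sum(abs(a - b) <= 2 for a, b in zip(melody, melody[1:]))
    return smooth + min(len(melody), 32)

# Evolve a small population of genomes for a few generations.
population = [random_genome() for _ in range(20)]
for generation in range(200):
    population.sort(key=lambda g: fitness(develop(g)), reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

best = max(population, key=lambda g: fitness(develop(g)))
print("genome:", best)
print("melody:", develop(best))

Run it and the two-gene seeds gradually swell into longer genomes whose developed melodies satisfy the toy constraints; the real system’s constraints and encodings are, of course, far richer.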
The computer doesn’t impose any particular aesthetic. Although most of its serious pieces are in a modern classical style, it can compose in other genres too, and for any set of instruments. The ‘darwinian’ composition process also lends itself to producing new variations of well-known pieces [PB: I’ve been sent some great variants of the Nokia ringtone] or merging two or more existing compositions to produce offspring – musical sex, you might say.
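The ‘musical sex’ idea translates just as directly: in the same toy integer encoding used in the sketch above, merging two pieces amounts to splicing their genomes. Again, this is a purely illustrative stand-in, not the real recombination operator.

import random

def crossover(genome_a, genome_b):
    # Splice two parent genomes at random cut points – a cartoon of
    # 'merging two compositions to produce offspring'.
    cut_a = random.randrange(1, len(genome_a))
    cut_b = random.randrange(1, len(genome_b))
    return genome_a[:cut_a] + genome_b[cut_b:]

# Two hand-written parent genomes in the same toy encoding.
parent_a = [3, 7, 2, 9]
parent_b = [11, 0, 5, 4]
print(crossover(parent_a, parent_b))   # e.g. [3, 7, 5, 4]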
Using computers and algorithms – automated systems of rules – to make music has a long history. The Greek composer Iannis Xenakis did it in the 1960s, and in the following decade two Swedish composers devised an algorithm for creating nursery-rhyme melodies in the style of their compatriot Alice Tegnér. In the 1980s computer scientist Kemal Ebcioglu created a programme that harmonised chorales in the style of Bach.
As artificial intelligence and machine learning became more sophisticated, so did the possibilities for machine music: now computers could infer rules and guidelines from real musical examples, rather than being fed them to begin with. Computer scientist John ‘Al’ Biles devised an algorithm called GenJam that learns to improvise jazz. A trumpeter himself, Biles performs live alongside GenJam as the Al Biles Virtual Quintet, but admits that the algorithm is a rather indifferent player. The same is true of GenBebop, devised by cognitive scientists Lee Spector and Adam Alpern, which improvises solos in the style of Charlie Parker by ‘listening’ to him and iterating its own efforts under the ultimately less-than-discerning eye of an automated internal critic.
One of the most persuasive systems was the Continuator, devised by François Pachet at Sony’s Computer Science Laboratory in Paris. In a Turing test where the Continuator traded licks with an improvising human pianist, expert listeners were mostly unable to guess whether it was the human or the computer playing.
But these efforts still haven’t shown that a computer can make tolerable music from scratch. One of the best-known attempts is ‘Emily Howell’, a programme created by music professor David Cope. Yet Howell’s bland, arpeggiated compositions sound like a technically skilled child trying to ape Beethoven or Bach, or like Michael Nyman on a bad day: fine for elevators but not for the concert hall.
This is why Iamus – named after the mythical son of Apollo who could understand the language of birds – is different. This seems to be the first time that music composed purely by computer has been deemed good enough for top-class performers to play. Díaz-Jerez admits that the LSO were “a little bit sceptical at the beginning, but were very surprised” by the quality of what they were being asked to play. The soprano Celia Alcedo, he says, “couldn’t believe the expressiveness of some of the lines” she was given to sing.
Lennox Mackenzie, the LSO’s chairman, had mixed feelings about the orchestral pieces. “I felt it was like a wall of sound”, he says. “If you put a colour to it, this music was grey. It went nowhere. It was too dense and massive, no instrument stuck out at any point. But at the end of it, I thought it was quite epic.”
“The other thing that struck me”, Mackenzie adds, “was that it was festooned with expression marks, which just seemed arbitrary and meaningless. My normal inclination is to delve into music and find out what it’s all about. But here I don’t think I’d find anything.” But he’s far from discouraging. “I didn’t feel antipathy towards it. It does have something. They should keep trying, I’d say.”
What is slightly disconcerting is that Iamus can produce this stuff endlessly: thousands of pieces, all valid and musically plausible, all fully notated and ready to play, faultless from a technical point of view, and “many of them great”, according to Díaz-Jerez. Such profligacy feels improper: if it’s that easy, can the music really be any good? Yet Díaz-Jerez thinks that the pieces are often better in some respects than those produced by some avant-garde composers, which might revel in their own internal logic but are virtually impossible to play. And crucially, different people have different favourites – it’s not as though the programme just occasionally gets lucky and turns out something good.
How does a performer interpret these pieces, given that there’s no “intention” of the composer to look for? “Suppose I found a score in a library without knowing who wrote it”, says Díaz-Jerez. “I approach these pieces as I would that one – by analysing the score to see how it works.” In that respect, he sees no difference from deducing the structure of an intricate Bach fugue.
You can compare it with computer chess, says philosopher of music Stephen Davies of the University of Auckland in New Zealand. “People said computers wouldn’t be able to show the same original thinking, as opposed to crunching random calculations. But now it’s hard to see the difference between people and computers with respect to creativity in chess. Music too is rule-governed in a way that should make it easily simulated.”
However, Iamus might face deeply ingrained prejudice. Brain-scanning studies by neuroscientists Stefan Koelsch and Nikolaus Steinbeis have shown that the same piece of music played to listeners elicits activity in the parts of the brain responsible for ascribing intentions to other humans if they are told that it was composed by a human but not if they are told it was computer-generated. In other words, it matters to our perceptions of expressiveness how we think the music was made. Perhaps Iamus really will need to be marketed as a pseudo-human to be taken seriously.