I have a Muse piece on Nature News about a forthcoming paper in Nature on cooperation and punishment in game theory, by Karl Sigmund and colleagues. It’s quite closely related to recent work by Dirk Helbing, also discussed briefly below. There are many interesting aspects to Dirk’s papers, which I can’t touch on here – not least, the fact that the outcomes of these games can be dependent on the spatial configuration of the players. Here is the pre-edited article.
***********************************************************************
The punishment of anti-social behaviour seems necessary for a stable society. But how should it be policed, and how severe should it be? Game theory offers some answers.
The fundamental axis of political thought in democratic nations could be said to concern the ‘size’ of government. How much or how little should the state interfere in our lives? At one end of the axis sits political philosopher Thomas Hobbes, whose state is so authoritarian – an absolute monarchy – that it barely qualifies as a democracy at all once the ruler is elected. At the other extreme we have Peter Kropotkin, the Russian revolutionary anarchist who argued in Mutual Aid (1902) that people can organize themselves harmoniously without any government at all.
At least, that’s one view. What’s curious is that both extremes of this spectrum can be viewed as either politically right- or left-wing. Hobbes’ domineering state could equally be Stalin’s, while the armed, vigilante world of extreme US libertarianism (and Glenn Beck) looks more like the brutal ‘State of Nature’ that Hobbes feared – everyone for themselves – than Kropotkin’s cosy commune.
But which works best? I’m prepared to guess that most Nature readers, being benign moderates, will cluster around the middle ground defined by John Stuart Mill, who argued that government is needed to maintain social stability, but should intrude only to the extent of preventing individuals from harming others. Laws and police forces, in this view, exist to ensure that you don’t pillage and murder, not to ensure that you have moral thoughts.
If only it were that simple. The trouble is that ‘harming others’ is a slippery concept, illustrated most profoundly by the problem of the ‘commons’. If you drop litter, if you don’t pay your taxes, if you tip your sewage into the river, it’s hard to pinpoint whom your actions ‘harm’, or how, if anyone at all – but if we all do it, society suffers. So laws and penal codes must not only prevent or punish obvious crimes like murder, but also discourage free-riders who cheat on the mechanisms that promote social order.
How much to punish, though, and how to implement it? If you steal, should you temporarily lose your liberty, or permanently lose your hand? And what works best in promoting cooperative behaviour: the peer pressure of social ostracism, or the state pressure of police arrest?
Experiments in behavioural economics, in particular ‘public goods games’ in which participants seek to maximize their rewards through competition or cooperation, have shown that people care about punishment to an ‘irrational’ degree [1]. Say, for example, players are asked to put some of their money into a collective pot, which is then multiplied and divided equally among the players. The more you all put in, the better the payoff. But anyone who doesn’t contribute still gets a full share of the pot – so there’s a temptation to free-ride.
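For the curious, here’s a minimal sketch in Python of a single round of such a game. The group size, the multiplication factor and the endowment are made-up values for illustration, not numbers from the experiments:

```python
def public_goods_payoffs(contributions, r=3.0, endowment=10.0):
    """Each player keeps whatever they don't contribute; the pot is
    multiplied by r and split equally among all players."""
    pot = sum(contributions) * r
    share = pot / len(contributions)
    return [endowment - c + share for c in contributions]

# Four players: three contribute fully, one free-rides.
print(public_goods_payoffs([10, 10, 10, 0]))
# -> [22.5, 22.5, 22.5, 32.5]: the free-rider does best individually,
# even though full cooperation ([10, 10, 10, 10] -> 30.0 each) would
# leave everyone better off.
```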
If players are allowed to fine free-riders, but at a cost to themselves, they will generally do it even if they make a loss: they care more about fairness than profit. Now, however, the problem is that there’s a second-order temptation to free-ride: you contribute to the pot but leave others to shoulder the cost of sanctioning the cheaters who don’t. There’s an infinite regress of opportunities to free-ride, which can eventually undermine cooperation.
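To see the second-order problem in numbers, the sketch above can be extended with peer punishment. The fine beta and the punisher’s cost gamma are again my own illustrative assumptions:

```python
def peer_punishment_payoffs(strategies, r=3.0, endowment=10.0,
                            beta=4.0, gamma=1.0):
    """strategies: 'C' cooperate, 'D' defect, 'P' cooperate-and-punish."""
    n = len(strategies)
    contributions = [endowment if s in ('C', 'P') else 0 for s in strategies]
    share = sum(contributions) * r / n
    payoffs = [endowment - c + share for c in contributions]
    punishers = [i for i, s in enumerate(strategies) if s == 'P']
    defectors = [i for i, s in enumerate(strategies) if s == 'D']
    for i in defectors:
        payoffs[i] -= beta * len(punishers)   # fined once per punisher
    for i in punishers:
        payoffs[i] -= gamma * len(defectors)  # each fine costs the punisher
    return payoffs

print(peer_punishment_payoffs(['P', 'P', 'C', 'D']))
# -> [21.5, 21.5, 22.5, 24.5]: the plain cooperator 'C' out-earns the
# punishers 'P' by letting them shoulder the cost of sanctioning –
# the second-order free-riding problem in miniature.
```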
But what if the players can share the cost of punishment by contributing to a pool in advance – equivalent, say, to paying for a police force and penal service? This decreases the overall profits – it costs society – because the ‘punishment pool’ is wasted if no one actually cheats. Yet in a new paper in Nature [2], game theorist Karl Sigmund of the University of Vienna and his colleagues show in a computer model that pool-punishment can nevertheless evolve as the preferred option over peer-punishment as a way of policing the game and promoting cooperation: a preference, you might say, for a state police force as opposed to vigilante justice. This arrangement is, however, self-organized à la Kropotkin, not imposed from the top down à la Hobbes: pool-punishment simply emerges as the most successful (that is, the most stable) strategy.
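Pool punishment changes the bookkeeping: punishers pay a fee up front whether or not anyone cheats. Here is a hedged sketch along the lines above, with parameters again invented for illustration rather than taken from Sigmund and colleagues’ model:

```python
def pool_punishment_payoffs(strategies, r=3.0, endowment=10.0,
                            fee=1.0, fine=4.0):
    """strategies: 'C' cooperate, 'D' defect,
    'O' cooperate-and-pay-into-the-punishment-pool."""
    n = len(strategies)
    contributions = [endowment if s in ('C', 'O') else 0 for s in strategies]
    share = sum(contributions) * r / n
    payoffs = [endowment - c + share for c in contributions]
    poolers = [i for i, s in enumerate(strategies) if s == 'O']
    for i in poolers:
        payoffs[i] -= fee                      # paid even if no one defects
    for i, s in enumerate(strategies):
        if s == 'D':
            payoffs[i] -= fine * len(poolers)  # fined from the pool
    return payoffs

# With no defectors, the fee is pure waste (and the plain cooperator
# 'C' still free-rides on it):
print(pool_punishment_payoffs(['O', 'O', 'O', 'C']))  # [29.0, 29.0, 29.0, 30.0]
# But the sanction is in place before anyone is tempted to cheat:
print(pool_punishment_payoffs(['O', 'O', 'O', 'D']))  # [21.5, 21.5, 21.5, 20.5]
```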
Of course, we know that what often distinguishes these things in real life is that state-sponsored policing is more moderate and less arbitrary or emotion-led than vigilante retribution. That highlights another axis of political opinion: are extreme punishments more effective at suppressing defection than less severe ones? A related modelling study of public-goods games by Dirk Helbing of ETH in Zürich and his coworkers, soon to be published in the New Journal of Physics [3] and elaborated in another recent paper [4], suggests that the level of cooperation may depend on the strength of punishment in subtle, non-intuitive ways. For example, above a critical punishment (fine) threshold, cooperators who punish can gain strength by sticking together, eventually crowding out both defectors and non-punishing cooperators (second-order free riders). But if punishment is carried out not by cooperators but by other defectors, too high a fine is counterproductive and reduces cooperation. Cooperation can also be created by an ‘unholy alliance’ of cooperators and defectors who both punish.
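For a flavour of how such spatial models work, here is a drastically simplified sketch: players sit on a lattice and occasionally imitate a better-earning neighbour. The update rule and all parameter values are my assumptions, not those of Helbing and colleagues:

```python
import random

# Strategies: 'D' defect, 'C' cooperate, 'P' cooperate and punish.
SIZE, R, COST, FINE, PUNISH_COST = 20, 3.0, 1.0, 0.6, 0.3
grid = [[random.choice('DCP') for _ in range(SIZE)] for _ in range(SIZE)]

def neighbours(x, y):
    """Four nearest neighbours on a lattice with wrap-around edges."""
    return [((x + dx) % SIZE, (y + dy) % SIZE)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]

def payoff(x, y):
    """Public-goods payoff of site (x, y) in the group centred on it,
    plus any fines paid and punishment costs incurred."""
    group = [(x, y)] + neighbours(x, y)
    cooperators = sum(grid[i][j] in 'CP' for i, j in group)
    punishers = sum(grid[i][j] == 'P' for i, j in group)
    defectors = sum(grid[i][j] == 'D' for i, j in group)
    p = R * COST * cooperators / len(group)
    if grid[x][y] in 'CP':
        p -= COST                     # cooperators pay to contribute
    if grid[x][y] == 'D':
        p -= FINE * punishers         # defectors are fined by punishers
    if grid[x][y] == 'P':
        p -= PUNISH_COST * defectors  # punishing is itself costly
    return p

for _ in range(20000):                # asynchronous imitation updates
    x, y = random.randrange(SIZE), random.randrange(SIZE)
    nx, ny = random.choice(neighbours(x, y))
    if payoff(nx, ny) > payoff(x, y):
        grid[x][y] = grid[nx][ny]     # copy a better-earning neighbour

print({s: sum(row.count(s) for row in grid) for s in 'DCP'})
```

Because imitation is local, punishers who happen to sit next to one another shield each other from exploitation – which is why, above a critical fine, clusters of punishing cooperators can spread.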
Why would defectors punish other defectors? This behaviour sounds bizarre, but is well documented experimentally [5], and familiar in real life: there are both hypocritical ‘punishing defectors’ (think of TV evangelists whose condemnation of sexual misdemeanours ignores their own) and ‘sincere’ ones, who deplore certain types of cheating while practising others.
One of the most important lessons of these game-theory models in recent years is that the outcomes are not necessarily permanent or absolute. What most people (perhaps even Glenn Beck) want is a society in which people cooperate. But different strategies for promoting this have different vulnerabilities to an invasion of defectors. And strategies evolve: prolonged cooperation might erode a belief in the need for (costly) policing, opening the way for a defector take-over. Which is perhaps to say that public policy should be informed but not determined by computer models. As Stephen Jay Gould has said, ‘There are no shortcuts to moral insight’ [6].
References
[1] Fehr, E. & Gächter, S. Am. Econ. Rev. 90, 980-994 (2000).
[2] Sigmund, K., De Silva, H., Traulsen, A. & Hauert, C. Nature doi:10.1038/nature09203 (2010).
[3] Helbing, D., Szolnoki, A., Perc, M. & Szabó, G. New J. Phys. (in press); see http://arxiv.org/abs/1007.0431 (2010).
[4] Helbing, D., Szolnoki, A., Perc, M. & Szabó, G. PLoS Comput. Biol. 6(4), e1000758 (2010).
[5] Shinada, M., Yamagishi, T. & Omura, Y. Evol. Hum. Behav. 25, 379-393 (2004).
[6] Gould, S. J. Natural History 106 (6), 12-21 (1997).