Here’s the pre-edited version of my latest Muse for Nature News. The paper I discuss here is very long but also very ambitious, and well worth a read.
**********************************************************************
Bad risk management contributed to the current financial crisis. Two economists believe the situation could be improved by gaining a deeper understanding of what is not known.
Donald Rumsfeld is an unlikely prophet of risk analysis, but that may be how posterity will anoint him. His remark about ‘unknown unknowns’ was derided at the time as a piece of meaningless obfuscation, but more careful reflection suggests he had a point. It is one thing to recognize the gaps and uncertainties in our knowledge of a situation, another to acknowledge that entirely unforeseen circumstances might utterly change the picture. (Whether you subscribe to Rumsfeld’s view that the challenges in managing post-invasion Iraq were unforeseeable is another matter.)
Contemporary economics can’t handle the unknown unknowns – or more precisely, it confuses them with known unknowns. Financial speculation is risky by definition, yet the danger is not that the risks exist, but that the highly developed calculus of risk in economic theory – some of which has won Nobel prizes – gives the impression that they are under control.
The reasons for the current financial crisis have been picked over endlessly, but one common view is that it involved a failure in risk management. It is the models for handling risk that Nobel laureate economist Joseph Stiglitz seemed to have in mind when he remarked in 2008 that ‘Many of the problems our economy faces are the result of the use of misguided models. Unfortunately, too many [economic policy-makers] took the overly simplistic models of courses in the principles of economics (which typically assume perfect information) and assumed they could use them as a basis for economic policy’ [1].
Facing up to these failures could prompt the bleak conclusion that we know nothing. That’s the position taken by Nassim Nicholas Taleb in his influential book The Black Swan [2], which argues that big disruptions in the economy can never be foreseen, and yet are not anything like as rare as conventional theory would have us believe.
But in a preprint on arXiv, Andrew Lo and Mark Mueller of MIT’s Sloan School of Management offer another view [3]. They say that what we need is a proper taxonomy of risk – not unlike, as it turns out, Rumsfeld’s infamous classification. In this way, they say, we can unite risk assessment in economics with the way uncertainties are handled in the natural sciences.
The current approach to uncertainty in economics, say Lo and Mueller, suffers from physics envy. ‘The quantitative aspirations of economists and financial analysts have for many years been based on the belief that it should be possible to build models of economic systems – and financial markets in particular – that are as predictive as those in physics,’ they point out.
Much of the foundational work in modern economics took its lead explicitly from physics. One of its principal architects, Paul Samuelson, admitted that his seminal book Foundations of Economic Analysis [4] was inspired by the work of mathematical physicist Edwin Bidwell Wilson, a protégé of the pioneer of statistical physics Willard Gibbs.
Physicists were by then used to handling the uncertainties of thermal noise and Brownian motion, which create a Gaussian or normal distribution of fluctuations. The theory of Brownian random walks was in fact first developed by mathematician Louis Bachelier in 1900 to describe fluctuations in economic prices.
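(A Bachelier-style walk is simple enough to simulate. The few lines of Python below are my own illustration, not anything from the paper; the step size and horizon are arbitrary choices.)

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Bachelier-style random walk: each price change is an independent
# draw from a zero-mean Gaussian (normal) distribution.
n_steps = 250   # roughly one trading year of daily moves (arbitrary)
sigma = 1.0     # standard deviation of a single daily change (arbitrary)
changes = rng.normal(loc=0.0, scale=sigma, size=n_steps)
prices = 100.0 + np.cumsum(changes)   # a walk starting from a level of 100

# Under the Gaussian assumption, large excursions are vanishingly rare:
# any single step exceeds 3 sigma with probability of only about 0.27%.
print(f"largest single move: {np.abs(changes).max():.2f} sigma")
```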
Economists have known since the 1960s that these fluctuations don’t in fact fit a Gaussian distribution at all, but are ‘fat-tailed’, with a greater proportion of large-amplitude excursions. But many standard theories have failed to accommodate this, most notably the celebrated Black-Scholes formula used to calculate option prices, which is mathematically equivalent to the ‘heat equation’ in physics.
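(For reference, the Black-Scholes formula itself is compact. Here is a standard textbook implementation in Python – my illustration, not code from the paper – with the Gaussian assumption flagged where it enters.)

```python
from math import exp, log, sqrt
from scipy.stats import norm   # norm.cdf is the Gaussian cumulative distribution

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option.

    S: current price of the underlying; K: strike price;
    T: time to expiry in years; r: risk-free interest rate;
    sigma: annualized volatility of the underlying.
    The Gaussian assumption enters through norm.cdf: log-price
    changes are taken to be normally distributed -- exactly what
    the fat-tail evidence calls into question.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

# Example: an at-the-money call, one year to expiry, 20% volatility.
print(black_scholes_call(S=100.0, K=100.0, T=1.0, r=0.05, sigma=0.20))  # ~10.45
```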
But incorrect statistical handling of economic fluctuations is a minor issue compared with the failure of practitioners to distinguish fluctuations that are in principle modellable from those that are more qualitative – to distinguish, as Lo and Mueller put it, trading decisions (which need maths) from business decisions (which need experience and intuition).
The conventional view of economic fluctuations – that they are due to ‘external’ shocks to the market, delivered for example by political events and decisions – has some truth in it. And these external factors can’t yet be meaningfully factored into the equations. As the authors note, from July to October 2008, in the face of increasingly negative prospects for the financial industry, the US Securities and Exchange Commission intervened to impose restrictions on certain companies in the financial services sector. ‘This unanticipated reaction by the government’, say Lo and Mueller, ‘is an example of irreducible uncertainty that cannot be modeled quantitatively, yet has substantial impact on the risks and rewards of quantitative strategies.’
They propose a five-tiered categorization of uncertainty, from the complete certainty of Newtonian mechanics, through noisy systems and those that we are forced to describe statistically because of incomplete knowledge about deterministic processes (as in coin tossing), to ‘irreducible uncertainty’, which they describe as ‘a state of total ignorance that cannot be remedied by collecting more data, using more sophisticated methods of statistical inference or more powerful computers, or thinking harder and smarter.’
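(One way to see that the hierarchy is more than a list is to write it down as a simple classification. The toy Python sketch below is my own; the tier labels paraphrase the description above rather than quote the authors’ exact terminology.)

```python
from enum import IntEnum

class Uncertainty(IntEnum):
    """Five tiers of uncertainty, paraphrased from the description
    above; the labels here are illustrative, not the authors' own."""
    COMPLETE_CERTAINTY = 1    # e.g. idealized Newtonian mechanics
    NOISE = 2                 # a known stochastic process with known parameters
    STATISTICAL = 3           # deterministic but described statistically (coin tossing)
    PARTIALLY_REDUCIBLE = 4   # more data or better models help, but only so far
    IRREDUCIBLE = 5           # total ignorance that no data or computation remedies

def trust_a_quantitative_model(level: Uncertainty) -> bool:
    # Crudely: quantitative modelling is trusted only up to the
    # fully statistical tier; beyond that, the numbers mislead.
    return level <= Uncertainty.STATISTICAL
```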
The authors think that this is more than just an enumeration of categories, because it provides a framework for how to think about uncertainties. ‘It is possible to “believe” a model at one level of the hierarchy but not at another’, they say. And they sketch out ideas for handling some of the more challenging unknowns, as for example when qualitatively different models may apply to the data at different times.
‘By acknowledging that financial challenges cannot always be resolved with more sophisticated mathematics, and incorporating fear and greed into models and risk-management protocols explicitly rather than assuming them away’, Lo and Mueller say, ‘we believe that the financial models of the future will be considerably more successful, even if less mathematically elegant and tractable.’
They call for more support of postgraduate training in economics, to create a cadre of better-informed practitioners more alert to the limitations of the models. That would help; but if we want to eliminate the ruinous false confidence engendered by the clever, physics-aping maths of economic theory, why not make it standard practice to teach everyone who studies economics, at any level, that these models capture only specific and highly restricted varieties of risk and uncertainty?
References
1. Stiglitz, J. New Statesman, 16 October 2008.
2. Taleb, N. N. The Black Swan (Allen Lane, London, 2007).
3. Lo, A. W. & Mueller, M. T. Preprint at http://www.arxiv.org/abs/1003.2688 (2010).
4. Samuelson, P. A. Foundations of Economic Analysis (Harvard University Press, Cambridge, 1947).
1 comment:
Taleb, in The Black Swan and in The Fourth Quadrant, also talks at length about taxonomies of risk. The part of The Black Swan I think most about is the part where he talks about how to decide what kinds of risks to take. He favors wagers with low costs but high positive payoffs, and gives several non-lottery examples. I don't remember any table or graph that puts these all together, but it's easy enough to imagine one.