Wednesday, July 28, 2010

A new kind of economics


This is the first of the pieces I've written on the back of a workshop on agent-based modelling of the economy that I attended at the end of June. It appears (in edited form) in the August issue of Prospect. I have also written on this for the Economist - will post that shortly. And I am writing on the more general issue of large-scale simulation of the economy and other social systems for New Scientist.

***************************************************

Critics of conventional economic theory have never had it so good. The credit crunch has left the theory embraced by most of the economics profession a sitting duck. Using the equations of the most orthodox theoretical framework – so-called dynamic stochastic general equilibrium (DSGE) models – Federal Reserve governor Frederic Mishkin forecast in the summer of 2007 that the banking problems triggered by stagnation of the US housing market would be a minor blip. The story that unfolded subsequently, culminating in September 2008 in the near-collapse of the global financial market, seemed to represent the kind of falsification that would bury any theory in the natural sciences.

But it has not done so here, and probably will not. How come? The Nobel laureate Robert Lucas, who advocated the replacement of Keynesian economic models with DSGE models in the 1970s, has explained why: this theory is explicitly not designed to handle crashes, so of course it will not predict them. That’s not a shortcoming of the models, Lucas says, but a reflection of the stark reality that crashes are inherently unpredictable by this or any other theory. They are aberrations, lacunae in the laws of economics.

You can see his point. Retrospective claims to have foreseen the crisis amount to little more than valid but generalized concerns about the perils of prosperity propped up by easy credit, or of complex financial instruments whose risks are opaque even to those using them. No one forecast the timing, direction or severity of the crash – and how could they, given that the debts directly tied up in ‘toxic’ sub-prime mortgage defaults were relatively minor?

But this pessimistic position is under challenge. For Lucas is wrong; there are models of financial markets that do generate crashes. Fluctuations ranging from the quotidian to the catastrophic are an intrinsic feature of some models that dispense with the simplifying premises of DSGE and instead try to construct market behaviour from the bottom up. They create computer simulations of large numbers of ‘agents’ – individuals who trade with one another according to specified decision-making rules, while responding to each other’s decisions. These so-called agent-based models (ABMs) take advantage of the capacity of modern computers to simulate complex interactions between vast numbers of agents. The approach has already been used successfully to understand and predict traffic flow and pedestrian movements – here the agents (vehicles or people) are programmed to move to their destination at a preferred speed unless they must slow down or veer to avoid a collision – as well as to improve models of contagion in disease epidemics.
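
To make the recipe concrete, here is a minimal sketch of how such a simulation is typically structured – my own toy example, not one of the models discussed below; the Trader class, its valuation rule and the price-impact parameter are all illustrative assumptions. Each agent follows a simple decision rule, and the price emerges from their combined orders:

```python
import random

class Trader:
    """One agent: a private estimate of fair value, and a simple trading rule."""
    def __init__(self):
        self.valuation = random.gauss(100.0, 5.0)  # private 'fair price' estimate

    def decide(self, price):
        """Buy below the agent's valuation, sell above it, otherwise hold."""
        if price < self.valuation:
            return 1    # buy one unit
        if price > self.valuation:
            return -1   # sell one unit
        return 0

def simulate(n_agents=1000, n_steps=200, impact=0.05):
    """Each step, the aggregate of individual orders moves the market price."""
    agents = [Trader() for _ in range(n_agents)]
    price, history = 100.0, []
    for _ in range(n_steps):
        excess = sum(a.decide(price) for a in agents)
        price *= 1 + impact * excess / n_agents  # price rises with net demand
        history.append(price)
    return history

if __name__ == "__main__":
    prices = simulate()
    print(prices[:3], "...", prices[-3:])
```

Real ABMs replace this fixed rule with learning, imitation and institutional constraints, but the bottom-up structure – individual decisions aggregating into market-level behaviour – is the same.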

A handful of economists, along with interlopers from the natural sciences, believe that agent-based models offer the best hope of understanding the economy in all its messy glory, rather than just the decorous aspects addressed by conventional theories. At a workshop in Virginia in June, I heard how ABMs might help us learn the lessons of the credit crunch, anticipate and guard against the next one, and perhaps even offer a working model of the entire economic system.

Some aspects of ABMs are so obviously an improvement on conventional economic theories that outsiders may find it bizarre that they are still marginalized. Agents, like real traders, can behave in diverse ways. They can learn from experience. They are affected by each other’s actions, potentially leading to the herd behaviour that undoubtedly afflicts markets. ABMs, unlike DSGE models, can include institutions such as banks (a worrying omission, you might imagine, in models of financial markets). Some of these factors can be incorporated into orthodox theories, but not easily or transparently, and often they are not.

What upsets traditional economists most, however, is that ABMs are ‘non-equilibrium’ models, which means that they generally never settle into a steady state in which prices adjust until markets ‘clear’, with supply perfectly matched to demand. Conventional economic thinking has, more or less since Adam Smith, assumed the reality of this platonic ideal, which is merely ruffled by external ‘shocks’ such as political events and policies. In its most simplistic form, this perfect market demands laissez-faire free trade, and is hindered only by regulation and intervention.
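
In textbook notation – stated here only to pin down the term – a market ‘clears’ when there is a price $p^*$ at which supply equals demand:

$$ S(p^*) = D(p^*) $$

DSGE models assume the economy sits at, or fluctuates narrowly around, such a point; the contention of the agent-based modellers is that real markets may never reach it at all.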

Even though orthodox theorists acknowledge that ‘market imperfections’ cause deviations from this ideal – market failures – that very terminology gives the game away. In ABMs, ‘imperfections’ and ‘failures’ are generally a natural, emergent feature of the more realistic ingredients of the models. This implies a totally different view of how the economy operates. For example, feedbacks such as herd-like trading behaviour can create bubbles, in which commodity prices soar on a wave of optimism, and crashes, when panic sweeps across the trading floor. It seems clear that such amplifying processes turned a downturn of the US housing market into a freezing of credit throughout the entire banking system.
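
As a toy illustration of that feedback – my own sketch, loosely in the spirit of Alan Kirman’s 1993 ‘ants’ model of recruitment, not a model from the workshop – consider agents who hold an optimistic or pessimistic view of the market and mostly imitate whomever they happen to meet, only rarely changing their minds independently:

```python
import random

def herding_market(n=500, steps=20000, eps=0.002, seed=1):
    """Kirman-style herding: agents are 'optimists' or 'pessimists' and
    mostly imitate whoever they meet; rarely they switch on their own.
    Returns the optimist fraction over time, a crude proxy for price."""
    random.seed(seed)
    optimists = n // 2
    series = []
    for _ in range(steps):
        is_optimist = random.random() < optimists / n  # pick a random agent
        if random.random() < eps:
            optimists += -1 if is_optimist else 1      # independent switch
        else:
            meets_optimist = random.random() < optimists / n
            if is_optimist != meets_optimist:          # imitate the other
                optimists += -1 if is_optimist else 1
        series.append(optimists / n)
    return series

if __name__ == "__main__":
    s = herding_market()
    print(f"optimist share ranged from {min(s):.2f} to {max(s):.2f}")
```

With nothing external changing, the optimist share lurches between prolonged bullish and bearish regimes – bubble- and crash-like swings generated entirely by the imitation feedback.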

What made the Virginia meeting, sponsored by the US National Science Foundation, unusual is that it was relatively heedless of these battle lines between conventional and alternative thinkers. Committed agent-based modellers mixed with researchers from the Federal Reserve, the Bank of England and the Rand Corporation, specialists in housing markets and policy advisers. The goal was both to unravel the lessons of the credit crunch and to discuss the feasibility of making immense ABMs with genuine predictive capability. That would be a formidable enterprise, requiring the collaboration of many different experts and probably costing tens of millions of dollars. Even with the resources, it would probably take at least five years to have a model up and running.

Once that would have seemed a lot to gamble. Now, with a bill from the crisis running to trillions (and the threat of more to come), to refuse this investment would border on the irresponsible. Could such a model predict the next crisis, though? That’s the wrong question. The aim – and there is surely no president, chancellor, or lending or investment bank CEO who does not now crave this – would be to identify where the systemic vulnerabilities lie, what regulations might mitigate them (and which would do the opposite), and whether early-warning systems could spot danger signs. We’ve done it for climate change. Does anyone now doubt that economic meltdown poses comparable risks and costs?

2 comments:

Peter said...

The aim "would be to identify where the systemic vulnerabilities lie, what regulations might mitigate them (and which would do the opposite), and whether early-warning systems could spot danger signs."

The problem here would be, I think, that for any given piece of new legislation one could not know whether the agent models we currently use would give an adequate prediction of the response to the given stimulus (although some people will be able to guess better than others, just as interpretation of medium-range weather forecasts is something of an art). If we were to omit certain kinds of agent-to-agent communication, for example, which we might do to ensure tractability, there would be some legislation for which prediction would go far awry, while other legislation would give no problems.

It's not clear to me whether the complexity of economic models is greater than or less than that of medium-range weather forecasts. Ditto for usefulness. Any views?

Philip Ball said...

Peter,
It's true, and an acknowledged problem, that agent behaviour is unlikely to stay fixed when the boundaries (such as regulation) change (see Goodhart's Law). That might be handled in ad hoc ways, but ultimately the hope is to build enough psychological or even neurological complexity into agents to allow for it. That is a long way off! On the other hand, conventional models don't have a hope of capturing such behaviour either.
I think it is probably the case with economics as with the weather that one can distinguish between short-to-medium-term unpredictability (weather forecasts) and long-term trends (climate prediction). So there are doubtless some long-term things one can foresee, but not in terms of the details.