Models.Behaving.Badly: Why Confusing Illusion with Reality can be a Disaster, on Wall Street and in Life
Emanuel Derman
First published on February 22, 2012
Nice Title. Shame about the song
Emanuel Derman is a “quant” of illustrious pedigree: not only a 20-year veteran of Goldman Sachs (say what you like about the vampire squid but over the last couple of decades Goldman’s financial analysts have consistently been the smartest guys in the room), but also a close colleague of Nobel laureate Fischer Black, co-inventor with Myron Scholes of the (in)famous Black-Scholes option pricing model.
Given that the motion before the house concerns misbehaving financial models you might expect some fairly keen insights on this topic: it has already been well documented that Black-Scholes doesn’t work awfully well when the market is in a state of extreme stress — that is, precisely when you want it working awfully well. In fact, in those situations Black-Scholes can create havoc, and memorably did during the Russian crisis of 1998, when the pioneering hedge fund Long-Term Capital Management, of which Myron Scholes was a partner, catastrophically failed.
But this isn’t Emanuel Derman’s interest: the specific inadequacy of Black-Scholes (that it assumes market events occur in isolation from each other and are therefore arranged according to a “normal” probability distribution) rates barely a mention. Derman’s view is that reliance on any financial model will end in tears, simply because models are poor metaphors which are not grounded in the same reality as the sciences whose language they mimic.
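For the curious, the model at issue can be stated compactly. Here is a minimal sketch of the Black-Scholes call price, with illustrative parameter values of my own choosing (not anything from the book), showing where the normality assumption enters: the cumulative-normal terms assign vanishingly small odds to exactly the extreme moves that markets in crisis deliver.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    """Black-Scholes price of a European call.
    S: spot, K: strike, r: risk-free rate, sigma: volatility, T: years."""
    d1 = (log(S / K) + (r + sigma**2 / 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Illustrative numbers, not market data: at-the-money call, 20% vol, one year.
print(round(bs_call(100, 100, 0.05, 0.2, 1.0), 2))   # ≈ 10.45

# The tail problem in one line: under normality, a 5-sigma daily move is
# roughly a once-in-fourteen-thousand-years event; real markets manage
# several such moves per decade.
print(1.0 - norm_cdf(5.0))
```

The simplicity is the point: everything interesting about a crisis is buried in the assumption that returns are normally distributed with a constant sigma.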
Hmm.
Benoit Mandelbrot, whose excellent book The (Mis)Behaviour of Markets clearly outlines the “tail risk” inadequacy of Black-Scholes, recognises that it is the market, not the model, that tends to misbehave. A model can’t be blamed for failing to work when misapplied. Guns don’t kill; the people holding them do.
This is a narrow example of a broader principle which (counterintuitively) is true of all scientific theories: they only work within pre-defined conditions in carefully controlled experimental environments. Even Newton’s basic laws of mechanics only hold true with zero friction, zero gravity, infinite elasticity, infinite regularity and a total vacuum, conditions that in real life never prevail. “Real life” experiments are thus indulged with a margin of error: that a heartily-struck cricket ball does not describe precisely the trajectory Newton says it ought is not evidence that his fundamental laws are wrong, but simply that the pure experimental requirements for their true operation are not present. There aren’t enough data.[1]
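The cricket ball makes the point nicely in numbers. A back-of-envelope sketch (my own illustrative figures for speed, angle and drag, not anything from the book) compares the drag-free Newtonian range with a crude simulation that adds air resistance:

```python
from math import sin, cos, radians, sqrt, pi

G = 9.81  # gravitational acceleration, m/s^2

def ideal_range(v, angle_deg):
    """Textbook vacuum range: R = v^2 * sin(2*theta) / g."""
    return v**2 * sin(2 * radians(angle_deg)) / G

def range_with_drag(v, angle_deg, dt=0.001):
    """Euler-stepped flight with quadratic air drag, using rough
    cricket-ball figures: mass 0.16 kg, radius 3.6 cm, drag
    coefficient 0.5, air density 1.2 kg/m^3."""
    m, radius, cd, rho = 0.16, 0.036, 0.5, 1.2
    k = 0.5 * rho * cd * pi * radius**2 / m   # drag accel = -k * |v| * v
    theta = radians(angle_deg)
    x, y = 0.0, 0.0
    vx, vy = v * cos(theta), v * sin(theta)
    while y >= 0.0:                  # step until the ball lands
        speed = sqrt(vx**2 + vy**2)
        vx -= k * speed * vx * dt
        vy -= (G + k * speed * vy) * dt
        x += vx * dt
        y += vy * dt
    return x

v, angle = 35.0, 35.0   # a hypothetical lusty pull shot
print(round(ideal_range(v, angle), 1))       # ≈ 117.3 m in Newton's vacuum
print(round(range_with_drag(v, angle), 1))   # markedly shorter in real air
```

The gap between the two outputs is the “margin of error” the text describes: the law is not wrong, but its pure conditions are never present.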
All scientific — and, for that matter, any other linguistic — theories benefit from this “get out of jail” card: they are what philosopher Nancy Cartwright calls “nomological machines”, explicitly pre-defined to be “true” only in tightly circumscribed (and often practically impossible) conditions. The looseness or tightness of those constraining conditions and the consequences of marginal variations to them determine how useful the theory, or metaphor, is in practice. F = ma will be a better guide for a flying cricket ball than for the proverbial crisp packet blowing across St. Mark’s Square.
Emanuel Derman thinks science really speaks truths, while models peddle something less worthwhile. He sees a qualitative difference, not merely one of degree. Models he treats as broadly analogous with metaphors, which he says depend for their validity on comparison with an unrelated scenario. Theorems and laws, on the other hand, need empirical validation, but once they have it they stand rooted to the ground of reality on their own two feet.
I’m not sure the distinction is as sharp as Derman thinks it is. Nevertheless, this talk of metaphors cheered me, because the vital role of metaphor in constructing meaning is overlooked even by linguists, and is completely ignored by most scientists and mathematicians. But Derman makes less of it than I hoped he might.
What Derman means by metaphor is really a simile: the ability to reason by analogy with something already well understood. A model, under this reading, makes its prediction by reference to what would happen in an analogous situation. “Resemblance is always partial, and so models necessarily simplify things and reduce the dimensions of the world.” But metaphors are far more powerful, expansionary operators in scientific and literal discourse than that.
In Derman’s world there is a clear line between fact and metaphor, and he is impatient with those who confuse it. That would include me, because I have trouble seeing the boundary between metaphorical models and theoretical (or even literal) reality: each is an abstraction, each a simplification, each a “nomological machine” which only has value within a set of parameters. “Literal” meaning is really a species of metaphor. The difference between a model and a theory is one of scope and degree: a model is a heuristic, a theory more of an algorithm. Models are less worked out; more rules of thumb. Treated as such, both have real practical uses, provided their output is taken with an appropriate pinch of salt. LTCM’s folly was to suppose its model could solve for something it manifestly could not. Scientists in recent times have been just as guilty of ontological overreach, so I’m not enormously sympathetic with the bee in Derman’s bonnet.
There are plenty of better grounds to take umbrage at investment bankers at the moment, in other words.
What we are left with is really a low-level, idiosyncratic grumble. There are better books on this and similar subjects: Mandelbrot’s The (Mis)behaviour of Markets remains the technical classic, and Nassim Taleb’s The Black Swan: The Impact of the Highly Improbable is a more entertaining popular entry. I’m not quite sure where this one fits between them.
References
- ↑ Cricket ball trajectories provide a related metaphor about high modernism which I discuss elsewhere.