Tail event

Each investor’s private motivations, and opinions, may be nuanced and personal — how is the rest of its portfolio positioned, what local risks is it especially sensitive to — but these idiosyncrasies cancel out in a large sample — they are like the [[Brownian motion]] of molecules in a [[nice hot cup of tea]]. They are reversions to the [[entropy|entropic mean]]; baseline white noise — so we can disregard them. Which is just as well for the complexity of our models. Until it isn’t.
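The claim that idiosyncratic noise washes out in aggregate is just the law of large numbers at work. A toy sketch, purely illustrative: the traders, the noise level and the “fair value” are all invented here, not drawn from any model in the text.

```python
import random

random.seed(42)

def trader_opinion(fair_value):
    """One trader's view: fair value plus idiosyncratic, zero-mean noise."""
    return fair_value + random.gauss(0, 5)

fair_value = 100.0
averages = {}
for n in (10, 1_000, 100_000):
    # Average the individual, noisy opinions over ever-larger samples
    averages[n] = sum(trader_opinion(fair_value) for _ in range(n)) / n
    print(f"{n:>7} traders: average view {averages[n]:.2f}")
```

The private wobbles cancel; the aggregate view hugs the underlying value ever more tightly as the sample grows. That, in miniature, is the homogeneity assumption doing its job.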


Put another way: although the “interconnectedness” of similar transactions means they do ''not'' have the quality of independence that [[normal distributions]] require, most of the time it’s close enough. The information is chaotic — as traders say, “noisy” — in the immediate term, where the dissimilarities between trader motivations are most pronounced, but over a large aggregation of trades and a longer period a “signal” emerges. This is what [[Black-Scholes option pricing model|Black-Scholes]], volatility and convexity models track: as long as all traders use the same aggregated market information — and the market works hard to ensure they do — a “normal” probabilistic model<ref>I am working hard not to use the intimidating term “[[stochastic]]” here, by the way.</ref> works fairly well. It’s not a bad ''model''.  
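For the curious, the “normal” machinery gestured at here can be made concrete. Below is a minimal sketch of the standard Black-Scholes formula for a European call; the parameter values are illustrative assumptions, not anything taken from the text.

```python
import math

def norm_cdf(x):
    """Standard normal cumulative distribution, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes_call(spot, strike, rate, vol, t):
    """Black-Scholes price of a European call: the 'normal' world of
    lognormal returns and constant volatility described in the text."""
    d1 = (math.log(spot / strike) + (rate + 0.5 * vol ** 2) * t) / (vol * math.sqrt(t))
    d2 = d1 - vol * math.sqrt(t)
    return spot * norm_cdf(d1) - strike * math.exp(-rate * t) * norm_cdf(d2)

# Illustrative (assumed) parameters: at-the-money, 5% rate, 20% vol, one year
price = black_scholes_call(spot=100, strike=100, rate=0.05, vol=0.2, t=1.0)
print(f"call price: {price:.2f}")
```

Every input except the volatility is observable; the formula’s tidiness is exactly the kind of nomological machine the rest of this section pokes at.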


We treat professional market participants as a largely homogenous group from which emerges, over time, a [[signal]]. Almost like, you know, like an ''invisible hand'' is guiding the market.
This is good: it gets our model out of the gate. If investors were not broadly homogeneous, our probability models would not work. “The average height of every item in this shed” is not a particularly useful calculation. Which way the causal arrow flows — whether signal drives theory or theory determines what counts as a signal — is an open question.
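The shed quip can be made vivid: a sample mean is only informative when the population is homogeneous enough to have one. In this toy comparison the “shed” is modelled, purely for illustration, as a heavy-tailed Cauchy population, whose sample mean never settles down, against a homogeneous normal population, whose mean does.

```python
import math
import random

random.seed(7)

def item_height():
    """A wildly heterogeneous 'shed': heavy-tailed (Cauchy) values,
    generated as tan of a uniform angle."""
    return math.tan(math.pi * (random.random() - 0.5))

def person_height():
    """A homogeneous population: normal heights around 1.7m."""
    return random.gauss(1.7, 0.1)

n = 100_000
people_mean = sum(person_height() for _ in range(n)) / n
shed_mean = sum(item_height() for _ in range(n)) / n
print(f"homogeneous sample mean:   {people_mean:.3f}")
print(f"heterogeneous sample mean: {shed_mean:.3f}")
```

Run it with different seeds and the homogeneous mean barely moves, while the “shed” mean lurches about at the mercy of its latest outlier: averaging dissimilar things tells you almost nothing.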


But there is a second-order sense in which the earlier and later trades ''are'' related, in practice: the later participants know about the earlier trade and its price — it is part of that universal corpus of market information, deemed known by all, that informs the price-formation process: all can thereby infer the trend from prior trades — and use this abstract information to form their [[bid]] or [[ask]].  


=====Nomological machines never quite work in the real world=====


When you bounce a ball, friction, energy loss, structural imperfections, impurities in the rubber and environmental interference frustrate the conditions needed to satisfy the “[[nomological machine]]”: the required conditions for Newton’s laws to hold are not present so, when our bouncing ball never quite conserves momentum, we let it pass. It is close enough, and usually no one is counting in any case.  


This is the sense in which, as {{author|Nancy Cartwright}} puts it, ''the laws of physics lie''. They ''don’t'' represent what happens in the real world.


The same applies to the statistical techniques we use to measure market behaviour. Much of the non-homogenous behaviour cancels itself out. Where it doesn’t — where it creates a persistent variance from how a normal distribution would behave over time — we can model that, too, with measures like [[volatility]]. We use probabilistic — that is, independence-assuming — techniques to model these second-order corrections like volatility, too.  


Why do we assume independence and homogeneity of events? Because otherwise, ''we could not predict at all''. A human being with free will and moral agency does not obey the laws of probability. She can put a coin down heads-up every time. She can go out of her way to deliberately frustrate any prediction or suggestion {{strike|her husband|another person}} makes.  


“Oh, you predicted heads? Well, I say tails.”


It’s not just that individual humans ''can'' do that: they ''like'' doing that. Likewise, you can’t draw models that predict the behaviour of dissimilar objects. Statistical rules require homogeneity. The odds of rolling a six hold true for fair dice, but not for carpet slippers or fish.  


But this is the magic, so claimed, of [[big data]]. All those idiosyncrasies cancel themselves out and leave us with a set of basically homogenous participants. ''You'' might not like rice pudding or [[lentil convexity|lentils]] but over a whole population, a fairly reliable proportion of the population does. We can ignore individuals. The variances they represent are noise. It is our dystopian lot that our institutions and social systems increasingly are configured to ignore us.


{{Quote|
''BRIAN:'' “You’re all individuals!”<br>
''CROWD:'' “Yes! We’re all individuals!” <br>
''BRIAN:'' “You’re all different!” <br>
''CROWD:'' “Yes! We’re all different!” <br>
''(small voice at the back):'' “I’m not.”
:—Monty Python’s ''Life of Brian''}}


Our agency and our idiosyncrasies average out. We all want to eat, be warm and dry and have rewarding careers. That we all go about this in subtly different ways doesn’t much matter. Until it does.