Tail event

=====Nomological machines never quite work in the real world=====


When you bounce a ball, friction, energy loss, structural imperfections, impurities in the rubber and environmental interference frustrate the conditions needed to satisfy the “[[nomological machine]]”: the conditions required for Newton’s laws to hold are not present. So we let it pass when our bouncing ball never quite obeys them: it is close enough, and usually no one is counting in any case.


This is the sense in which, as {{author|Nancy Cartwright}} puts it, ''the laws of physics lie''. They ''don’t'' represent what happens in the real world.
 
The same applies to the statistical techniques we use to measure the behaviour of the market. As we have seen, most of the time the net upshot of all the idiosyncratic behaviour cancels out and leaves noise. Where it doesn’t, where interdependence creates a persistent variance from how a normal distribution would behave over time, we can model that, too, with measures like [[volatility]]. These second-order corrections are themselves modelled with probabilistic, which is to say independence-assuming, techniques.
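By way of illustration, here is a minimal sketch in Python (numpy assumed; all figures invented for the purpose): a return series whose volatility itself wanders is badly described by a single constant variance, but a rolling measure of volatility, itself a probabilistic device, recovers a usable second-order description.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(42)

# First-order model: iid normal "noise" returns with one constant volatility.
normal_returns = rng.normal(0.0, 0.01, size=2000)

# A crude non-iid series: the volatility itself wanders over time, a
# stand-in for the "persistent variance" described above. (Hypothetical
# numbers, chosen only to make the contrast visible.)
wandering_vol = 0.01 * np.exp(np.cumsum(rng.normal(0, 0.05, size=2000)))
clustered_returns = rng.normal(0, 1, size=2000) * wandering_vol

def rolling_vol(returns, window=50):
    # The "second-order correction": measure variance locally in time
    # rather than assuming one constant figure.
    out = np.full(len(returns), np.nan)
    for i in range(window, len(returns) + 1):
        out[i - 1] = returns[i - window:i].std()
    return out

# Under the first-order model the rolling vol barely moves; in the
# clustered series it is a moving target.
print("iid series, spread of rolling vol:      ",
      np.nanstd(rolling_vol(normal_returns)))
print("clustered series, spread of rolling vol:",
      np.nanstd(rolling_vol(clustered_returns)))
</syntaxhighlight>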
 
Why do we make these assumptions of independence and homogeneity? Because without them we could make no reliable predictions at all. A human being with free will and moral agency does not obey the laws of probability. If she has control of the coin she can put it down heads up every time. She can go out of her way to deliberately frustrate any predictions we make. Humans tend to ''like'' doing that, on an individual level. Likewise, you cannot draw models that predict the behaviour of dissimilar objects: statistical rules require homogeneity. The odds of rolling a six hold true for fair dice, but not for carpet slippers or fish.
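A toy illustration of the point, in Python, with an entirely hypothetical prediction rule: the frequencies of a fair coin converge just as the model says, while an agent who sees our prediction and does the opposite defeats it completely.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# A fair coin obeys the probability model: heads settle near one half.
fair = rng.integers(0, 2, size=10_000)
print("fair coin, share of heads:", fair.mean())

# An agent with control of the coin need not. Suppose our (invented)
# prediction rule is "call the majority so far"; she simply does the
# opposite every time, and our hit rate collapses to zero.
flips, heads, hits = 0, 0, 0
for _ in range(10_000):
    prediction = 1 if heads * 2 > flips else 0
    outcome = 1 - prediction          # deliberate frustration
    hits += int(prediction == outcome)
    heads += outcome
    flips += 1
print("hit rate against a wilful agent:", hits / flips)
</syntaxhighlight>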
 
But this is the magic, so it is claimed, of big data. All those idiosyncrasies cancel themselves out. ''You'' might not like rice pudding, or regularly buy [[lentil convexity|lentils]], but over a whole population a fairly reliable proportion does.
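Sketched in Python, under invented numbers: give a million shoppers each a private, idiosyncratic taste for lentils and the population-wide proportion comes out all but identical week after week, even though no individual shopper is predictable.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: a million shoppers, each with their own quirky,
# unobservable propensity to buy lentils in a given week.
population = 1_000_000
propensities = rng.beta(2, 8, size=population)   # idiosyncratic tastes

for week in range(3):
    bought = rng.random(population) < propensities
    print(f"week {week}: {bought.mean():.4%} of shoppers bought lentils")
# Any individual shopper is anybody's guess; the aggregate barely moves.
</syntaxhighlight>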
 
Our agencies


But there is a ''third'' order of dissimilarities. In times of stress in the market, the behaviour of other people in the market ''directly'' and ''directionally'' affects your transaction, and yours affects theirs.
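A back-of-envelope sketch of why this matters, in Python with invented parameters: when actors are independent, the noise of the average shrinks like 1/√n, which is the big-data magic above; introduce a common shock with correlation ρ and the average keeps a floor of √ρ·σ however many actors you add, so the cancellation fails exactly when you need it.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(7)
trials, n, sigma = 10_000, 1_000, 0.02   # hypothetical: 1,000 actors, 2% noise each

# Independent actors: idiosyncrasies cancel, so the average move has
# standard deviation sigma / sqrt(n).
independent = rng.normal(0, sigma, size=(trials, n))
print("independent, std of average move:", independent.mean(axis=1).std())

# Stressed market: a common shock couples everyone (correlation rho), so
# the average move retains a floor of sqrt(rho) * sigma however large n is.
rho = 0.5
common = rng.normal(0, sigma, size=(trials, 1))
idiosyncratic = rng.normal(0, sigma, size=(trials, n))
stressed = np.sqrt(rho) * common + np.sqrt(1 - rho) * idiosyncratic
print("stressed, std of average move:   ", stressed.mean(axis=1).std())
</syntaxhighlight>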