
==2. Time==
====Reductionism about time====
The JC’s developing view is that this grimness is caused by the poverty of this model when compared to the territory it sets out to map. For the magic of an algorithm is its ability to reduce a rich, multi-dimensional experience to a succession of very simple, one-dimensional steps. But in that [[reductionism]], we lose something ''essential''.


The Turing machines on which [[data modernism]] depends have no ''[[tense]]''. There is no past or future, perfect or otherwise, in code: there is only a permanent simple ''present''. A software object’s ''past'' is rendered as a series of date-stamped events and presented as metadata in the present. An object’s ''future'' is not represented at all.


In reducing everything to data, spatio-temporal continuity is represented as an array of contiguous, static ''events''. Each has zero duration and zero dimension: they are just ''values''.
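
To make this concrete, here is a minimal Python sketch (the Account class and its fields are invented for illustration): the object “has” a past only as date-stamped metadata carried in the present, and nothing in it points forward.

<syntaxhighlight lang="python">
from dataclasses import dataclass, field

@dataclass
class Account:
    balance: float                                # the permanent present: a bare value
    history: list = field(default_factory=list)   # the "past": date-stamped metadata

    def update(self, new_balance: float, timestamp: str) -> None:
        # The old state is not "remembered"; it is re-presented as one more record.
        self.history.append((timestamp, self.balance))
        self.balance = new_balance

acct = Account(balance=100.0)
acct.update(150.0, "2023-01-02")
acct.update(90.0, "2023-01-03")

# Nothing in the object links these records, orders the story, or projects
# a future: any continuity is imputed by whoever reads the metadata.
print(acct.balance, acct.history)
# 90.0 [('2023-01-02', 100.0), ('2023-01-03', 150.0)]
</syntaxhighlight>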
 
Now the beauty of a static frame is its economy. It ''can’t move'', it can’t surprise us, it takes up minimal space. We can replace bandwidth-heavy actual spacetime — in which ''three''-dimensional objects project backwards and forwards in a ''fourth'' dimension — with ''apparent'' time, rendered in the single symbol-processing dimension that is the lot of all Turing machines.
 
The apparent temporal continuity that results, like cinematography, is a conjuring trick: it does not exist “in the code” at all; rather, the output of the code is presented in a way that ''induces the viewer to impute continuity to it''. When regarding the code’s output, the user ascribes her own conceptualisation of time, from her own natural language, to what she sees. The “magic” is not in the machine. It is in her head.


For existential continuity backwards and forwards in “time” is precisely the problem the human brain evolved to solve: it demands a projection of continuously existing “things” with definitive boundaries, just one of which is “me”, moving through spacetime, interacting with each other. None of this “continuity” is “in the data”.<ref>{{author|David Hume}} wrestled with this idea of continuity: if I see you, then look away, then look back at you, what ''grounds'' do I have for believing it is still “you”?  Computer code makes no such assumption. It captures property A, timestamp 1; property A, timestamp 2; property A, timestamp 3: these are discrete objects with a common property, in a permanent present — code imputes no necessary link between them, nor does it extrapolate intermediate states. It is the human genius to make that logical leap. How we do it, ''when'' we do it — generally, how human consciousness works — defies explanation. {{author|Daniel Dennett}} made a virtuoso attempt to apply this algorithmic [[reductionist]] approach to the problem of mind in {{br|Consciousness Explained}}, but ended up defining away the very thing he claimed to explain, effectively concluding “consciousness is an illusion”. But on whom?</ref>


The second time-bound scenario tells us something small, but meaningful, about the history of the world. The snapshot does not.
====Assembling a risk period out of snapshots====
The object of the exercise is to get as fair a picture of the real risk of the situation as possible, with as little information processing as possible. Risks play out over a timeframe: the trick is to gauge what that is.
Here is the appeal of [[data modernism]]: you can assemble the ''appearance'' of temporal continuity — the calculations for the real thing would be ''gargantuan'' — out of a series of data snapshots, the calculations for which are merely ''huge''.
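
A sketch of the trick, on invented numbers: the “continuous” path below is interpolated from a handful of snapshots, so every in-between value is imputed rather than observed.

<syntaxhighlight lang="python">
# Invented data: a handful of observed snapshots, timestamp -> price.
snapshots = {0: 100.0, 10: 105.0, 20: 95.0}

def apparent_price(t: float) -> float:
    """Impute an in-between price by linear interpolation between
    the two nearest snapshots: cinematography for data."""
    times = sorted(snapshots)
    if t <= times[0]:
        return snapshots[times[0]]
    for lo, hi in zip(times, times[1:]):
        if t <= hi:
            w = (t - lo) / (hi - lo)
            return (1 - w) * snapshots[lo] + w * snapshots[hi]
    return snapshots[times[-1]]

print(apparent_price(5))   # 102.5, a price no market ever printed
</syntaxhighlight>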
Information processing capacity being what it is — still limited, approaching an asymptote and increasingly energy-consumptive (rather like an object approaching the speed of light) — there is still a virtue in economy. Huge beats gargantuan. In any case, the shorter the window of time we must represent to get that fair picture of the risk situation, the better. We tend to err on the short side.
For example, the listed corporate’s quarterly reporting period: commentators lament how it prioritises unsustainable short-term profits over long-term corporate health and stability — it does — while modernists, and those resigned to it, shrug their shoulders with varying degrees of regret, shake their heads and say that is just how it is.
For our purposes, another short period that risk managers look to is the ''liquidity period'': the longest plausible time one is stuck with an investment before one can get out of it. The period of risk. This is the time frame over which one measures — guesstimates — one’s maximum potential unavoidable loss.
Liquidity differs by asset class. Liquidity for equities is usually almost — not quite, and this is important — instant. Risk managers generally treat it as “a day or so”.
For an investment fund it might be a day, a month, a quarter, or a year. Private equity might be five years. For real estate it is realistically months, but in any case indeterminate. Probabilistically, you are highly unlikely to lose the lot in a day, but over five years there is a real chance.
So generally the more liquid the asset, the more controllable is its risk.  But liquidity, like volatility, comes and goes. It is usually not there when you most need it.  So we should err on the long side when estimating liquidity periods in a time of stress.
But the longer the period, the greater the chance of loss. And the harder things are to calculate. We are doubly motivated to keep liquidity periods as short as possible.
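
For illustration only, a sketch of how one might parameterise the liquidity period. The figures are the indicative ones from the paragraphs above, and the three-times stress multiplier is an arbitrary assumption:

<syntaxhighlight lang="python">
# Indicative liquidity periods, in days, from the discussion above.
LIQUIDITY_DAYS = {
    "listed equity": 1,          # "a day or so"
    "investment fund": 90,       # a day, a month, a quarter or a year
    "real estate": 180,          # realistically months, in any case indeterminate
    "private equity": 5 * 365,   # might be five years
}

def risk_horizon(asset_class: str, stressed: bool = False) -> int:
    """Days over which to guesstimate maximum potential unavoidable loss.
    Err on the long side in times of stress: liquidity is usually not
    there when you most need it. The 3x factor is an arbitrary assumption."""
    days = LIQUIDITY_DAYS[asset_class]
    return days * 3 if stressed else days

print(risk_horizon("listed equity"))         # 1
print(risk_horizon("listed equity", True))   # 3
</syntaxhighlight>
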
====[[Glass half full]] and multidimensionality====
Here is where history — ''real'' history, not the synthetic history afforded by [[data modernism]] — makes a difference.
''On a day'', the realistic range in which a stock can move in a liquidity period — its “gap risk” — is relatively stable. Say, 30 percent of its [[market value]].  (This [[market value]] we derive from technical and fundamental readings: the business’s book value, the presumption that there is a sensible [[bid]] and [[ask]], so that the stock price will oscillate around its “true value” as bulls and bears cancel each other out under the magical swoon of [[Adam Smith]]’s [[invisible hand]].)
But this view is assembled from static snapshots which don’t move at all. Each frame carries ''no'' intrinsic risk: the ''illusion'' of movement emerges from the succession of frames. Therefore [[data modernism]] is not good at estimating how long a risk period should be. Each of its snapshots, when you zero in on it, is a still life: here, shorn of its history, a “[[glass half full]]” and a “[[glass half empty]]” look alike.
We apply our risk tools to them as if they were the same: ''assuming the market value is fair, how much could I lose in the time it would realistically take me to sell''?  Thirty percent, right?
But they are ''not'' the same.
If a stock trades at 200 today, it makes a difference that it traded at 100 yesterday, 50 the day before that, and over the last ten years traded within a range between 25 and 35. This history tells us this glass, right now, is massively, catastrophically over-full: that the milk in it is, somehow, freakishly forming an improbable spontaneous column above the glass, restrained and supported by nothing but the laws of extreme improbability, and it is liable to revert to its Brownian state at any moment, with milk spilt ''everywhere''.
With that history, we might think a drop of 30 percent of the milk is our ''best''-case scenario.
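
The back-of-the-envelope arithmetic, on the numbers in that example:

<syntaxhighlight lang="python">
# The same snapshot, read two ways (numbers from the example above).
price_today = 200.0
decade_low, decade_high = 25.0, 35.0   # where the stock lived for ten years
gap_risk = 0.30                        # the snapshot rule of thumb

snapshot_worst = price_today * (1 - gap_risk)   # 140.0: "you could lose 30%"
reversion_worst = decade_low                    # 25.0: the milk goes back in the glass

print(f"snapshot view: down {gap_risk:.1%} to {snapshot_worst}")
print(f"history view:  down {1 - reversion_worst / price_today:.1%} to {reversion_worst}")
# snapshot view: down 30.0% to 140.0
# history view:  down 87.5% to 25.0
</syntaxhighlight>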


===It’s the long run, stupid===
Then, your time horizon for redundancy is not one year, or twenty years, but ''two hundred and fifty years''. A quarter of a millennium: that is how long it would take to earn back $5 billion in twenty-million-dollar clips.
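
The arithmetic, for the avoidance of doubt:

<syntaxhighlight lang="python">
# A $5 billion loss, earned back in $20 million annual clips.
loss = 5_000_000_000
annual_clip = 20_000_000
print(loss / annual_clip)   # 250.0 years: a quarter of a millennium
</syntaxhighlight>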


==3. Redundancy==
====On the virtue of slack====
Redundancy is another word for “slack”, in the sense of “looseness in the tether between interconnected parts of a wider whole”.




To be sure, the importance of employees, and the value they add, is not constant. We all have flat days where we don’t achieve very much. In an operationalised workplace they pick up a penny a day on 99 days out of 100; if they save the firm £ on that 100th day, it is worth paying them 2 pennies a day every day even if, 99 days out of 100, you are making a loss.
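
The arithmetic of that wager, sketched with the hundredth-day save as a parameter S, since the text leaves the figure blank:

<syntaxhighlight lang="python">
WAGE = 2       # pennies paid per day
ORDINARY = 1   # pennies picked up on 99 days out of 100

def worth_employing(S: float, days: int = 100) -> bool:
    """Is paying 2 pennies a day worth it, if the employee picks up a
    penny on 99 days out of 100 and saves S pennies on the 100th?"""
    cost = WAGE * days                   # 200 pennies over the period
    value = ORDINARY * (days - 1) + S    # 99 pennies, plus the big save
    return value > cost                  # worth it iff S > 101 pennies

print(worth_employing(S=100))   # False: the save must be worth more than 101 pennies
print(worth_employing(S=500))   # True: a £5 save pays for all the "loss-making" days
</syntaxhighlight>
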
===Fragility and tight coupling===
The “leaner” a distributed system is, the more ''[[fragile]]'' it will be, and the more “[[single points of failure]]” it will contain. In the best case, the failure of one of these will halt the whole system; in [[tightly-coupled]] [[complex system]]s it may trigger a chain reaction of successive component failures, with unpredictable, nonlinear consequences. On 9/11, as Martin Amis put it, 20 box-cutters created two million tonnes of rubble, left 4,000 dead, and transformed global politics for a generation.