{{quote|{{d|High modernism|haɪ ˈmɒdᵊnɪzᵊm|n}}
A form of modernism characterised by an unfaltering confidence in science and technology as means to reorder the social and natural world.}}
==1. Data modernism==
[[System redundancy|One of the]] [[JC]]’s pet theories is that western commerce — especially the [[Financial services|part concerned with moving green bits of paper around]] — is deep into the regrettable phase of a love affair with “[[data modernism]]”, a computer-adulterated form of [[high modernism]].  


{{High modernism capsule}}


Embraced as it was by the two great Utopian ideologies of the twentieth century, high modernism reached its zenith in the 1930s, and fell from grace as dramatically as they did, but its basic, mad-scientist premise — that with sufficient information, processing power and control we can master our domain — has never quite gone away.


There were two Utopian ideas that died in the twentieth century, and one that didn’t:  [[F. W. Taylor]]’s [[scientific management]]: the view that, just as the natural world can be ordered by science, so can the business world be ordered, and controlled, by ''[[process]]''. [[Process]] is a sort of [[algorithm]] that runs on a carbon and not a silicon [[substrate]]: that is, ''us''. [[Taylorism]], too, once out of favour, is making a revival in the networked, data-driven world. We call this “[[data modernism]]”. It is encapsulated in the expression, attributed to Edwin R. Fisher:
{{quote|In God we trust. All others must bring data.<ref>Notably, Fisher made the statement to a Senate subcommittee in rebuttal to the proposition that passive smoking is bad for you: “I should like to close by citing a well-recognised cliché in scientific circles. The cliché is, “In God we trust, others must provide data.” What we need is good scientific data before I am willing to accept and submit to the proposition that smoking is a hazard to the nonsmoker.”</ref>}}


====Bring your own job satisfaction====
In pitting ''information'' against ''experience'' [[data modernism|this philosophy]] has systematically undermined the importance in organisations of ''those with [[ineffable]] expertise''. They are called “[[Subject matter expert|subject-matter experts]]”, which sounds venerable until you hear the exasperated tone in which it is uttered.


Over forty years the poor [[SME]] has, by a thousand literal cuts, been stripped of her status and weathered a slow but inevitable descent into the quotidian: first they came for her assistants — typists, receptionists, proof-readers, mail and fax room attendants —  then her perks — business travel, away days, taxis home — then her kit — company cars, laptops, mobile devices — then her space —  that once commodious office became communal, then lost its door, then its walls, diminished to a dedicated space along a row, and most recently has become a conditional promise of a sanitised white space in front of a [[telescreen]] somewhere in the building, should you be in early, or enough of your colleagues away sick or on holiday.  


This managed degradation of [[expert|expertise]] is a logical consequence of [[data modernism]]: human “magic” is not good, but an evil that is no longer necessary: risky, inconstant, evanescent, fragile, expensive and, most of all, ''hard to quantify'' — and what you can’t quantify, you can’t evaluate, and what you can’t evaluate you shouldn’t, in a data-optimised world, ''do''.


With the exploding power of information processing the range of things for which we must still rely on that [[Subject matter expert|necessary evil]] has diminished. Many [[thought leader]]s<ref>The most prominent is [[Ray Kurzweil]], though honourable mention to DB’s former CEO John Cryan and, of course, there is the redoubtable [[Richard Susskind|Suss]]. </ref> foretell it is only a matter of time until there are none left at all.


====Sciencing the shit out of business====
[[Data modernism]]’s central [[metaphor]] works by treating human workers as if they were carbon-based [[Turing machine]]s, and “the firm” an orchestrated network of automatons. Orchestration happens centrally, from the place with the best view of the big picture: the top.<ref>Curiously, this is not the theory behind distributed computing, which is rather [[end-to-end principle|controlled from the edges]]. But still.</ref>


From the top, the only “[[Legibility|legible]]” information is [[data]], so all germane [[Management information and statistics|management information]] takes that form — you know: “[[Signal-to-noise ratio|In God we trust, all others must bring data]]”. With enough of the stuff, so the theory goes, everything about the organisation’s ''present'' can be known, and from a complete picture of the present one can extrapolate to the future.


Armed with all the [[Signal-to-noise ratio|data]], the organisation’s permanent infrastructure can be honed down and dedicated to its core business. Peripheral functions — [[operation]]s, [[personnel]], [[legal]] and ''~ cough ~'' strategic [[management consultant|management advice]] — can be [[Outsourcing|outsourced]] to specialist service providers, and then scaled up or down as management priorities dictate<ref>“Surge pricing” in times of crisis, though.</ref> or switched out should they malfunction or otherwise be surplus to requirements.<ref>A former general counsel of UBS once had the bright idea of creating a “shared service” out of its legal function that could be contracted out to other banks, like [[Credit Suisse]]. He kept bringing the idea up, though it was rapidly pooh-poohed each time. Who knew it would work out so well in practice?</ref>


We should, by now, feel like we are in a new and better world — right? — yet the customer experience is as poor as ever. Not ''worse'', necessarily: just no ''better''. Just try getting hold of a bank manager now. “BAU-as-a-service” has streamlined and enhanced the great heft of what businesses do, at the cost of depersonalising service and eliminating outlying opportunities for which the model says there is no business case.
 
You might call this effect “[[Pareto triage]]”. Great for those within a [[Normal distribution|standard deviation]] of the mean, who are happy with the average. But it poorly serves the long tail of oddities and opportunities. Those just beyond that “[[Pareto triage|Pareto threshold]]” have little choice but to manage their expectations and take a marginally unsatisfactory experience as the best they are likely to get. Customers subordinate their own priorities to the preferences of the model. But, unless you are McDonald’s, the proposition that 80% of your customers ''want'' exactly the same thing — as opposed to just being prepared to put up with it, in the absence of a better alternative — is a kind of wishful [[averagarianism]].
 
====The Moneyball effect: experts are bogus====
In the meantime, those [[subject matter expert]]s who don’t drop off altogether wither on the vine. Even though we have less than perfect information, algorithmic extrapolations, derivations and [[Large language model|pattern matches]] from whatever we do have are presumed to yield greater predictive value than any [[subject matter expert]]s’ “[[ineffable]] wisdom”.
 
This is the {{br|Moneyball}} lesson. Our veneration for human expertise is a misapprehension. It is, er, ''not borne out by the data''. And in the twenty-first century we are ''inundated'' with data. Business optimisation is just a hard mathematical problem. Now we have computer processing power to burn, it is a [[knowable unknown]]. To the extent we fail, we can put it down to not enough data or computing power — ''yet''. But the [[singularity]] is coming, soon.
 
====The persistence of rubbish====
{{Singularity and perspective chauvinism}}
 
And if so, it’s worth asking again, ''how come everything seems so joyless and glum''? Are we ''missing'' something?
 
That would explain a curious dissonance: these modernising techniques arrive and flourish, while traditional modes of working requiring skill, craftsmanship and tact are outsourced, computerised, right-sized and AI-enhanced — yet the end product gets no less cumbersome, no faster, no leaner, and no less risky. ''[[Tedium]] remains constant''.<ref>This may be a [[sixteenth law of worker entropy]].</ref>
 
There may be fewer [[subject matter expert]]s around, but there seem to be more [[software-as-a-service]] providers, [[Master of Business Administration|MBA]]s,  [[COO]]s, [[workstream lead]]s and [[Proverbial school-leaver from Bucharest|itinerant school-leavers in call-centres on the outskirts of Brașov]].<ref>We present therefore the [[JC]]’s [[sixteenth law of worker entropy]] — the “[[law of conservation of tedium]]”.</ref>
 
====Taylorism====
None of this is new: just our enthusiasm for it. The prophet of [[data modernism]] was [[Frederick Winslow Taylor]], progenitor of the maximally efficient production line. His inheritors say things like, “[[The Singularity is Near|the singularity is near]]” and “[[Software is eating the world|software will eat the world]]” but for all their millenarianism the on-the-ground experience at the business end of all this world-eating software is as grim as it ever was.
 
==2. Time==
====Reductionism about time====
The JC’s developing view is that this grimness is caused by the poverty of this model when compared to the territory it sets out to map. For the magic of an algorithm is its ability to reduce a rich, multi-dimensional experience to a succession of very simple, one-dimensional steps. But in that [[reductionism]], we lose something ''essential''.
 
The Turing machines on which [[data modernism]] depends have no ''[[tense]]''. There is no past or future, perfect or otherwise, in code: there is only a permanent simple ''present''. A software object’s ''past'' is rendered as a series of date-stamped events and presented as metadata in the present. An object’s ''future'' is not represented at all.
 
In reducing everything to data, spatio-temporal continuity is represented as an array of contiguous, static ''events''. Each has zero duration and zero dimension: they are just ''values''.
 
Now the beauty of a static frame is its economy. It ''can’t move'', it can’t surprise us, it takes up minimal space. We can replace bandwidth-heavy actual spacetime, in which ''three''-dimensional objects project backwards and forwards in a ''fourth'' dimension, with ''apparent'' time, rendered in the single symbol-processing dimension that is the lot of all Turing machines.
 
The apparent temporal continuity that results, like cinematography, is a conjuring trick: it does not exist “in the code” at all; rather the output of the code is presented in a way that ''induces the viewer to impute continuity to it''. When regarding the code’s output, the user ascribes her own conceptualisation of time, from her own natural language, to what she sees. The “magic” is not in the machine. It is in her head.
 
For existential continuity backwards and forwards in “time” is precisely the problem the human brain evolved to solve: it demands a projection of continuously existing “things” with definitive boundaries, just one of which is “me”, moving through spacetime, interacting with each other. None of this “continuity” is “in the data”.<ref>{{author|David Hume}} wrestled with this idea of continuity: if I see you, then look away, then look back at you, what ''grounds'' do I have for believing it is still “you”? Computer code makes no such assumption. It captures property A, timestamp 1; property A, timestamp 2; property A, timestamp 3: these are discrete objects with a common property, in a permanent present — code imputes no necessary link between them, nor does it extrapolate intermediate states. It is the human genius to make that logical leap. How we do it, ''when'' we do it — generally, how human consciousness works — defies explanation. {{author|Daniel Dennett}} made a virtuoso attempt to apply this algorithmic [[reductionist]] approach to the problem of mind in {{br|Consciousness Explained}}, but ended up defining away the very thing he claimed to explain, effectively concluding “consciousness is an illusion”. But on whom?</ref>
 
Turing machines, and the [[data modernism]] that depends on them, do away with the need for time and continuity altogether, instead ''simulating'' them through a succession of static slices — but that continuity vanishes when one regards the picture show as a sequence of still frames.
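
To make the point concrete, here is a minimal sketch (the JC’s own doodle, with an invented record layout) of how code renders a “history”: a bag of present-tense, date-stamped facts, with the continuity supplied entirely by the reader.
<syntaxhighlight lang="python">
# Hypothetical records: "property A at t1; property A at t2; property A at t3".
# Nothing in the data asserts these belong to one continuously existing thing.
records = [
    {"id": "thing-1", "t": 1, "value": "A"},
    {"id": "thing-1", "t": 2, "value": "A"},
    {"id": "thing-1", "t": 3, "value": "A"},
]

# "Continuity" is just a query: group by a label, sort by a timestamp, and
# *choose* to read the result as a history. The machine extrapolates no
# intermediate states and promises no identity between the rows.
thing_history = sorted(
    (r for r in records if r["id"] == "thing-1"), key=lambda r: r["t"]
)
print(thing_history)
</syntaxhighlight>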
 
But existential continuity is not the sort of problem you can define away. Dealing with history and continuity is exactly the thing we are trying to solve.
 
[[Gerd Gigerenzer]] has a nice example that illustrates the importance of continuity.
 
Imagine a still frame of two pint glasses, A and B, each containing half a pint of beer. Which is half-full and which is half-empty? This static scenario poses an apparently stupid question. It is often used to illustrate how illogical and imprecise our language is. But this is only true if the still frame is considered in the abstract: that is, stripped of its ''context'' in time and space.
 
For, imagine a short film at the start of which glass A is full and glass B is empty. Then a little Cartesian imp arrives, picks up glass A and tips half into glass B. ''Now'' which is half-full and which is half-empty? ''That history makes a difference''.
 
The second time-bound scenario tells us something small, but meaningful about the history of the world. The snapshot does not.
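
A sketch of the two scenarios (values invented, naturally): any function of the still frame alone must give the same answer for both glasses; only the trajectory tells them apart.
<syntaxhighlight lang="python">
still_frame = {"A": 0.5, "B": 0.5}   # the ambiguous snapshot, in pints

film = [
    {"A": 1.0, "B": 0.0},            # before: A full, B empty
    {"A": 0.5, "B": 0.5},            # after the imp tips half of A into B
]

def describe(history, glass):
    # Half-full or half-empty is a fact about the trajectory, not the frame.
    before, now = history[0][glass], history[-1][glass]
    return f"{glass} is half-{'empty' if now < before else 'full'}"

print(describe(film, "A"))   # A is half-empty: it lost beer
print(describe(film, "B"))   # B is half-full: it gained beer
# The bare still_frame supports neither description: in it, A and B are identical.
</syntaxhighlight>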
 
====Assembling a risk period out of snapshots====
The object of the exercise is to have as fair a picture of the real risk of the situation with as little information processing as possible. Risks play out over a timeframe: the trick is to gauge what that is.
 
Here is the appeal of [[data modernism]]: you can assemble the ''appearance'' of temporal continuity — the calculations for which are ''gargantuan'' — out of a series of data snapshots, the calculations for which are merely ''huge''.
 
Information processing capacity being what it is — still limited, approaching an asymptote and increasingly energy consumptive (rather like an object approaching the speed of light) — there is still a virtue in economy. Huge beats gargantuan. In any case, the shorter the window of time we must represent to get that fair picture of the risk situation, the better. We tend to err on the short side.
 
Take, for example, the listed corporate’s quarterly reporting period: commentators lament how it prioritises unsustainable short-term profits over long-term corporate health and stability — it does — while modernists, and those resigned to it, shrug their shoulders with varying degrees of regret, shake their heads and say that is just how it is.
 
For our purposes, another short period that risk managers look to is the ''liquidity period'': the longest plausible time one is stuck with an investment before one can get out of it. The period of risk. This is the time frame over which one measures — guesstimates — one’s maximum potential unavoidable loss.
 
Liquidity differs by asset class. Liquidity for equities is usually almost — not quite, and this is important — instant. Risk managers generally treat it as “a day or so”.
 
For an investment fund it might be a day, a month, a quarter, or a year. Private equity might be five years. For real estate it is realistically months, but in any case indeterminate. Probabilistically you are highly unlikely to lose the lot in a day, but over five years there is a real chance.
 
So generally the more liquid the asset, the more controllable is its risk.  But liquidity, like volatility, comes and goes. It is usually not there when you most need it.  So we should err on the long side when estimating liquidity periods in a time of stress.
 
But the longer the period, the greater the chance of loss. And the harder things are to calculate. We are doubly motivated to keep liquidity periods as short as possible.
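
A back-of-the-envelope sketch of why the chance of ruin grows with the liquidity period. The numbers are invented and the random walk is a crude assumption, but the shape of the result is the point.
<syntaxhighlight lang="python">
import random

random.seed(1)

def p_big_loss(days, daily_vol=0.02, loss=0.5, trials=2_000):
    """Monte Carlo estimate of P(price falls by `loss` within `days`),
    modelling the price as a random walk with the given daily volatility."""
    hits = 0
    for _ in range(trials):
        price = 1.0
        for _ in range(days):
            price *= 1 + random.gauss(0, daily_vol)
            if price <= 1 - loss:
                hits += 1
                break
    return hits / trials

for days in (1, 21, 250, 1250):   # a day, a month, a year, five years
    print(days, p_big_loss(days))
# On these assumptions, losing half your money within a day is vanishingly
# unlikely; within five years it is a live possibility.
</syntaxhighlight>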
 
====[[Glass half full]] and multidimensionality====
Here is where history — ''real'' history, not the synthetic history afforded by [[data modernism]] — makes a difference.
 
''On a day'', the realistic range in which a stock can move in a liquidity period — its “gap risk” — is relatively stable. Say, 30 percent of its [[market value]]. (This [[market value]] we derive from technical and fundamental readings: the business’s book value, the presumption that there is a sensible [[bid]] and [[ask]], so that the stock price will oscillate around its “true value” as bulls and bears cancel each other out under the magical swoon of [[Adam Smith]]’s [[invisible hand]].)
 
But this view is assembled from static snapshots which don’t move at all. Each frame carries ''no'' intrinsic risk: the ''illusion'' of movement emerges from the succession of frames. Therefore [[data modernism]] is not good at estimating how long a risk period should be. Each of its snapshots, when you zero in on it, is a still life: here, shorn of its history, a “[[glass half full]]” and a “[[glass half empty]]” look alike.
 
We apply our risk tools to them as if they were the same: ''assuming the market value is fair, how much could I lose in the time it would realistically take me to sell''? Thirty percent, right?
 
But they are ''not'' the same.
 
If a stock trades at 200 today, it makes a difference that it traded at 100 yesterday, 50 the day before that, and over the last ten years traded within a range between 25 and 35. This history tells us this glass, right now, is massively, catastrophically over-full: that the milk in it is, somehow, freakishly forming an improbable spontaneous column above the glass, restrained and supported by nothing but the laws of extreme improbability, and it is liable to revert to its Brownian state at any moment with milk spilt ''everywhere''.
 
With that history we might think a drop of 30 percent of the milk is our ''best''-case scenario.
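
The same point as a sketch, using the hypothetical prices above: a snapshot model prices the gap risk of both stocks identically, while even the crudest look at the history separates the over-full glass from the steady one.
<syntaxhighlight lang="python">
bubble = [30, 32, 28, 31, 50, 100, 200]        # a decade in the 25-35 range, then parabolic
steady = [196, 203, 199, 201, 198, 202, 200]   # a stock that has long traded near 200

def snapshot_gap_risk(prices, pct=0.30):
    """All a static model can say: plus-or-minus 30% of the last print."""
    return prices[-1] * pct

def stretch_vs_history(prices):
    """A crude history-aware flag: today's price against the longer-run norm
    (here, the median of everything before the last three days)."""
    past = sorted(prices[:-3])
    return prices[-1] / past[len(past) // 2]

print(snapshot_gap_risk(bubble), snapshot_gap_risk(steady))    # 60.0 and 60.0: identical
print(stretch_vs_history(bubble), stretch_vs_history(steady))  # ~6.5x the norm vs ~1.0x
</syntaxhighlight>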


===It’s the long run, stupid===
The usual approach for system optimisation is to take a snapshot of the process as it is over its lifecycle, and map that against a hypothetical critical path. Kinks and duplications in the process are usually obvious, and we can iron them out to reconfigure the system to be as efficient and responsive as possible. Mapping best case and worst case scenarios for each phase in that life cycle can give good insights into which parts of the process are in need of re-engineering: it is often [[What will it look like?|not the ones we expect]].
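
A sketch of that mapping exercise, with invented phases and durations: lay best and worst cases against the lifecycle and see which phase actually dominates the tail. It is rarely the showiest one.
<syntaxhighlight lang="python">
phases = {                     # (best_days, worst_days), all hypothetical
    "intake":      (1, 2),
    "negotiation": (5, 15),
    "approvals":   (1, 40),    # the sleepy "rubber stamp" step
    "execution":   (1, 3),
}

best  = sum(lo for lo, hi in phases.values())
worst = sum(hi for lo, hi in phases.values())
bottleneck = max(phases, key=lambda p: phases[p][1] - phases[p][0])

print(best, worst)   # 8 days vs 60 days
print(bottleneck)    # "approvals": the widest spread, not the most glamorous phase
</syntaxhighlight>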


But how long should that life cycle be? We should judge it by the frequency of the worst possible negative event that could happen. Given that we are contemplating the [[infinite]] future, this is hard to say, but it is longer than we think: not just a single manufacturing cycle or reporting period. The efficiency of a process must take in ''all'' parts of the cycle — the whole gamut of the four seasons — not just that nice day in July when all seems fabulous with the world. There will be other days; difficult ones, on which multiple unrelated components fail at the same moment, the market drops, clients blow up, or tastes gradually change. There will be almost imperceptible, secular changes in the market which will demand products be refreshed, replaced, updated, reconfigured; opportunities and challenges will arise which must be met: your window for measuring who and what is ''truly'' redundant in your organisation must be long enough to capture all of those slow-burning, infrequent things.


Take our old, now dearly departed, friends at [[Credit Suisse]]. Like all banks, over the last decade they were heavily focused on the ''cost'' of their prime brokerage operation. Prime brokerage is a simple enough business, but it’s also easy to lose your shirt doing it.
(We are, of course, assuming that better human risk management might have averted that loss. If it would not have, then the firm should not have been in business at all.)


The skills and operations you need for these phases are different, more expensive, but likely far more determinative of the success of your organisation over the long run.


The [[Simpson’s paradox]] effect: over a short period the efficiency curve may seem to go one way; over a longer period it may run perpendicular.
Then, your time horizon for redundancy is not one year, or twenty years, but ''two-hundred and fifty years''. Quarter of a millennium: that is how long it would take to earn back $5 billion in twenty million dollar clips.
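
The arithmetic, for the avoidance of doubt:
<syntaxhighlight lang="python">
loss        = 5_000_000_000   # one catastrophic blow-up, in dollars
annual_clip = 20_000_000      # what the business line earns in a good year
print(loss / annual_clip)     # 250.0: years of good years to earn it back
</syntaxhighlight>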


==3. Redundancy==
====On the virtue of slack====
Redundancy is another word for “slack”, in the sense of “looseness in the tether between interconnected parts of a wider whole”.  


To optimise normal operation, we hear, we should ''minimise'' slack, thereby generating maximum responsiveness, handling, cornering: what musicians would call “attack” — tightness gives the greatest torque, the most direct transmission of power to the road; the minimum ''latency''.


The tighter we couple inputs to outputs, the faster the response. But the less margin there is for variation.
And, as {{author|Charles Perrow}} notes,<ref>In one of the JC’s favourite books, {{br|Normal Accidents: Living with High-Risk Technologies}}.</ref> this in-the-moment flow state, when the machine is humming, is only a stable state in tightly constrained environments, where every outcome can be predicted and monitored, and sub-optimal ones can be avoided by rote.


But, generally, these are not very interesting environments. They are production lines. Factory shop floors — [[nomological machine|nomological machines]] — where every element of the process is under control. It is where production is ''not'' tightly controlled — intervening agents, third parties, shifting priorities and market conditions — that things get “interesting”.  
 
These just-in-time systems have the lowest tolerance for failure. The more efficient a system is, the more “single point failures” it will contain, any one of which can bring the whole system to a halt. Any component misbehaviour can trigger a chain reaction leading to catastrophe.


That very lack of “give” that makes a sports car so responsive on a dry track makes it skid off a wet one. The less slack there is, the less time an operator has to diagnose and fix a problem — or shut the system down — to avoid catastrophic damage.  


A system with built-in back-ups and redundancies can go on working while we repair failed components. A certain amount of “stockpiling” in the system allows production to continue should there be any outages or supply chain problems throughout the process.
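
A toy two-stage production line makes the trade-off vivid. All the numbers are invented: stage A feeds stage B, and the only question is how much stockpile sits between them.
<syntaxhighlight lang="python">
import random

def run_line(buffer_cap, days=1000, p_outage=0.1, seed=42):
    """Stage A makes up to 2 units on a good day; stage B must ship exactly 1
    a day. `buffer_cap` is the stockpile allowed between them."""
    random.seed(seed)
    stock = halts = 0
    for _ in range(days):
        if random.random() > p_outage:              # stage A is running today
            stock = min(stock + 2, 1 + buffer_cap)  # today's unit, plus buffer
        if stock:
            stock -= 1                              # stage B ships its daily unit
        else:
            halts += 1                              # line down: B is starved
    return halts

print(run_line(buffer_cap=0))   # roughly one halt per upstream outage: ~100 of them
print(run_line(buffer_cap=5))   # the stockpile absorbs the outages: ~0 halts
</syntaxhighlight>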


But even a production line environment is not perfectly stable. It should be in a constant state of improvement, whereby engineers monitor and adjust to optimise, cater for evolving demand, react to market developments, and capitalise on new technology and knowhow.


This is “meta-production”: a valuable “background processing” function — important, but not day-to-day “urgent” — with which “redundant” personnel can be occupied, and from which they can redeploy immediately should a crisis arise.
This has two benefits: firstly, the process of “peacetime” self-analysis should in part be aimed at identifying emerging risks and design flaws in the system, thus heading off incipient crisis; secondly, to do that the personnel need ''expertise'': an intimate, detailed, holistic understanding of the process and the system. By intimately understanding the system, these second-line workers should therefore be better able to react to a crisis should one arise.


This behaviour rewards long-term “skin in the game”. The best employees here are long-serving, local, full-time employees, full of institutional knowledge and practical hands-on systems knowhow. Inexperienced outsourced labour, of the sort by whom these traditional experts are being systematically replaced, will be far less use in either role.
 
To be sure, the importance of employees, and the value they add, is not constant. We all have flat days where we don’t achieve very much. In an operationalised workplace they pick up a penny a day on 99 days out of 100; if they save the firm £ on that 100th day, it is worth paying them 2 pennies a day every day even if, 99 days out of 100, you are making a loss.
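
The arithmetic of that last point, with an invented figure standing in for the saving the text leaves blank:
<syntaxhighlight lang="python">
daily_wage    = 2        # pennies per day, paid every day
routine_gain  = 1        # pennies produced on an ordinary day
crisis_saving = 1_000    # hypothetical: pennies saved on the one bad day in 100
p_crisis      = 1 / 100

expected_daily_value = (1 - p_crisis) * routine_gain + p_crisis * crisis_saving
print(expected_daily_value)               # 10.99 pennies a day, on average
print(expected_daily_value > daily_wage)  # True: a loss 99 days in 100, a bargain overall
</syntaxhighlight>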
 
===Fragility and tight coupling===
The “leaner” a distributed system is, the more ''[[fragile]]'' it will be, and the more “[[single points of failure]]” it will contain whose malfunction, in the best case, will halt the whole system, and in [[tightly-coupled]] [[complex system]]s may trigger a chain reaction of successive component failures and unpredictable nonlinear consequences. On 9/11, as Martin Amis put it, 20 box-cutters created two million tonnes of rubble, left 4,000 dead, and transformed global politics for a generation.
 
A financial market is a [[complex system]]. It comprises an indeterminate number of autonomous actors, many of whom — notably participating corporations — are themselves complex systems, interacting in unpredictable ways. It is the nature of a complex system that it is unpredictable — not just in the sense of a jointed pendulum, whose exact path is impossible to predict but whose possible range of movement, its design space, is known to 100 percent. The “design space” of a complex system is unknown. You cannot calculate even probabilistically how it will behave. The fact that, for long periods, it appears to closely cleave to simple operating parameters is beside the point.
 
The robustness of any system depends on the tightness of the coupling between components. How much slack is there? In financial markets, increasingly, none at all.
 
When the JC started practice a millennium ago, to convey an urgent written communication one gave it to a chap on a bicycle. Well — one gave it to a secretary, who sent it to the mailroom, and ''they'' gave it to a chap on a bicycle. [[Facsimile]] was the innovation: while quicker than a bike courier, it was still manual and bounded at either end by analogue processes, such that the communication began and ended embedded in a physical [[substrate]] which you couldn’t easily reply to or forward,<ref>Not at least without time, manual intervention and loss of fidelity.</ref> let alone cut and paste from.
 
The analogue universe thus imposed its own immutable sobriety upon the conduct of business: there was a genteel maximum pace at which matters progressed, and that was that. Time is its own natural fire break. You could charge down to the mail room and call back an ill-advised letter in a way you can’t with an intemperate email. Just having the letter typed meant it passed through multiple hands; that effluxion was itself a kind of self-enforcing circumspection, a natural brake upon precipitate behaviour.
 
In any case, ''slow'', loosely-coupled chain reactions have a better chance of being stopped, or contained. The liquidity crunch that ruined Silicon Valley Bank unfolded in minutes, not hours, and did for [[Credit Suisse]], an unrelated bank on a different continent, before bank executives could work out what was going on.
 
So, yes, financial services are tightly coupled. The increasing speed and complexity of the system’s interconnectedness aggravates crash risk. The more interconnections there are, the faster information flows around the system, and the more quickly it can swamp whatever systems we erect to contain it. Credit Suisse had its own fundamental problems, to be sure, but the speed at which it was brought down by entirely unconnected system events should surely give pause for thought.
 
Redundancy — slack — in this environment, is a virtue.
====Regulatory ''human'' capital?====
Instinctively, we all know this.
 
We build certain kinds of redundancy into our systems precisely as a failsafe against catastrophic failure. Financial services regulators require banks to hold [[regulatory capital]] — cash, held against no specific risk, as a bulwark against divers credit and liquidity crises.
 
[[Tier 1 capital]] is a buffer — slack; a ''[[system redundancy]]'' — designed to protect not just the individual institutions who must hold it ''but the wider system''. As executives at [[Lehman]] and [[Credit Suisse]] would tell us, after the fact, capital takes you only so far. (''Before'' the fact they might have grumbled, too, that capital is an expensive dead weight on corporate returns.)
 
For [[Tier 1 capital|regulatory capital]] is an [[Airbag - steering-wheel continuum|airbag]]: it protects you in a prang, but doesn’t help you avoid one in the first place. To be sure, there are accounting techniques that do: [[risk weighting]], [[leverage ratio]]s, [[regulatory margin]] — when they work, they are better than airbags, but they suffer from being determinate responses to unpredictable problems.
There have already been three [[Basel Accords]]; they are working on a fourth, because the first three haven’t had the desired effect. We still have periodic market meltdowns, not because the Basel rules aren’t detailed enough, but because, fundamentally, fixed rules cannot manage indeterminate risk situations. We have seen, over and over, well-meant rules behave counterintuitively at times of stress.<ref>Quoth the [[Basel Committee on Banking Supervision|Basel Committee]], explaining its most recent rules: <br>“''An underlying cause of the global financial crisis was the build-up of excessive on- and off-balance sheet leverage in the banking system. In many cases, banks built up excessive leverage while apparently maintaining strong risk-based capital ratios. At the height of the crisis, financial markets forced the banking sector to reduce its leverage in a manner that amplified downward pressures on asset prices. This deleveraging process exacerbated the feedback loop between losses, falling bank capital and shrinking credit availability.''”</ref>
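
The Basel Committee’s point in that footnote rewards a toy balance sheet. All figures invented: a risk-based ratio can look heroic while raw leverage quietly makes the bank a hostage to a small move in asset values.
<syntaxhighlight lang="python">
assets     = 1_000    # total assets
avg_weight = 0.10     # a "safe" book: average 10% risk weight
equity     = 30       # Tier 1 capital

rwa = assets * avg_weight   # risk-weighted assets: 100

print(equity / rwa)         # 0.30: a "strong" 30% risk-based ratio
print(equity / assets)      # 0.03: about 33x leverage
# A mere 3% fall in asset values wipes out the equity, however
# respectable the risk-weighted ratio looked the day before.
</syntaxhighlight>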
 
We should not be surprised: accounting rules aren’t sentient. They cannot read the market, understand a given institution’s cultural dynamics, let alone its particular risk profile in times of unforeseeable stress.
 
But different kinds of buffers might be more effective at avoiding the pickles that leveraged financial institutions can get themselves into. Buffers of resource, material and, significantly, ''expert people'': an overabundance of skill, experience and expertise that ''can'' diagnose, react to, prevent and manage liquidity crises.
 
Why not, as well as regulatory ''share'' capital, encourage our institutions to hold excess [[human capital|''human'' capital]]? Or at least be less cavalier about systematically ''removing'' it, in the name of short-term cost savings.
 
Just as we must hold share capital in fair weather as well as foul, we should not expect to run expertise in fair weather on a shoestring. You can’t buy in institutional knowledge in a time of crisis. You can’t buy institutional knowledge ''at all''. Even un-contextualised expertise, at a time of panic, will command outrageous premiums.
 
===''Jidoka''===
But what, a finance director might ask, would these expensive experts do if they are technically “redundant”?
 
Unlike [[tier 1 capital]], ''human'' capital need not just sit there costing money. These are people you can use as systems design and process experts, to analyse systems, root out anachronisms, build parallel state-of-the-art IT systems to which legacy infrastructure can be migrated. This is ''jidoka'' — automation with a human touch. This is creative, rewarding work.
 
We run the gamut from super-fragility, where component failure triggers system ''meltdown'' — these are {{author|Charles Perrow}}’s “[[system accident]]s” — through normal fragility, where component failure causes system disruption; to normal robustness, where there is enough redundancy in the system that it can withstand outages and component failures, though components will continue to fail in predictable ways; and on to antifragility, where the redundancy itself is able to respond to component failures and secular challenges, redesigning the system in light of experience to ''reduce'' the risk of known failures.
 
The difference between robustness and antifragility here is the quality of the redundant components. If your redundancy strategy is to have lots of excess stock, lots of spare components and an inexhaustible supply of itinerant, enthusiastic but inexpert school-leavers from Bucharest, then your machine will be robust: it will be able to keep operating as long as macro conditions persist, but it will not learn, it will not develop, and it will not adapt to changing circumstances.
 
An [[antifragile]] system requires both kinds of redundancy: plant and stock, to keep the machine going; and tools and knowhow, to tweak the machine. Experience, expertise and insight. The same things — though they are expensive — that can head off catastrophic events can apprehend and capitalise upon outsized business opportunities. ChatGPT will not help with that.
 
===Suitable candidates for regulatory human capital===
Speaking of systems, there is, too, a ''negative'' feedback loop operating here. The institutional knowledge lives with loyal, long-serving employees. The 20-year operations veteran you made redundant in the last cost-cutting challenge is not [[fungible]] with the team of [[school leavers from Bucharest]] to whom you diffused his role. Nor will those call-centre contractors have the expertise or disposition to be much use as a system redundancy: they don’t have the skills, commitment or continuity of experience to help with re-engineering systems and optimising processes: that is a job that requires subtlety, and an intimate understanding of how the organisation ticks. Outsourced contractors neither know nor care.


===“Redundancy” as a key to successful change management===
{{Quote|
But gravity always wins.
: Radiohead, ''Fake Plastic Trees'' (1995)}}


[[Complex system]]s seek out their own equilibria. (A complex scenario that does not is not a system: it will fly apart.)


It finds its equilibrium not by divine command from the centre, but by countless decisions of the autonomous components that comprise the system. Over time, those autonomous components — people, mostly — react to stimuli, settle into habits and contrive ways of working, creating their own sub-networks and dependencies, and generally acquiring their own meta-theories of what they are there to do and how best to do it (some do this more consciously than others, but all, at some level, do it).


These priorities will be personal to each component: they may partly coincide with the organisation’s but won’t entirely — it is no part of a corporation’s plan, above all else, to ''make sure I stay here, and thrive, and get paid, while minimising personal risk and responsibility'', but this, we submit, motivates most corporate employees more profoundly than ''ensuring immaculate shareholder return''. But we digress.  


In any case the systems and subsystems evolve their own ways of working. They create their own efficiencies — efficiencies that yield to those personal motivations, and may be quite perverse to the organisation’s stated mission.  


They wear in grooves, smooth down edges and naturally, through the adaptive process of usage, seek out “local maxima”, judged from the perspective of the local components.  


We should not be surprised that systems which have found such an equilibrium are hard to shift from it. Call that equilibrium an “operating paradigm”. They will, through force of habit, precedent, template, and agreed ways of doing things, drift back to it.  


In a fight between logic and gravity, gravity always wins. The only way to beat gravity is to work with it, to find new maxima.


It stands to reason that a single “change agent” who arrives from outside and says, “hey, fellas, wouldn’t it be great if we fixed this?” won’t get far with the veteran crew who run the process now. Imagine an uncredentialised outsider presenting special relativity to the Royal Society in 1700, a few years after Newton published ''Principia Mathematica''. It is hard to imagine such an outsider even getting an audience, let alone going over well.


The thing about an operating paradigm is that ''it is operating''. On its own terms, it works. It ''isn’t in crisis''. Now in {{author|Thomas Kuhn}}’s conception of them,<ref>{{br|The Structure of Scientific Revolutions}}. Wonderful book.</ref> paradigms generally only break down when they stop working ''on their own terms''. Even then, credentialed practitioners go out of their way to reframe their data to ensure it is consistent with the paradigm. They make things up to make it work: cosmological constants, dark energy, even an entire multiverse. As far as its constituents are concerned, it is working ''fine''. They may regard it as a thing of beauty, a many-splendoured contraption that they have, over the ages, grown into and dependent on, the way a beaver grows into and dependent on its dam. They will not easily give it up — cannot: they would be lost without it. We should not be surprised to see well-meant change initiatives foundering against this kind of entropy: this ''will'' for things to settle back to how they were.


This is the single virtue of the [[reduction in force]]. By arbitrarily removing a percentage of the system components, you might ''force'' it out of equilibrium, giving the components no choice but to find new ways of working. But their motivations as they do so are no less self-interested than they were: you cannot shock a system into behaving selflessly.


Damon Centola’s research<ref>Damon Centola, {{br|Change: How to Make Big Things Happen}}, 2021.</ref> concerns the concentration and bunching of constituents needed to make change permanent. Complex change isn’t like viral infection. We can’t expect to drop jewels of crystalline logic into a well-established system equilibrium and expect it to spontaneously revolutionise itself. Even viral infections, which do that, rip through the population and then vanish. Individuals are either dead or ''resistant'' to the virus, but beyond that the system carries on more or less as it did.


A better model, Centola says, is a fishing net. Where a virus spreads quickly and burns out before people have been influenced to change (and indeed may leave them more resolutely set against it), when people are exposed to change through many strong, deep network ties, change will spread more slowly but more effectively and permanently.
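
A toy simulation of the distinction, with a made-up network and thresholds (the JC’s own sketch, not Centola’s model): a “virus” needs one exposure; a behaviour change needs reinforcement from several ties.
<syntaxhighlight lang="python">
import random

def spread(network, seeds, threshold, steps=20):
    """Adopt once at least `threshold` of your ties have already adopted."""
    adopted = set(seeds)
    for _ in range(steps):
        for node, ties in network.items():
            if node not in adopted and sum(t in adopted for t in ties) >= threshold:
                adopted.add(node)
    return len(adopted)

n = 30
# "fishing net": a ring lattice in which each node shares ties with its four
# nearest neighbours, so adjacent nodes reinforce one another
lattice = {i: [(i - 2) % n, (i - 1) % n, (i + 1) % n, (i + 2) % n] for i in range(n)}
# "viral" network: the same number of ties per node, scattered at random
random.seed(4)
scattered = {i: random.sample([j for j in range(n) if j != i], 4) for i in range(n)}

for name, net in (("lattice", lattice), ("scattered", scattered)):
    print(name,
          spread(net, seeds={0, 1}, threshold=1),    # simple contagion: a virus
          spread(net, seeds={0, 1}, threshold=2))    # complex contagion: a behaviour
# Typically the "virus" saturates both networks, while the behaviour change
# saturates only the lattice and stalls near its seeds on the scattered
# network: reinforcement needs overlapping ties.
</syntaxhighlight>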


This, too, stands to reason: if we are invited to propose change and sponsor it, rather than having it imposed upon us, we are more likely to own it.

Latest revision as of 17:58, 26 September 2023

High modernism
haɪ ˈmɒdᵊnɪzᵊm (n.)

A form of modernism characterised by an unfaltering confidence in science and technology as means to reorder the social and natural world.

1. Data modernism

One of the JC’s pet theories is that western commerce — especially the part concerned with moving green bits of paper around — is deep into the regrettable phase of a love affair with “data modernism”, a computer-adulterated form of high modernism.

As James C. Scott articulates it in his magnificent Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed, high modernism is a “muscle-bound” self-confidence in the power of enlightenment — with or without a capital E — to satisfy human needs and master nature (including our own) by the central organisation of society according to scientific and logical principles.

This is the rational, ordered and geometric view that the world can be manhandled, literally, to optimise social outcomes through big, centrally-governed infrastructural, agricultural and technological projects.

Embraced as it was by the two great Utopian ideologies of the twentieth century, high modernism reached its zenith in the 1930s, and fell from grace as dramatically as they did, but its basic, mad-scientist premise — that with sufficient information, processing power and control we can master our domain — has never quite gone away.

There were two Utopian ideas that died in the twentieth century, and one that didn’t: F. W. Taylor’s scientific management: the view that, just as the natural world can be ordered by science, so can the business world be ordered, and controlled, by process. Process is a sort of algorithm that runs on a carbon and not a silicon substrate: that is, us. Taylorism, too, once out of favour, is making a revival in the networked, data-driven world. We call this “data modernism”. It is encapsulated in the expression, attributed to Edwin R. Fisher:

In God we trust. All others must bring data.[1]

Bring your own job satisfaction

In pitting information against experience this philosophy has systematically undermined the importance in organisations of those with ineffable expertise. They are called “subject-matter experts”, which sounds venerable until you hear the exasperated tone in which it is uttered.

Over forty years the poor SME has, by a thousand literal cuts, been stripped of her status and weathered a slow but inevitable descent into the quotidian: first they came for her assistants — typists, receptionists, proof-readers, mail and fax room attendants — then her perks — business travel, away days, taxis home — then her kit — company cars, laptops, mobile devices — then her space — that once commodious office became communal, then lost its door, then its walls, diminished to a dedicated space along a row, and most recently has become a conditional promise of a sanitised white space in front of a telescreen somewhere in the building, should you be in early, or enough of your colleagues away sick or on holiday.

This managed degradation of expertise is a logical consequence of data modernism: human “magic” is not a good, but an evil that is no longer necessary: risky, evanescent, fragile, expensive, inconstant and, most of all, hard to quantify — and what you can’t quantify, you can’t evaluate, and what you can’t evaluate you shouldn’t, in a data-optimised world, do.

With the exploding power of information processing the range of things for which we must still rely on that necessary evil has diminished. Many thought leaders[2] foretell it is only a matter of time until there are none left at all.

Sciencing the shit out of business

Data modernism’s central metaphor works by treating human workers as if they were carbon-based Turing machines, and “the firm” an orchestrated network of automatons. Orchestration happens centrally, from the place with the best view of the big picture: the top.[3]

From the top, the only “legible” information is data, so all germane management information takes that form — you know: “In God we trust, all others must bring data”. With enough of the stuff, so the theory goes, everything about the organisation’s present can be known, and from a complete picture of the present one can extrapolate to the future.

Armed with all the data, the organisation’s permanent infrastructure can be honed down and dedicated to its core business. Peripheral functions — operations, personnel, legal and ~ cough ~ strategic management advice — can be outsourced to specialist service providers, and then scaled up or down as management priorities dictate[4] or switched out should they malfunction or otherwise be surplus to requirements.[5]

We should, by now, feel like we are in a new and better world — right? — yet the customer experience is as poor as ever. Not worse, necessarily: just no better. Just try getting hold of a bank manager now. “BAU-as-a-service” has streamlined and enhanced the great heft of what businesses do, at the cost of depersonalising service and eliminating outlying opportunities for which the model says there is no business case.

You might call this effect “Pareto triage”. Great for those within a standard deviation of the mean, who are happy with the average. But it poorly serves the long tail of oddities and opportunities. Those just beyond that “Pareto threshold” have little choice but to manage their expectations and take a marginally unsatisfactory experience as the best they are likely to get. Customers subordinate their own priorities to the preferences of the model. But, unless you are McDonald’s, the proposition that 80% of your customers want exactly the same thing — as opposed to just being prepared to put up with it, in the absence of a better alternative — is a kind of wishful averagarianism.

The Moneyball effect: experts are bogus

In the meantime, those subject matter experts who don’t drop off altogether wither on the vine. Even though we have less than perfect information, algorithmic extrapolations, derivations and pattern matches from whatever we do have are presumed to yield greater predictive value than any subject matter expert’s “ineffable wisdom”.

This is the Moneyball lesson. Our veneration for human expertise is a misapprehension. It is, er, not borne out by the data. And in the twenty-first century we are inundated with data. Business optimisation is just a hard mathematical problem. Now we have computer processing power to burn, it is a knowable unknown. To the extent we fail, we can put it down to not enough data or computing power — yet. But the singularity is coming, soon.

The persistence of rubbish

That we are arcing exponentially upward towards something, as the adjacent possibilities explode around us,[6] presumes that civilisation is, presently, as close to “the singularity” — whatever that is: truth, nirvana, apocalypse? who knows? — as it has ever been.

But seeing as we don’t know what or where that end state is (if we did, we would already be there, Q.E.D.), we cannot know how close we are to it, nor whether we are heading in the right direction.

A better hypothesis, thanks to Occam’s razor therefore: there is no end-goal. We are just humans, being.

And if so, it’s worth asking again, how come everything seems so joyless and glum? Are we missing something?

That would explain a curious dissonance: these modernising techniques arrive and flourish, while traditional modes of working requiring skill, craftsmanship and tact are outsourced, computerised, right-sized and AI-enhanced — yet the end product gets no less cumbersome, no faster, no leaner, and no less risky. Tedium remains constant.[7]

There may be fewer subject matter experts around, but there seem to be more software-as-a-service providers, MBAs, COOs, workstream leads and itinerant school-leavers in call-centres on the outskirts of Brașov.[8]

Taylorism

None of this is new: just our enthusiasm for it. The prophet of data modernism was Frederick Winslow Taylor, progenitor of the maximally efficient production line. His inheritors say things like “the singularity is near” and “software will eat the world”, but for all their millenarianism the on-the-ground experience at the business end of all this world-eating software is as grim as it ever was.

2. Time

Reductionism about time

The JC’s developing view is that this grimness is caused by the poverty of this model when compared to the territory it sets out to map. For the magic of an algorithm is its ability to reduce a rich, multi-dimensional experience to a succession of very simple, one-dimensional steps. But in that reductionism, we lose something essential.

The Turing machines on which data modernism depends have no tense. There is no past or future, perfect or otherwise in code: there is only a permanent simple present. A software object’s past is rendered as a series of date-stamped events and presented as metadata in the present. An object’s future is not represented at all.

In reducing everything to data, spatio-temporal continuity is represented as an array of contiguous, static events. Each has zero duration and zero dimension: they are just values.
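
To see quite how thin this representation is, here is a minimal sketch in Python (all names and values invented) of an object’s “past” as a machine actually holds it: a pile of date-stamped values in a permanent present, with no intrinsic thread connecting them.

    from dataclasses import dataclass
    from datetime import date

    @dataclass(frozen=True)
    class Snapshot:
        """One static, zero-duration observation: a timestamp and a value."""
        observed: date
        value: float

    # An object's "history", as code holds it: three discrete records in the
    # present tense. Nothing in this structure moves, endures or anticipates.
    history = [
        Snapshot(date(2023, 9, 24), 100.0),
        Snapshot(date(2023, 9, 25), 100.0),
        Snapshot(date(2023, 9, 26), 100.0),
    ]

    # Any continuity between the records is imputed by the human reading them:
    # the data asserts no link from one to the next, and no future at all.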

Now the beauty of a static frame is its economy. It can’t move, it can’t surprise us, it takes up minimal space. We can replace bandwidth-heavy actual spacetime — in which three-dimensional objects project backwards and forwards in a fourth dimension — with apparent time, rendered in the single symbol-processing dimension that is the lot of all Turing machines.

The apparent temporal continuity that results, like cinematography, is a conjuring trick: it does not exist “in the code” at all; rather the output of the code is presented in a way that induces the viewer to impute continuity to it. When regarding the code’s output, the user ascribes her own conceptualisation of time, from her own natural language, to what she sees. The “magic” is not in the machine. It is in her head.

For existential continuity, backwards and forwards in “time”, is precisely the problem the human brain evolved to solve: it demands a projection of continuously existing “things” with definitive boundaries, just one of which is “me”, moving through spacetime, interacting with each other. None of this “continuity” is “in the data”.[9]

Turing machines, and the data modernism that depends on them, do away with the need for time and continuity altogether, instead simulating them through a succession of static slices — but that continuity vanishes the moment one regards the picture show as a sequence of still frames.

But existential continuity is not the sort of problem you can define away. Dealing with history and continuity is exactly the thing we are trying to solve.

Gerd Gigerenzer has a nice example that illustrates the importance of continuity.

Imagine a still frame of two pint glasses, A and B, each containing half a pint of beer. Which is half-full and which is half-empty? This static scenario poses an apparently stupid question. It is often used to illustrate how illogical and imprecise our language is. But this is only true if the still frame is considered in the abstract: that is, stripped of its context in time and space.

For, imagine a short film at the start of which glass A is full and glass B is empty. Then a little Cartesian imp arrives, picks up glass A and tips half into glass B. Now which is half-full and which is half-empty? That history makes a difference.

The second time-bound scenario tells us something small, but meaningful about the history of the world. The snapshot does not.
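
The same point can be sketched in code, using the example’s own values: nothing fed only the final frame can tell the two films apart, while one glance at the preceding frame settles it.

    # Two "films", each ending on the identical still frame: (glass_a, glass_b).
    film_one = [(1.0, 0.0), (0.5, 0.5)]   # A was full; the imp tipped half into B
    film_two = [(0.0, 1.0), (0.5, 0.5)]   # B was full; the imp tipped half into A

    def classify_from_history(film):
        # With one prior frame, each glass's direction of travel is recoverable.
        (a0, b0), (a1, b1) = film
        return ("half-empty" if a1 < a0 else "half-full",
                "half-empty" if b1 < b0 else "half-full")

    assert film_one[-1] == film_two[-1]       # identical presents...
    assert classify_from_history(film_one) != classify_from_history(film_two)
    # ...different pasts: ("half-empty", "half-full") versus the reverse.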

Assembling a risk period out of snapshots

The object of the exercise is to have as fair a picture of the real risk of the situation with as little information processing as possible. Risks play out over a timeframe: the trick is to gauge what that is.

Here is the appeal of data modernism: you can assemble the appearance of temporal continuity — the calculations for which are gargantuan — out of a series of data snapshots, the calculations for which are merely huge.

Information processing capacity being what it is — still limited, approaching an asymptote and increasingly energy consumptive (rather like an object approaching the speed of light) — there is still a virtue in economy. Huge beats gargantuan. In any case, the shorter the window of time we must represent to get that fair picture of the risk situation, the better. We tend to err on the short side.

For example, the listed corporate’s quarterly reporting period: commentators lament how it prioritises unsustainable short-term profits over long-term corporate health and stability — it does — while modernists, and those resigned to it, shrug their shoulders with varying degrees of regret, shake their heads and say that is just how it is.

For our purposes, another short period that risk managers look to is the liquidity period: the longest plausible time one is stuck with an investment before one can get out of it. The period of risk. This is the time frame over which one measures — guesstimates — one’s maximum potential unavoidable loss.

Liquidity differs by asset class. Liquidity for equities is usually almost — not quite, and this is important — instant. Risk managers generally treat it as “a day or so”.

For an investment fund it might be a day, a month, a quarter, or a year. Private equity might be five years. For real estate it is realistically months, but in any case indeterminate. Probabilistically, you are highly unlikely to lose the lot in a day, but over five years there is a real chance.

So, generally, the more liquid the asset, the more controllable its risk. But liquidity, like volatility, comes and goes, and it is usually not there when you most need it. So we should err on the long side when estimating liquidity periods in a time of stress.

But the longer the period, the greater the chance of loss, and the harder things are to calculate. We are doubly motivated to keep liquidity periods as short as possible.
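
To put rough numbers on that double motivation, here is a crude sketch assuming, purely for illustration, a hypothetical 2 percent daily volatility, independent daily returns and conventional square-root-of-time scaling: the worst-case loss estimate balloons as the liquidity period lengthens.

    import math

    def worst_case_loss(daily_vol: float, horizon_days: int, z: float = 2.33) -> float:
        """Crude 99%-ish loss estimate over a liquidity period, assuming
        independent daily returns and square-root-of-time scaling."""
        return z * daily_vol * math.sqrt(horizon_days)

    daily_vol = 0.02   # hypothetical 2% daily volatility
    for label, days in [("equities, a day or so", 1),
                        ("a fund, one quarter", 65),
                        ("private equity, five years", 1250)]:
        loss = worst_case_loss(daily_vol, days)
        print(f"{label:27s} ~{loss:.0%}")   # figures past 100% read: the lot

The figures are caricatures, but the shape is the point: the estimate both grows with the horizon and gets less trustworthy as it does.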

Glass half full and multidimensionality

Here is where history — real history, not the synthetic history afforded by data modernism — makes a difference.

On any given day, the realistic range in which a stock can move in a liquidity period — its “gap risk” — is relatively stable. Say, 30 percent of its market value. (This market value we derive from technical and fundamental readings: the business’s book value, and the presumption that there is a sensible bid and ask, so that the stock price will oscillate around its “true value” as bulls and bears cancel each other out under the magical swoon of Adam Smith’s invisible hand.)

But this view is assembled from static snapshots, which don’t move at all. Each frame carries no intrinsic risk: the illusion of movement emerges from the succession of frames. Data modernism is therefore not good at estimating how long a risk period should be. Each of its snapshots, when you zero in on it, is a still life: here, shorn of its history, a “glass half full” and a “glass half empty” look alike.

We apply our risk tools to them as if they were the same: assuming the market value is fair, how much could I lose in the time it would realistically take me to sell? Thirty percent, right?

But they are not the same.

If a stock trades at 200 today, it makes a difference that it traded at 100 yesterday, 50 the day before that, and over the last ten years traded within a range between 25 and 35. This history tells us this glass, right now, is massively, catastrophically over-full: that the milk in it is, somehow, freakishly forming an improbable spontaneous column above the glass, restrained and supported by nothing but the laws of extreme improbability, and it is liable to revert to its Brownian state at any moment, with milk spilt everywhere.

With that history, we might think a drop of 30 percent of the milk is our best-case scenario.
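
Here is that asymmetry sketched with the text’s own numbers: the snapshot model prices the same 30 percent gap either side of spot; a glance at the trailing range reads the downside as the whole improbable column.

    trailing_range = (25.0, 35.0)          # the stock's ten-year range
    recent_prints = [50.0, 100.0, 200.0]   # the last three days' prices
    spot = recent_prints[-1]

    # Snapshot-only view: a fixed gap either side of "fair" market value.
    snapshot_gap = 0.30 * spot             # +/- 60 at a spot of 200

    # History-aware view: the distance back to the long-run range is the exposure.
    lo, hi = trailing_range
    reversion_loss = spot - hi             # 165, if gravity reasserts itself
    print(f"snapshot model:      +/-{snapshot_gap:.0f}")
    print(f"history-aware model: -{reversion_loss:.0f} ({reversion_loss / spot:.0%} of spot)")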

It’s the long run, stupid

The usual approach for system optimisation is to take a snapshot of the process as it is over its lifecycle, and map that against a hypothetical critical path. Kinks and duplications in the process are usually obvious, and we can iron them out to reconfigure the system to be as efficient and responsive as possible. Mapping best case and worst case scenarios for each phase in that life cycle can give good insights into which parts of the process are in need of re-engineering: it is often not the ones we expect.

But how long should that life cycle be? We should judge it by the frequency of the worst possible negative event that could happen. Given that we are contemplating the infinite future, this is hard to say, but it is longer than we think: not just a single manufacturing cycle or reporting period. The efficiency of a process must take in all parts of the cycle — the whole gamut of the four seasons — not just that nice day in July when all seems fabulous with the world. There will be other days: difficult ones, on which multiple unrelated components fail at the same moment, or the market drops, clients blow up, or tastes gradually change. There will be almost imperceptible, secular changes in the market which will demand products be refreshed, replaced, updated, reconfigured; opportunities and challenges will arise which must be met: your window for measuring who and what is truly redundant in your organisation must be long enough to capture all of those slow-burning, infrequent things.

Take our old, now dearly departed, friends at Credit Suisse. Like all banks, over the last decade they were heavily focused on the cost of their prime brokerage operation. Prime brokerage is a simple enough business, but it’s also easy to lose your shirt doing it.

In peacetime, things looked easy for Credit Suisse, so they juniorised their risk teams. This no doubt marginally improved their net peacetime return on their relationship with Archegos. But those wage savings — even if $10m annually — were out of all proportion to the incremental risk they assumed as a result.

(We are, of course, assuming that better human risk management might have averted that loss. If it would not have, then the firm should not have been in business at all.)

The skills and operations you need for these phases are different, more expensive, but likely far more determinative of the success of your organisation over the long run.

There is a Simpson’s paradox effect here: over a short period the efficiency curve may seem to go one way; over a longer period it may run perpendicular.
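
The textbook toy illustration of the paradox, with hypothetical throughput figures: efficiency improves within every category of work yet falls in aggregate, because the mix of work shifts between the two periods.

    # (successes, attempts) by task difficulty, for two measurement periods.
    period_1 = {"easy": (90, 100), "hard": (10, 20)}
    period_2 = {"easy": (19, 20),  "hard": (55, 100)}

    def rate(successes, attempts):
        return successes / attempts

    # Within each category, period 2 is the better one...
    for kind in ("easy", "hard"):
        assert rate(*period_2[kind]) > rate(*period_1[kind])

    # ...but in aggregate period 2 is worse, because the mix has shifted.
    agg_1 = rate(*map(sum, zip(*period_1.values())))   # 100/120, about 83%
    agg_2 = rate(*map(sum, zip(*period_2.values())))   #  74/120, about 62%
    assert agg_2 < agg_1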

The perils, therefore, of data: it is necessarily a snapshot, and in our impatient times we imagine time horizons that are far too short. A sensible time horizon should be determined not by reference to your expected regular income, but to your worst possible day. Take our old friend Archegos: it hardly matters that you can earn $20m from a client in a year, consistently, every year for twenty years if you stand to lose five billion dollars in the twenty-first.

Then your time horizon for redundancy is not one year, or twenty years, but two hundred and fifty years. A quarter of a millennium: that is how long it would take to earn back $5 billion in twenty-million-dollar clips.
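
The arithmetic, for the avoidance of doubt:

    annual_clip = 20_000_000        # what the client reliably earns you, per year
    one_bad_day = 5_000_000_000     # what the client can cost you, once

    print(one_bad_day / annual_clip)   # 250.0 years to earn it back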

3. Redundancy

On the virtue of slack

Redundancy is another word for “slack”, in the sense of “looseness in the tether between interconnected parts of a wider whole”.

To optimise normal operation, we hear, we should minimise slack, thereby generating maximum responsiveness, handling, cornering: what musicians would call “attack” — tightness gives the greatest torque, the most direct transmission of power to road; the minimum latency.

The tighter we couple inputs to outputs, the faster the response. But the less margin there is for variation.

And, as Charles Perrow notes,[10] this in-the-moment flow state, when the machine is humming, is only stable in tightly constrained environments, where every outcome can be predicted and monitored, and sub-optimal ones can be avoided by rote.

But, generally, these are not very interesting environments. They are production lines. Factory shop floors — nomological machines — where every element of the process is under control. It is where production is not tightly controlled — intervening agents, third parties, shifting priorities and market conditions — that things get “interesting”.

That very lack of “give” that makes a sports car so responsive on a dry track makes it skid off a wet one. The less slack there is, the less time an operator has to diagnose and fix a problem — or shut the system down — to avoid catastrophic damage.

A system with built-in back-ups and redundancies can go on working while we repair failed components. A certain amount of “stockpiling” in the system allows production to continue should there be any outages or supply chain problems throughout the process.
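
A toy simulation of the stockpiling point, with invented outage figures: a two-stage line in which the second stage starves the moment the first fails, unless a small buffer of stock sits between them.

    import random

    def run_line(days: int, buffer_capacity: int,
                 outage_p: float = 0.05, seed: int = 42) -> int:
        """Units shipped by a two-stage production line. Stage 1 makes one
        unit a day unless it suffers an outage; stage 2 ships one unit a day
        if there is anything to ship. All parameters are invented."""
        rng = random.Random(seed)
        buffer, shipped = 0, 0
        for _ in range(days):
            made = 1 if rng.random() > outage_p else 0   # stage 1, with outages
            if buffer + made > 0:                        # stage 2 ships a unit
                shipped += 1
                buffer = min(buffer + made - 1, buffer_capacity)
        return shipped

    print(run_line(days=1000, buffer_capacity=0))   # tightly coupled: no slack
    print(run_line(days=1000, buffer_capacity=5))   # five units of "redundancy"

The slack line ships on almost every day the tight line loses, at the cost of carrying a little idle stock: redundancy bought cheaply in peacetime.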

But even a production line environment is not perfectly stable. It should be in a constant state of improvement whereby engineers monitor and adjust to optimise, to cater for evolving demand, to react to market developments, and capitalise on new technology and knowhow.

This is “meta-production”: a valuable “background processing” function — important, but not day-to-day “urgent” — on which “redundant” personnel can be occupied, and from which they can redeploy immediately should a crisis arise.

This has two benefits: firstly, the process of “peacetime” self-analysis should in part be aimed at identifying emerging risks and design flaws in the system, thus heading off incipient crises; secondly, to do that, the personnel need expertise: an intimate, detailed, holistic understanding of the process and the system. By intimately understanding the system, these second-line workers should be better able to react to a crisis should one arise.

This behaviour rewards long-term “skin in the game”. The best employees here are long-serving, local, full-time staff, full of institutional knowledge and practical, hands-on systems knowhow. Inexperienced outsourced labour, of the sort by which these traditional experts are being systematically replaced, will be far less use in either role.

To be sure, the importance of employees, and the value they add, is not constant. We all have flat days where we don’t achieve very much. In an operationalised workplace they pick up a penny a day on 99 days out of 100; if they save the firm £ on that 100th day, it is worth paying them 2 pennies a day every day even if, 99 days out of 100, you are making a loss.
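
The text leaves the size of that hundredth-day save blank; the break-even it implies, though, falls straight out of the figures it does give:

    wage = 2       # pennies a day, every day
    output = 1     # pennies a day, on the 99 ordinary days
    cycle = 100    # days

    break_even = wage * cycle - output * (cycle - 1)
    print(break_even)   # 101 pennies: any hundredth-day save above this pays her way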

Fragility and tight coupling

The “leaner” a distributed system is, the more fragile it will be and the more “single points of failure” it will contain, whose malfunction in the best case will halt the whole system and, in tightly-coupled complex systems, may trigger a cascade of successive component failures and unpredictable nonlinear consequences. On 9/11, as Martin Amis put it, 20 box-cutters created two million tonnes of rubble, left 4,000 dead, and transformed global politics for a generation.

A financial market is a complex system. It comprises an indeterminate number of autonomous actors, many of whom — notably participating corporations — are themselves complex systems, interacting in unpredictable ways. It is the nature of a complex system that it is unpredictable — not just in the sense of a jointed pendulum, whose exact path is impossible to predict but whose possible range of movement, its “design space”, is known to 100 percent. The “design space” of a complex system is unknown. You cannot calculate even probabilistically how it will behave. The fact that, for long periods, it appears to cleave closely to simple operating parameters is beside the point.

The robustness of any system depends on the tightness of the coupling between components. How much slack is there? In financial markets, increasingly, none at all.

When the JC started in practice a millennium ago, to convey an urgent written communication one gave it to a chap on a bicycle. Well — one gave it to a secretary, who sent it to the mailroom, and they gave it to a chap on a bicycle. Facsimile was the innovation: while quicker than a bike courier, it was still manual, and bounded at either end by analogue processes, such that the communication began and ended embedded in a physical substrate which you couldn’t easily reply to or forward,[11] let alone cut and paste from.

The analogue universe thus imposed its own immutable sobriety upon the conduct of business: there was a genteel maximum speed at which matters progressed, and that was that. Time is its own natural fire break. You could charge down to the mail room and call back an ill-advised letter in a way you can’t with an intemperate email. Just having the letter typed meant it passed through multiple hands: that effluxion of time was itself a kind of self-enforcing circumspection, a natural brake upon precipitate behaviour.

In any case, slow, loosely-coupled chain reactions have a better chance of being stopped, or contained. The liquidity crunch that ruined Silicon Valley Bank unfolded in minutes, not hours, and did for Credit Suisse, an unrelated bank on a different continent, before bank executives could work out what was going on.

So, yes, financial services are tightly coupled. The increasing speed and complexity of the system’s interconnectedness aggravates crash risk. The more interconnections, the faster information flows around the system, the more quickly it can swamp whatever systems we erect to contain it. Credit Suisse had its own fundamental problems, to be sure, but the speed at which it was brought down by entirely unconnected system events should surely give pause for thought.

Redundancy — slack — in this environment, is a virtue.

Regulatory human capital?

Instinctively, we all know this.

We build certain kinds of redundancy into our systems precisely as a failsafe against catastrophic failure. Financial services regulators require banks to hold regulatory capital — cash, held against no specific risk, as a bulwark against divers credit and liquidity crises.

Tier 1 capital is a buffer — slack; a system redundancy — designed to protect not just the individual institutions who must hold it but the wider system. As executives at Lehman and Credit Suisse would tell us, after the fact, capital only takes you so far. (Before the fact they might have grumbled, too, that capital is an expensive dead weight on corporate returns.)

For regulatory capital is an airbag: it protects you in a prang, but doesn’t help you avoid one in the first place. To be sure, there are accounting techniques that do — risk weighting, leverage ratios, regulatory margin — and when they work they are better than airbags, but they suffer from being determinate responses to unpredictable problems. There have already been three Basel Accords, and they are working on a fourth, because the first three haven’t had the desired effect. We still have periodic market meltdowns, not because the Basel rules aren’t detailed enough, but because, fundamentally, fixed rules cannot manage indeterminate risk situations. We have seen, over and over, well-meant rules behaving counterintuitively at times of stress.[12]

We should not be surprised: accounting rules aren't sentient. They cannot read the market, understand a given institution’s cultural dynamics, let alone its particular risk profile in times of unforeseeable stress.

But different kinds of buffer might be more effective at avoiding the pickles that leveraged financial institutions can get themselves into: buffers of resources, material and, significantly, expert people — an overabundance of skill, experience and expertise that can diagnose, react to, prevent and manage liquidity crises.

Why not, as well as regulatory share capital, encourage our institutions to hold excess human capital? Or at least be less cavalier about systematically removing it, in the name of short-term cost savings.

Just as we must hold share capital in fair weather as well as foul, we should not expect to run expertise in fair weather on a shoestring. You can’t buy-in institutional knowledge in a time of crisis. You can’t buy institutional knowledge at all. Even un-contextualised expertise, at a time of panic, will command outrageous premiums.

Jidoka

But what, a finance director might ask, would these expensive experts do if they are technically “redundant”?

Unlike tier 1 capital, human capital need not just sit there costing money. These are people you can use as systems design and process experts: to analyse systems, root out anachronisms, and build parallel state-of-the-art IT systems to which legacy infrastructure can be migrated. This is jidoka — automation with a human touch. It is creative, rewarding work, and it builds exactly the intimate systems expertise on which crisis management depends.

We run the gamut from super-fragility, where component failure triggers system meltdown — these are Charles Perrow’s “system accidents” — through normal fragility, where component failure causes system disruption, and normal robustness, where there is enough redundancy in the system that it can withstand outages and component failures (though components will continue to fail in predictable ways), to antifragility, where the redundancy itself is able to respond to component failures and secular challenges, and redesigns the system in light of experience to reduce the risk of known failures.

The difference between robustness and antifragility here is the quality of the redundant components. If your redundancy strategy is to have lots of excess stock, lots of spare components and an inexhaustible supply of itinerant, enthusiastic but inexpert school-leavers from Bucharest, then your machine will be robust and functional — it will be able to keep operating as long as macro conditions persist — but it will not learn, it will not develop, and it will not adapt to changing circumstances.

An antifragile system requires both kinds of redundancy: plant and stock, to keep the machine going, but tools and knowhow, to tweak the machine. Experience, expertise and insight. The same things — though they are expensive — that can head off catastrophic events can apprehend and capitalise upon outsized business opportunities. ChatGPT will not help with that.

Suitable candidates for regulatory human capital

Speaking of systems, there is, too, a negative feedback loop operating here. The institutional knowledge lives with loyal, long-serving employees. The 20-year operations veteran you made redundant in the last cost-cutting challenge is not fungible with the team of school-leavers from Bucharest among whom you diffused his role. Nor will those call-centre contractors have the expertise or disposition to be much use as a system redundancy: they don’t have the skills, commitment or continuity of experience to help with re-engineering systems and optimising processes. That is a job that requires subtlety, and an intimate understanding of how the organisation ticks. Outsourced contractors neither know nor care.

“Redundancy” as a key to successful change management

But gravity always wins.

—Radiohead, “Fake Plastic Trees” (1995)

Complex systems seek out their own equilibria. (A complex scenario that does not is not a system: it will fly apart.)

A complex system finds its equilibrium not by divine command from the centre, but through countless decisions by the autonomous components that comprise it. Over time those components — people, mostly — react to stimuli, settle into habits and contrive ways of working as they go, creating their own sub-networks and dependencies, and generally acquiring their own meta-theories of what they are there to do and how best to do it. (Some do this more consciously than others, but all, at some level, do it.)

These priorities will be personal to each component: they may partly coincide with the organisation’s, but won’t entirely. It is no part of a corporation’s plan that, above all else, I should stay here, and thrive, and get paid, while minimising personal risk and responsibility — but this, we submit, motivates most corporate employees more profoundly than ensuring immaculate shareholder return. But we digress.

In any case the systems and subsystems evolve their own ways of working. They create their own efficiencies — efficiencies that yield to those personal motivations, and may be quite perverse to the organisation’s stated mission.

They wear in grooves, smooth down edges and naturally, through the adaptive process of usage, seek out “local maxima”, judged from the perspective of the local components.

We should not be surprised that systems which have found such an equilibrium are hard to shift from it. Call that equilibrium an “operating paradigm”. Systems will, through force of habit, precedent, template and agreed ways of doing things, drift back to it.

In a fight between logic and gravity, gravity always wins. The only way to beat gravity is to work with it, to find new maxima.

It stands to reason that a single “change agent” who arrives from outside and says, “hey, fellas, wouldn’t it be great if we fixed this?” won’t get far with the veteran crew who run the process now. Imagine an uncredentialised outsider presenting special relativity to the Royal Society in 1700, a few years after Newton published the Principia Mathematica. It is hard to imagine such an outsider even getting an audience, let alone going over well.

The thing about an operating paradigm is that it is operating. On its own terms, it works. It isn’t in crisis. In Thomas Kuhn’s conception,[13] paradigms generally only break down when they stop working on their own terms. Even then, credentialled practitioners go out of their way to reframe their data to ensure it is consistent with the paradigm. They make things up to make it work: cosmological constants, dark energy, even an entire multiverse. As far as its constituents are concerned, the paradigm is working fine. They may regard it as a thing of beauty: a many-splendoured contraption that they have, over the ages, grown into and dependent upon, the way a beaver grows into and dependent upon its dam. They will not easily give it up — cannot: they would be lost without it. We should not be surprised to see well-meant change initiatives foundering against this kind of entropy: this will for things to settle back to how they were.

This is the single virtue of the reduction in force: by arbitrarily removing a percentage of the system’s components, you might force it out of equilibrium, giving the components no choice but to find new ways of working. But their motivations as they do so are no less self-interested than they were: you cannot shock a system into behaving selflessly.

Damon Centola’s research[14] concerns the concentration and bunching of constituents needed to make change permanent. Complex change isn’t like viral infection. We can’t drop jewels of crystalline logic into a well-established system equilibrium and expect it spontaneously to revolutionise itself. Even viral infections, which do just that, rip through the population and then vanish: individuals end up either dead or resistant to the virus, but beyond that the system carries on more or less as it did.

A better model, Centola says, is a fishing net. Where a virus spreads quickly and burns out before people have been influenced to change (and may indeed leave them more resolutely set against it), change that reaches people through many strong, deep network ties spreads more slowly, but more effectively and permanently.
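
That distinction can be sketched, loosely after Centola’s idea of “complex contagion”, with every parameter invented: adoption here needs reinforcement from at least two contacts rather than one. On a clustered, fishnet-like lattice the change creeps around the whole ring; give each node the same number of ties scattered at random and it stalls at the seed group.

    import random

    def spread(neighbours, seeds, threshold=2, rounds=60):
        """Threshold ("complex") contagion: a node adopts once at least
        `threshold` of the nodes it listens to have adopted."""
        adopted = set(seeds)
        for _ in range(rounds):
            new = {n for n, ties in neighbours.items()
                   if n not in adopted
                   and sum(m in adopted for m in ties) >= threshold}
            if not new:
                break
            adopted |= new
        return len(adopted)

    N = 100
    # Fishnet: a ring lattice; each node tied to its four nearest neighbours,
    # giving many short, overlapping, mutually reinforcing ties.
    lattice = {i: {(i - 2) % N, (i - 1) % N, (i + 1) % N, (i + 2) % N}
               for i in range(N)}

    # Thin ties: the same degree, but partners picked at random; fine for a
    # virus (threshold of one), hopeless for change needing reinforcement.
    rng = random.Random(1)
    random_net = {i: set(rng.sample([j for j in range(N) if j != i], 4))
                  for i in range(N)}

    seeds = {0, 1, 2, 3}   # a bunched cluster of committed early adopters
    print(spread(lattice, seeds))      # 100: the whole ring, slowly
    print(spread(random_net, seeds))   # stalls at or near the seed group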

This, too, stands to reason: if we are invited to propose change and sponsor it, rather than having it imposed upon us, we are more likely to own it.

  1. Notably, Fisher made the statement to a Senate subcommittee in rebuttal to the proposition that passive smoking is bad for you: “I should like to close by citing a well-recognised cliché in scientific circles. The cliché is, “In God we trust, others must provide data.” What we need is good scientific data before I am willing to accept and submit to the proposition that smoking is a hazard to the nonsmoker.”
  2. The most prominent is Ray Kurzweil, though honourable mention to DB’s former CEO John Cryan and, of course, there is the redoubtable Suss.
  3. Curiously, this is not the theory behind distributed computing, which is rather controlled from the edges. But still.
  4. “Surge pricing” in times of crisis, though.
  5. A former general counsel of UBS once had the bright idea of creating a “shared service” out of its legal function that could be contracted out to other banks, like Credit Suisse. He kept bringing the idea up, though it was rapidly pooh-poohed each time. Who knew it would work out so well in practice?
  6. Stuart Kauffman, “The ‘Adjacent Possible’ – and How It Explains Human Innovation”, TED.
  7. This may be a sixteenth law of worker entropy.
  8. We present therefore the JC’s sixteenth law of worker entropy — the “law of conservation of tedium”.
  9. David Hume wrestled with this idea of continuity: if I see you, then look away, then look back at you, what grounds do I have for believing it is still “you”? Computer code makes no such assumption. It captures property A, timestamp 1; property A, timestamp 2; property A, timestamp 3: these are discrete objects with a common property, in a permanent present — code imputes no necessary link between them, nor does it extrapolate intermediate states. It is the human genius to make that logical leap. How we do it, when we do it — generally, how human consciousness works — defies explanation. Daniel Dennett made a virtuoso attempt to apply this algorithmic reductionist approach to the problem of mind in Consciousness Explained, but ended up defining away the very thing he claimed to explain, effectively concluding “consciousness is an illusion”. But on whom?
  10. In one of the JC’s favourite books, Normal Accidents: Living with High-Risk Technologies.
  11. Not at least without time, manual intervention and loss of fidelity.
  12. Quoth the Basel Committee, explaining its most recent rules:
    An underlying cause of the global financial crisis was the build-up of excessive on- and off-balance sheet leverage in the banking system. In many cases, banks built up excessive leverage while apparently maintaining strong risk-based capital ratios. At the height of the crisis, financial markets forced the banking sector to reduce its leverage in a manner that amplified downward pressures on asset prices. This deleveraging process exacerbated the feedback loop between losses, falling bank capital and shrinking credit availability.
  13. The Structure of Scientific Revolutions. Wonderful book.
  14. Damon Centola, Change: How to Make Big Things Happen, 2021.