System redundancy
“I think the people in this country have had enough of experts from organisations with acronyms saying that they know what is best and getting it consistently wrong.”
—Michael Gove
The JC likes his pet management theories as you know, readers, and none are dearer to his heart than the idea that the high-modernists have, for forty years, held western management orthodoxy hostage.
The modernist programme is as simple to state as it is self-serving: a distributed organisation is best controlled centrally, and from the place with the best view of the big picture: the top. All relevant information can be articulated as data — you know: “In God we trust, all others must bring data” — and, with enough data, everything about the organisation’s present can be known and its future extrapolated.
Even though, inevitably, one has less than perfect information, extrapolations, mathematical derivations and algorithmic pattern matches from a large but finite data set will have better predictive value than the gut feel of “ineffable expertise”: the status we have historically assigned to experienced experts is grounded in folk psychology, lacks analytical rigour and, when compared with sufficiently granular data, cannot be borne out. This is the lesson of Moneyball: The Art of Winning an Unfair Game. Just as Wall Street data crunchers can have no clue about baseball and still outperform veteran talent scouts, so can data models and analytics that know nothing about the technical details of, say, the law outperform humans who do when optimising business systems. Thus, from a network of programmed but uncomprehending rule-followers, a smooth, steady and stable business revenue stream emerges.
Since the world overflows with data, we can programmatise business. Optimisation is a mathematical problem to be solved. It is a knowable unknown. To the extent we fail, we can put it down to not enough data or computing power.
Since data quantity and computing horsepower have exploded in the last few decades, the high-modernists have grown ever surer that their time — the Singularity — is nigh. Before long, everything will be solved.
But, a curious dissonance: these modernising techniques arrive and flourish, while traditional modes of working requiring skill, craftsmanship and tact are outsourced, computerised, right-sized and AI-enhanced — and yet the end product gets no less cumbersome, no faster, no leaner, and no less risky. There may be fewer subject matter experts around, but there seem to be more software-as-a-service providers, MBAs, COOs, workstream leads and itinerant school-leavers in call-centres on the outskirts of Brașov.
The pioneer of this kind of modernism was Frederick Winslow Taylor. He was the progenitor of the maximally efficient production line. His inheritors say things like “the singularity is near” and “software will eat the world”, but for all their millenarianism, the on-the-ground experience at the business end of all this world-eating software is as grim as it ever was.
We have a theory that this “data reductionism” — reducing everything to quantisable inputs and outputs — amounts to a kind of reductionism, only about time: just as radical rationalists see all knowledge as reducible to, and explicable in terms of, its infinitesimally small sub-atomic essence, so the data modernists see it as explicable in terms of infinitesimally small windows of time.
This is partly because computer languages don’t do tense: they are coded in the present, and have no frame of reference for continuity.[1] And it is partly because having to cope with history, the passage of time, and the continued existence of objects makes things exponentially more complex than they already are. An atomically thin snapshot of the world as data is beast enough, still well beyond the operating parameters of even the most powerful quantum machines: that level of detail extending into the future and back from the past is, literally, infinitely more complicated. The modernist programme is to suppose that “time” really just comprises billions of infinitesimally thin, static slices, each functionally identical to any other, so that by measuring the delta between them we have a means of handling that complexity.
That it does not have a hope of working seems beside the point.
In any case, just-in-time rationalisers take a single cycle and code for that. What is the process, start to finish? What are the dependencies? What are the plausible unknowns? And how do we optimise the efficiency of movement, components and materials to manage that one cycle?
It’s the long run, stupid
The usual approach to system optimisation is to take a snapshot of the process as it runs over its lifecycle and map it against a hypothetical critical path. Kinks and duplications in the process are usually obvious, and we can iron them out to reconfigure the system to be as efficient and responsive as possible. Mapping best-case and worst-case scenarios for each phase in that life cycle can give good insight into which parts of the process are in need of re-engineering: they are often not the ones we expect (see the sketch below).
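A minimal sketch of that kind of mapping, using entirely hypothetical phase names and durations: best- and worst-case timings for each phase set against a simple linear critical path, with phases ranked by their best-to-worst spread rather than their headline length.

```python
# Hypothetical illustration: best- and worst-case durations (in days) for each
# phase of a process, mapped against a simple linear critical path. The phases
# worth re-engineering are often the ones with the widest spread, not the ones
# with the longest best-case durations.
phases = {
    "design":      {"best": 5,  "worst": 12},
    "procurement": {"best": 10, "worst": 40},   # long-tail supplier risk
    "assembly":    {"best": 7,  "worst": 9},
    "testing":     {"best": 3,  "worst": 20},   # rework loops
}

spread = {name: p["worst"] - p["best"] for name, p in phases.items()}
for name, gap in sorted(spread.items(), key=lambda kv: -kv[1]):
    print(f"{name:12s} best-to-worst spread: {gap} days")

best_total = sum(p["best"] for p in phases.values())
worst_total = sum(p["worst"] for p in phases.values())
print(f"critical path: {best_total} days best case, {worst_total} days worst case")
```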
But how long should that life cycle be? We should judge it by the frequency of the worst possible negative event that could happen. Given that we are contemplating the infinite future, this is hard to say, but it is longer than we think: not just a single manufacturing cycle or reporting period. The efficiency of a process must take in all parts of the cycle — the whole gamut of the four seasons — not just that nice day in July when all seems fabulous with the world. There will be other days; difficult ones, on which multiple unrelated components fail at the same moment, the market drops, clients blow up, or tastes gradually change. There will be almost imperceptible, secular changes in the market which will demand products be refreshed, replaced, updated and reconfigured; opportunities and challenges will arise which must be met. Your window for measuring who and what is truly redundant in your organisation must be long enough to capture all of those slow-burning, infrequent things.
Take our old, now dearly departed, friends at Credit Suisse. Like all banks, over the last decade they were heavily focused on the cost of their prime brokerage operation. Prime brokerage is a simple enough business, but it’s also easy to lose your shirt doing it.
In peacetime, things looked easy for Credit Suisse, so they juniorised their risk teams. This, no doubt, marginally improved their net peacetime return on their relationship with Archegos. But those wage savings, even if $10m annually, were out of all proportion to the incremental risk they assumed as a result.
(We are, of course, assuming that better human risk management might have averted that loss. If it would not have, then the firm should not have been in business at all.)
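To put “out of all proportion” in expected-value terms, a minimal sketch: the $10m saving and the $5bn loss are the figures above, while the two per cent annual blow-up probability is entirely hypothetical, assumed here for illustration only.

```python
# Entirely hypothetical expected-value comparison: an annual wage saving set
# against the incremental risk of a rare, catastrophic loss. The 2% annual
# blow-up probability is an assumption for illustration only; the other
# figures come from the text.
wage_saving_per_year = 10_000_000      # from juniorising the risk team
blowup_probability = 0.02              # hypothetical: one bad year in fifty
blowup_loss = 5_000_000_000            # an Archegos-sized hit

expected_annual_cost = blowup_probability * blowup_loss
print(f"Annual saving:        ${wage_saving_per_year:>13,.0f}")
print(f"Expected annual cost: ${expected_annual_cost:>13,.0f}")
```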
The skills and operations you need for those difficult phases are different, and more expensive, but likely far more determinative of the success of your organisation over the long run.
The Simpson’s paradox effect: over a short period the efficiency curve may seem to run one way; over a longer period it may run the other way entirely.
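A minimal sketch of that effect, with hypothetical desk names and ticket counts: in each quarter taken on its own the juniorised desk resolves a higher proportion of its tickets than the senior desk, yet pooled over the whole year the ranking reverses, because the two desks faced very different volumes in the quiet and crisis quarters.

```python
# Hypothetical illustration of Simpson's paradox in an "efficiency" metric.
# Quarter by quarter the juniorised desk looks better; over the year it does not.
data = {
    #              (resolved, total tickets) per quarter
    "juniorised": {"quiet": (18, 20),  "crisis": (30, 100)},
    "senior":     {"quiet": (85, 100), "crisis": (5, 20)},
}

for desk, quarters in data.items():
    for quarter, (resolved, total) in quarters.items():
        print(f"{desk:10s} {quarter:6s}: {resolved / total:.0%}")
    year_resolved = sum(r for r, _ in quarters.values())
    year_total = sum(t for _, t in quarters.values())
    print(f"{desk:10s} year  : {year_resolved / year_total:.0%}")
```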
The perils, therefore, of data: it is necessarily a snapshot, and in our impatient times we imagine time horizons that are far too short. A sensible time horizon should be determined not by reference to your expected regular income, but to your worst possible day. Take our old friend Archegos: it hardly matters that you can earn $20m from a client in a year, consistently, every year for twenty years if you stand to lose five billion dollars in the twenty-first.
Then your time horizon for redundancy is not one year, or twenty years, but two hundred and fifty years. A quarter of a millennium: that is how long it would take to earn back $5 billion in twenty-million-dollar clips.
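A back-of-the-envelope check of that figure, using only the numbers already in the text:

```python
# How many years of $20m annual client revenue does it take to earn back
# a single $5bn loss?
worst_case_loss = 5_000_000_000   # the Archegos-sized blow-up
annual_revenue = 20_000_000       # steady yearly income from the client

breakeven_years = worst_case_loss / annual_revenue
print(f"Years to earn back the loss: {breakeven_years:.0f}")   # prints 250
```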