


Just as we must hold share capital in fair weather as well as foul, we should not expect to run expertise in fair weather on a shoestring. You can’t buy in institutional knowledge in a time of crisis. You can’t buy institutional knowledge ''at all''. Even un-contextualised expertise, at a time of panic, will command outrageous premiums.
===''Jidoka''===
But what, a finance director might ask, would these expensive experts do if they are technically “redundant”?
 
Unlike [[tier 1 capital]], ''human'' capital need not just sit there costing money. These are people you can use as systems design and process experts: to analyse systems, root out anachronisms, and build parallel state-of-the-art IT systems to which legacy infrastructure can be migrated. This is ''jidoka'': automation with a human touch. This is creative, rewarding work that builds institutional knowledge.
 
We run the gamut from super-fragility, where component failure triggers system ''meltdown'' — these are {{author|Charles Perrow}}’s “[[system accident]]s” — through normal fragility, where component failure causes system disruption; to normal robustness, where there is enough redundancy in the system to withstand outages and component failures, though components will continue to fail in predictable ways; and on to antifragility, where the redundancy itself responds to component failures and secular challenges, redesigning the system in light of experience to ''reduce'' the risk of known failures.
 
The difference between robustness and antifragility here is the quality of the redundant components. If your redundancy strategy is to have lots of excess stock, lots of spare components and an inexhaustible supply of itinerant, enthusiastic but inexpert [[school leavers from Bucharest]], then your machine will be robust: it will keep operating as long as macro conditions persist, but it will not learn, it will not develop, and it will not adapt to changing circumstances.
 
An [[antifragile]] system requires both kinds of redundancy: plant and stock, to keep the machine going; but also tools and knowhow, to tweak the machine. Experience, expertise and insight. The same things — though they are expensive — that can head off catastrophic events can also apprehend and capitalise upon outsized business opportunities. ChatGPT will not help with that.
 
===Suitable candidates for regulatory human capital===
Speaking of systems, there is, too, a ''negative'' feedback loop operating here. The institutional knowledge lives with loyal, long-serving employees. The 20-year operations veteran you made redundant in the last cost-cutting challenge is not [[fungible]] with the team of [[school leavers from Bucharest]] to whom you diffused his role. Nor will those call-centre contractors have the expertise or disposition to be much use as a system redundancy: they lack the skills, commitment and continuity of experience to help with re-engineering systems and optimising processes. That is a job that requires subtlety and an intimate understanding of how the organisation ticks. Outsourced contractors neither know nor care.
 
===“Redundancy” as a key to successful change management===
{{Quote|But gravity always wins.}}
In a fight between logic and gravity, gravity always wins.


It stands to reason that a single “change agent” who arrives from outside and says, “hey, fellas, wouldn’t it be great if we fixed this?” won’t get far with the veteran crew who run the process now. The thing about an operating paradigm is that it is operating. On its own terms, it works. It ''isn’t in crisis''. Now, in {{author|Thomas Kuhn}}’s conception,<ref>{{br|The Structure of Scientific Revolutions}}. Wonderful book.</ref> paradigms generally only break down when they stop working. As far as its constituents are concerned, this one is working ''fine''. They may regard it as a thing of beauty, a many-splendoured contraption that they have, over the ages, grown into and become dependent on, the way a beaver grows into and depends on its dam. They will not easily give it up — cannot: they would be lost without it. We should not be surprised to see well-meant change initiatives foundering against this kind of entropy: this will for things to settle back to how they were.
 