


Redundancy — slack — in this environment is a virtue.
====Regulatory ''human'' capital?====
Instinctively, we all know this.
We build certain kinds of redundancy into our systems precisely as a failsafe against catastrophic failure. Financial services regulators require banks to hold [[regulatory capital]] — cash, held against no specific risk, as a bulwark against divers credit and liquidity crises.
[[Tier 1 capital]] is a buffer — slack; a ''[[system redundancy]]'' — designed to protect not just the individual institutions that must hold it ''but the wider system''. As executives at [[Lehman]] and [[Credit Suisse]] would tell us, after the fact, capital only takes you so far. (''Before'' the fact they might have grumbled, too, that capital is an expensive dead weight on corporate returns.)
For [[Tier 1 capital|regulatory capital]] is an [[Airbag - steering-wheel continuum|airbag]]: it protects you in a prang, but doesn’t help you avoid one in the first place. To be sure, there are accounting techniques that do: [[risk weighting]], [[leverage ratio]]s, [[regulatory margin]] — when they work, they are better than airbags, but they suffer from being determinate responses to unpredictable problems.
There have already been three [[Basel Accords]]; they are working on a fourth, because the first three haven’t had the desired effect. We still have periodic market meltdowns, not because the Basel rules aren’t detailed enough, but because, fundamentally, fixed rules cannot manage indeterminate risk situations. We have seen, over and over, well-meant rules behaving counterintuitively at times of stress.<ref>Quoth the [[Basel Committee on Banking Supervision|Basel Committee]], explaining its most recent rules: <br>“''An underlying cause of the global financial crisis was the build-up of excessive on- and off-balance sheet leverage in the banking system. In many cases, banks built up excessive leverage while apparently maintaining strong risk-based capital ratios. At the height of the crisis, financial markets forced the banking sector to reduce its leverage in a manner that amplified downward pressures on asset prices. This deleveraging process exacerbated the feedback loop between losses, falling bank capital and shrinking credit availability.''”</ref>
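A stylised illustration of the Committee’s point (the figures are invented; the 3 per cent floor is the Basel III leverage ratio minimum, and the 25 per cent average risk weight is assumed purely for the sake of the arithmetic). Take a bank holding €10bn of Tier 1 capital against €300bn of total exposure:
:Leverage ratio = Tier 1 capital ÷ total exposure = €10bn ÷ €300bn ≈ 3.3%
:Risk-based ratio = Tier 1 capital ÷ risk-weighted assets = €10bn ÷ (25% × €300bn) ≈ 13.3%
On the risk-based measure the bank looks comfortably capitalised; on the leverage measure it is skating along the floor, which is just how one “builds up excessive leverage while apparently maintaining strong risk-based capital ratios”.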
We should not be surprised: accounting rules aren’t sentient. They cannot read the market or understand a given institution’s cultural dynamics, let alone its particular risk profile in times of unforeseeable stress.
But different kinds of buffers might be more effective at avoiding the pickles that leveraged financial institutions can get themselves into. Buffers of resource, material and, significantly, expert ''people'': an overabundance of skill, experience and expertise that ''can'' diagnose, react to, prevent and manage liquidity crises.
Why not, as well as regulatory ''share'' capital, encourage our institutions to hold excess [[human capital|''human'' capital]]? Or at least be less cavalier about systematically ''removing'' it, in the name of short-term cost savings.
Just as we must hold share capital in fair weather as well as foul, we should not expect to run expertise in fair weather on a shoestring. You can’t buy in institutional knowledge in a time of crisis. You can’t buy institutional knowledge ''at all''. Even un-contextualised expertise, at a time of panic, will command outrageous premiums.
===''Jidoka''===
But what, a finance director might ask, would these expensive experts do if they are technically “redundant”?
Unlike [[tier 1 capital]], ''human'' capital need not just sit there costing money. These are people you can use as systems design and process experts: to analyse systems, root out anachronisms, and build parallel state-of-the-art IT systems to which legacy infrastructure can be migrated. This is ''jidoka'' — automation with a human touch. This is creative, rewarding work that builds exactly the institutional knowledge you cannot buy in a crisis.
We run the gamut from super-fragility, where component failure triggers system ''meltdown'' ({{author|Charles Perrow}}’s “[[system accident]]s”), through normal fragility, where component failure causes system disruption, and normal robustness, where there is enough redundancy in the system to withstand outages and component failures (though components will continue to fail in predictable ways), to antifragility, where the redundancy itself can respond to component failures and secular challenges, redesigning the system in light of experience to ''reduce'' the risk of known failures.
The difference between robustness and antifragility here is the quality of the redundant components. If your redundancy strategy is to have lots of excess stock, lots of spare components and an inexhaustible supply of itinerant, enthusiastic but inexpert school-leavers from Bucharest, then your machine will be robust and functional: it will keep operating as long as macro conditions persist, but it will not learn, it will not develop, and it will not adapt to changing circumstances.
An antifragile system requires both kinds of redundancy: plant and stock, to keep the machine going, but also tools and knowhow, to tweak the machine. Experience, expertise and insight. The same things — though they are expensive — that can head off catastrophic events can also apprehend and capitalise upon outsized business opportunities. ChatGPT will not help with that.