

Redundancy — slack — in this environment is a virtue.
====Regulatory ''human'' capital?====
Instinctively, we all know this.  


We build certain kinds of redundancy into our systems precisely as a failsafe against catastrophic failure. Financial services regulators require banks to hold [[regulatory capital]] — cash, held against no specific risk, as a bulwark against divers credit and liquidity crises.  


[[Tier 1 capital]] is a buffer — slack; a ''[[system redundancy]]'' — designed to protect not just the individual institutions who must hold it ''but the wider system''. As executives at [[Lehman]] and [[Credit Suisse]] would tell us, after the fact, capital only takes you so far. (''Before'' the fact they might have grumbled, too, that capital is an expensive dead weight on corporate returns.)


For [[Tier 1 capital|regulatory capital]] is an [[Airbag - steering-wheel continuum|airbag]]: it protects you in a prang, but doesn’t help you avoid one in the first place. To be sure, there are accounting techniques that do: [[risk weighting]], [[leverage ratio]]s, [[regulatory margin]] — when they work, they are better than airbags, but they suffer from being determinate responses to unpredictable problems.  
There have already been three [[Basel Accords]]; they are working on a fourth, because the first three haven’t had the desired effect. We still have periodic market meltdowns, not because the Basel rules aren’t detailed enough, but because, fundamentally, fixed rules cannot manage indeterminate risk situations. We have seen, over and over, well-meant rules behave counterintuitively at times of stress.<ref>Quoth the [[Basel Committee on Banking Supervision|Basel Committee]], explaining its most recent rules: <br>“''An underlying cause of the global financial crisis was the build-up of excessive on- and off-balance sheet leverage in the banking system. In many cases, banks built up excessive leverage while apparently maintaining strong risk-based capital ratios. At the height of the crisis, financial markets forced the banking sector to reduce its leverage in a manner that amplified downward pressures on asset prices. This deleveraging process exacerbated the feedback loop between losses, falling bank capital and shrinking credit availability.''”</ref>
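By way of a toy sketch (the book, the weights and the figures below are invented for illustration, not actual Basel calibrations for any real institution), this is roughly what a determinate rule looks like in operation: a fixed formula, applied mechanically to whatever the bank happens to hold. It is either satisfied or it is not, and it knows nothing about what happens next.

<syntaxhighlight lang="python">
# Toy sketch of a determinate capital rule in action: fixed risk weights and
# fixed minimum ratios, applied mechanically to whatever book the bank holds.
# All figures and weights are illustrative, not actual Basel calibrations.

# Illustrative asset book: (name, exposure in £m, standardised risk weight)
assets = [
    ("gilts",             500, 0.00),  # sovereign debt, zero-weighted
    ("residential loans", 300, 0.35),
    ("corporate loans",   200, 1.00),
]

tier1_capital = 30  # £m of capital held against no specific risk

risk_weighted_assets = sum(exposure * weight for _, exposure, weight in assets)
total_exposure = sum(exposure for _, exposure, _ in assets)

capital_ratio = tier1_capital / risk_weighted_assets   # risk-based measure
leverage_ratio = tier1_capital / total_exposure        # blunt, unweighted backstop

print(f"Risk-weighted capital ratio: {capital_ratio:.1%}")   # about 9.8%
print(f"Leverage ratio:              {leverage_ratio:.1%}")  # 3.0%

# The rule is either satisfied or it is not. Nothing in the arithmetic knows
# whether the zero-weighted assets are about to gap, or what behaviour the
# institution's culture actually rewards.
</syntaxhighlight>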


We should not be surprised: accounting rules aren't sentient. They cannot read the market or understand a given institution’s cultural dynamics, let alone its particular risk profile in times of unforeseeable stress. 


But different kinds of buffer might be more effective at avoiding the kinds of pickle that leveraged financial institutions can get themselves into. Buffers of resource, material and, significantly, expert people: overabundances of skill, experience and expertise that ''can'' diagnose, react to, prevent and manage liquidity crises.


Why not, as well as regulatory ''share'' capital, encourage our institutions to hold excess [[human capital|''human'' capital]]? Or at least be less cavalier about systematically ''removing'' it, in the name of short-term cost savings.


Just as one must hold share capital in fair weather as well as foul, we should not expect to run expertise in fair weather on a shoestring. You can’t buy in institutional knowledge in a time of crisis. You can’t buy institutional knowledge ''at all''. Even un-contextualised expertise, at a time of panic, will command outrageous premiums.  
===''Jidoka''===
But what, a finance director might ask, would these expensive experts do if they are technically “redundant”?


Unlike [[tier 1 capital]], ''human'' capital need not just sit there costing money. These are people you can use as systems design and process experts: to analyse systems, root out anachronisms, and build parallel state-of-the-art IT systems onto which legacy infrastructure can be migrated. This is ''jidoka'' — automation with a human touch. This is creative, rewarding work that builds institutional knowledge.  
 


We run the gamut from superfragility, where component failure triggers system ''meltdown'' — these are {{author|Charles Perrow}}’s “[[system accident]]s” — through normal fragility, where component failure causes system disruption, and normal robustness, where there is enough redundancy in the system that it can withstand outages and component failures, though components will continue to fail in predictable ways, to antifragility, where the redundancy itself is able to respond to component failures and secular challenges, and redesigns the system in light of experience to ''reduce'' the risk of known failures.


The difference between robustness and antifragility here is the quality of the redundant components. If your redundancy strategy is to have lots of excess stock, lots of spare components and an inexhaustible supply of itinerant, enthusiastic but inexpert school-leavers from Bucharest, then your machine will be robust and functional, and will be able to keep operating as long as macro conditions persist, but it will not learn, it will not develop, and it will not adapt to changing circumstances.


An antifragile system requires both kinds of redundancy: plant and stock, to keep the machine going, but also tools and knowhow, to tweak the machine. Experience, expertise and insight. The same things — though they are expensive — that can head off catastrophic events can apprehend and capitalise upon outsized business opportunities. ChatGPT will not help with that.
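To make the contrast concrete, here is a toy sketch (the machine, the failure rates and the stock of spares are all invented for illustration): two machines hold the same buffer of spares and face the same run of shocks, but only one redesigns itself after each failure.

<syntaxhighlight lang="python">
import random

# Toy sketch, not from the text: two redundancy strategies facing the same
# stream of random shocks. The "robust" machine swaps in spares and carries
# on; its failure rate never changes. The "antifragile" machine also swaps
# in spares, but each failure prompts a redesign that makes that kind of
# failure a little less likely next time.

def run(machine_learns: bool, rounds: int = 1000) -> int:
    rng = random.Random(2024)    # identical shocks for both machines
    failure_rate = 0.10          # chance a component fails in any round
    spares = 50                  # stock of spare components (the buffer)
    failures = 0
    for _ in range(rounds):
        if rng.random() < failure_rate:
            failures += 1
            if spares == 0:
                break            # no slack left: the machine stops
            spares -= 1          # robust response: swap in a spare
            if machine_learns:
                # antifragile response: redesign around the known failure,
                # so the same failure is less likely in future
                failure_rate *= 0.9
    return failures

print("Robust machine failures:     ", run(machine_learns=False))
print("Antifragile machine failures:", run(machine_learns=True))
</syntaxhighlight>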