Normal Accidents: Living with High-Risk Technologies

Why make the distinction between [[complex]] and [[complicated]] like this? Because we in the financial services industry are in the swoon of automated, pre-configured safety mechanisms — think [[chatbot]]s, [[risk taxonomy|risk taxonomies]], [[playbook]]s, [[checklist]]s, [[neural networks]], even ~ ''cough'' ~ [[contract|contractual rights]] — and while these may help resolve isolated and expected failures in ''complicated'' components, they have ''no'' chance of resolving systems failures, which, by definition, will confound them. Instead, these safety mechanisms ''will get in the way''. They are ''of'' the system. They are ''part'' of what has failed. Not only that: safety mechanisms, by their existence, ''add'' [[complexity]] to the system — they create their own unexpected interactions — and when a system failure happens they can make it ''harder'' to detect what is going on, much less how to stop it.


===When Kramer hears about this ...===
[[File:Shit hits fan.jpg|300px|thumb|right|Kramer hearing about this, yesterday.]]
So far, so hoopy; but here’s the rub: we can make our systems less complex and, to an extent, reduce [[tight coupling]] by careful design and iterative improvement.<ref>Air transport has become progressively less complex as it has developed. It has learned from each accident.</ref> But it is axiomatic that we can’t eliminate complexity altogether. And we tend ''not'' to simplify. To the contrary, we like to add prepackaged “risk mitigation” components to the process: [[Policy|policies]], processes, rules, and [[Chatbot|new-fangled bits of kit]]. These give our [[middle management]] layer comfort; they allow them to set their [[RAG status]]es to green, and may justify eviscerating that expensive cohort of [[subject matter expert]]s who will turn out to be just the people you need when Kramer hears about this.


Here is where the folly of [[complicated]] safety mechanisms comes in: adding linear safety systems to a system ''increases'' its complexity, and makes dealing with systems failures, when they occur, even harder. Not only do linear safety mechanisms exacerbate or even create their own accidents, but they also afford a degree of false comfort that encourages managers, who typically have financial targets to meet, not safety ones, to run the system harder, thus increasing the tightness of the coupling between unrelated components. That same Triple A rating that lets your risk officer catch some zeds at the switch encourages your trader to double down. ''I’m covered. What could go wrong?''
