Normal Accidents: Living with High-Risk Technologies

}}This is one of those “books that will change your life”. Well — one that ''should'' change lives. That it was written in 1984 — {{author|Charles Perrow}} passed away in 2019 — suggests that maybe it hasn’t: that the irrationalities that motivate so much of what we do are more pervasive than plainly written common sense.

{{author|Charles Perrow}} was a sociologist who fell into the discipline of [[systems analysis]]: analysing how social structures like businesses, governments and public utilities, being loose networks of autonomous individuals, work. Perrow’s focus fell upon organisations that present specific risks to operators, passengers and innocent bystanders — nuclear and other power stations, airways, shipping lines, though the read-across to financial systems is obvious — where a combination of what he termed '''[[complexity|complex interactions]]''' and '''[[tight coupling]]''' in distributed systems means that catastrophic accidents are not just likely but, from time to time, ''inevitable''. Such unpredictable failures are an intrinsic property of a complex, tightly-coupled system, not merely a function of “operator error” that can be blamed on a negligent employee — although be assured, that is how management will be [[inclined]] to characterise it if given half a chance.

If this is right, it has profound consequences for how we who inhabit [[complex]], [[tightly-coupled]] systems should think about risk.

If you work in financial services, you ''do'' inhabit a complex, tightly-coupled system.

It seems inarguably right.

Yet you don’t hear many people talking about how to handle [[normal accidents]].
===[[Complex interaction]]s and [[tight coupling]]===
First, some definitions.
*'''[[Complex interaction]]s''': Perrow anticipates the later use of the concept of “[[complexity]]” — a topic which is beginning to infuse the advocacy part of this site — without the benefit of [[systems analysis]], which hadn’t really been invented when he was writing. He uses the term to describe interactions between non-adjacent sub-components of a system that were neither intended nor anticipated by the system’s designers. Complex interactions are not only unexpected but, for a period of time (which may be critical, if the interacting components are [[tightly coupled]]), ''incomprehensible''. This may be because the interactions cannot be seen, buried as they are under second-order control and safety systems, or even because they are not ''believed'': if your — ''wrong'' — theory of the game is that the risk in question is a [[ten sigma event]], expected only once in one hundred million years, you may have a hard time believing it could be happening in your fourth year of operation, as the partners of [[Long Term Capital Management]] could tell you. Here even [[epistemology]] is in play: interactions that were not in our basic conceptualisation of the world are not ones we can reasonably anticipate. These interactions were, by definition, not ''designed'' into the system; no one ''intended'' them. “They baffle us because we acted in terms of our own designs of a world that we expected to exist—but the world was different.”<ref>{{br|Normal Accidents}}, p. 75. Princeton University Press. Kindle Edition.</ref>
*'''[[Linear interaction]]s''': Contrast [[complex interaction]]s with the much more common “[[linear interaction]]s”, where parts of the system interact with components that precede or follow them in the system in ways that are expected and planned: “if ''this'', then ''that''”. In a well-designed system these will, of course, predominate: any decent system should mostly do what it is designed to do and not act erratically in normal operation. Some systems are more complex than others, but even the most linear systems are susceptible to some complexity — where they interact with the environment.<ref>Perrow characterises a “complex system” as one where ten percent of interactions are complex, and a “linear system” as one where less than one percent of interactions are complex. The greater the percentage of complex interactions in a system, the greater the potential for system accidents.</ref> Cutting back into the language of [[systems analysis]] for a moment, consider that [[linear interaction]]s are a ''feature'' of [[simple]] and [[complicated system]]s, and can be “pre-solved” and brute-force computed, at least in theory. They can be managed by [[algorithm]], or [[playbook]]. But [[complex interactions]], by definition, ''cannot'' — they are the interactions the [[algorithm]] ''didn’t expect''.
*'''[[Tight coupling]]''': [[Complex interactions]] are only a source of catastrophe if another condition is satisfied: that the unexpectedly-interacting components of the [[complex system]] are “tightly coupled” — processes happen fast, can’t be turned off, and failing components can’t be isolated. Perrow’s observation is that complex systems tend to be more tightly coupled than we realise, and we usually only find out the hard way. (The toy sketch after this list illustrates how the two conditions combine.)
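To make the two conditions concrete, here is a minimal toy simulation (not Perrow’s model: just an illustrative sketch with invented parameters) of a ring of components, each of which fails rarely and independently on any given day. The “coupling” parameter is the chance that a failed component drags down a neighbour before anyone can isolate it. With loose coupling, a bad day means a couple of broken parts; with tight coupling, the same reliable parts occasionally produce a system-wide cascade: a [[normal accident]].
<syntaxhighlight lang="python">
# Toy sketch only: the parameters (100 components, 0.1% daily failure rate)
# are illustrative assumptions, not figures from the book.
import random

def worst_day(n_components=100, p_fail=0.001, coupling=0.0, days=365, seed=1):
    """Worst single-day failure count over one simulated year.

    coupling: probability that a failed component knocks out each of its two
    neighbours before it can be isolated (0.1 = loose, 0.9 = tight).
    """
    rng = random.Random(seed)
    worst = 0
    for _ in range(days):
        # rare, independent component failures
        failed = {i for i in range(n_components) if rng.random() < p_fail}
        frontier = list(failed)
        # cascade: each failure may propagate to its adjacent components
        while frontier:
            i = frontier.pop()
            for j in ((i - 1) % n_components, (i + 1) % n_components):
                if j not in failed and rng.random() < coupling:
                    failed.add(j)
                    frontier.append(j)
        worst = max(worst, len(failed))
    return worst

print("loosely coupled, worst day:", worst_day(coupling=0.1))
print("tightly coupled, worst day:", worst_day(coupling=0.9))
</syntaxhighlight>
The point is not the numbers, which are made up, but the shape of the result: nothing about any individual component changes between the two runs; only the coupling does.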