Normal Accidents: Living with High-Risk Technologies

Cutting back into the language of [[systems analysis]] for a moment, consider this: [[linear interaction]]s are a feature of [[simple]] or [[complicated system]]s. They can be “solved” in advance by pre-configuration; they can, at least in theory, be [[brute-force]] computed; they can be managed by [[algorithm]]. [[Complex interactions]], by definition, can’t be: they are the interactions the [[algorithm]] ''didn’t expect''.
Accidents arising from unexpected non-linear interactions are “normal”, not in the sense of regular or expected, but in the sense that it is an inherent property of the system to have this kind of accident. Financial services [[risk manager]]s take note: you can’t solve for these kinds of accidents. You can’t prevent them. You have to have arrangements in place to deal with them. And these arrangements need to be designed to deal with the unexpected outputs of a ''[[complex]]'' system, not the predictable effects of a merely ''[[complicated]]'' one.
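To labour the point with a toy illustration (a hypothetical sketch, nothing from Perrow himself, with fault names invented for the purpose): a pre-configured handler can only answer the fault combinations its designers anticipated. Any combination outside that table is, by construction, an interaction the algorithm didn’t expect.
<syntaxhighlight lang="python">
# Hypothetical sketch only: a pre-configured ("linear") fault handler.
# Fault states and responses are invented for illustration.

ANTICIPATED = {
    frozenset({"valve_stuck"}): "switch to backup valve",
    frozenset({"pump_failure"}): "start standby pump",
    frozenset({"valve_stuck", "pump_failure"}): "shut the unit down",
}

def respond(observed_faults: set[str]) -> str:
    """Return the pre-configured response for an anticipated fault combination."""
    response = ANTICIPATED.get(frozenset(observed_faults))
    if response is None:
        # A complex interaction: a combination nobody pre-configured.
        # There is no algorithmic answer; judgment has to take over.
        raise LookupError(f"unanticipated interaction: {sorted(observed_faults)}")
    return response

print(respond({"valve_stuck"}))  # a linear interaction: handled by design

try:
    print(respond({"pump_failure", "sensor_miscalibrated"}))
except LookupError as err:
    print(err)  # the interaction the algorithm didn't expect
</syntaxhighlight>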


===Inadvertent complexity===
So far, so hoopy; but here’s the rub: we can make systems and processes more or less complex and, to an extent, reduce tight coupling by careful system design. But adding linear safety systems to a system ''increases'' its complexity, and makes dealing with complex interactions even harder. Not only do safety systems create potential accidents of their own, but they also afford a degree of false comfort that encourages managers (who typically have financial targets to meet, not safety ones) to run the system harder, thus increasing the coupling of unrelated components. Perrow catalogues the chain of events leading up to the partial meltdown at [[Three Mile Island]] as a case in point.
 
===“Operator error” is almost always the wrong answer===
Human beings are system components, so it is rash to blame a failure on a component that is constitutionally disposed to fail, even when it has not been put in a position, through system design or economic incentive, where failure is more or less inevitable. When it has been (a ship’s captain being expected to work a 48-hour watch, say), Perrow calls the resulting failures “forced operator errors”.
:''But again, “operator error” is an easy classification to make. What really is at stake is an inherently dangerous working situation where production must keep moving and risk-taking is the price of continued employment.''<ref>{{br|Normal Accidents}} p. 249.</ref>
If an operator’s role is simply to carry out a tricky but routine part of the system, then the inevitable march of technology makes failure ever more the fault of design than of personnel: humans, we know, are not good computers. They are good at figuring out what to do when something unexpected happens; at making decisions; at exercising judgment. But they — ''we'' — are ''lousy'' at doing repetitive tasks and following instructions. As ''The Six Million Dollar Man'' had it, ''we have the technology''. We should damn well use it.
If, on the other hand, the operator’s role is to manage ''complexity'' — to deal with the interactions the [[algorithm]] didn’t expect — then blaming the operator when one of those interactions gets through is to blame the only component in the system that had any chance of catching it.
 
Yet if you are facing
 


{{sa}}
*[[Complexity]]
{{ref}}