Normal accident

{{a|devil|{{subtable|{{complex capsule}}}}}}
*[[complex]] versus [[complicated]] versus [[simple]]
*[[Executive failure]] ↔ [[Operational failure]]

Complex systems present as “wicked problems”. They are dynamic, unbounded, incomplete, contradictory and constantly changing. They comprise an indefinite set of subcomponents that interact with each other and the environment in unexpected, non-linear ways. They are thus unpredictable, chaotic and “insoluble” — no algorithm can predict how they will behave in all circumstances. Probabilistic models may work passably well most of the time, but the times when statistical models fail may be exactly the times you really wish they didn’t, as Long Term Capital Management would tell you. Complex systems may comprise many other simple, complicated and indeed complex systems, but their interaction with each other will be a whole other thing. So while you may manage the simple and complicated sub-systems effectively with algorithms, checklists and playbooks — and may manage the whole system in normal times — you remain at risk of “tail events” in abnormal circumstances. You cannot eliminate this risk: accidents in complex systems are inevitable — hence “normal”, in Charles Perrow’s argot. However well you manage a complex system, it remains innately unpredictable.
