The Field Guide to Human Error Investigations

The Jolly Contrarian’s book review service™


Of a piece with Charles Perrow’s Normal Accidents, Sidney Dekker’s book is compelling in rooting the cause of accidents in poor system design, unnecessary complexity and the overlaid safety features and compliance measures that only make the problem worse: that is, at the door of management, and not the poor benighted subject matter experts who are expected to make sense of the Rube Goldberg machine management expects them to operate.

There are two ways of looking at system accidents:

  • It’s the meatware: Complex (and complicated) systems would be fine were it not for the meatware screwing things up. Human error is the main contributor to most system accidents, introducing unexpected failures into an essentially robust mechanism. Here, system design and management direction and oversight are otherwise effective strategies that are let down by unreliable human operators.
  • It’s the system: Accidents are an inevitable by-product of operators doing the best they can within complex systems that contain unpredictable vulnerabilities, where risks shift and change over time and priorities are unclear, conflicting and variable. Here, human operators are let down by shortcomings in system design and conflicting management pressure.

Blame the meatware

Those investigating accidents are motivated in ways which will favour the “meatware” theory:

  • Resource constraints: the wish for a simple narrative leading to a quick and inexpensive means of remediation
  • Failure reaction: an instinctive reaction that since an accident has happened there must have been a failure
  • Hindsight bias: the benefit of hindsight, in that (i) the accident has happened, (ii) the causal chain is now clear (thanks to the simple narrative), and (iii) there has been a failure.
  • Personal responsibility: In an ironic self-hack, those who are paid to be safety practitioners assume great responsibility, and take some pride in doing so, notwithstanding that they don’t have the power or authority (what Daniel Pink would call “mastery” and “autonomy”) to effectively discharge that responsibility. If they are motivated, as a matter of pride, to fall on their own swords, personal culpability is an easy conclusion for an accident investigator to draw.
  • Political imperative: the “best” outcome for the organisation — and its management — is that there is no deep, systemic root cause requiring extensive recalibration of the system's fundamental architecture (much less criticism of management for its design or supervision), and that superficial action (rooting out/remediating “bad apples”) is all that is needed.

Recommendations thus tend to double down on the system design, further constraining employees’ ability to manage situations effectively and setting them up for blame should future accidents occur:

  • Tightening procedures: Further removing autonomy from employees, obliging them to follow even more detailed instructions, rules and policies
  • Introducing safety mechanisms: Adding more fail-safes, warning lights, system breakers and other mechanical second-order derivatives designed to eliminate meatware screw-ups but which actually further complicate the system and bury opportunities to directly observe its operation
  • Downgrading employees: removing subject matter experts and replacing them with lower-calibre (i.e., cheaper) employees, who have even less autonomy and must follow the even more complicated rules and processes now introduced.

But to blame the meatware is to ignore history and be condemned to repeat it. Changing the make-up of your operational workforce won’t make much difference if you leave unaddressed the basic conditions under which it is obliged to operate. Just adding more, increasingly detailed, policies (“codified over-reactions to situations that are unlikely to happen again”, in Jason Fried’s elegant words[1]) will only widen the gap between theory and practice.

There is an often-stated but still wildly optimistic idea that all policies are complied with. Not only are they not, they are disregarded explicitly: everyone concerned understands that optimal, or even basically acceptable, performance requires turning a blind eye to the rules. There is no better example than the work-to-rule: a form of industrial action adopted by those who are, by regulation, not permitted to go out on strike. The work-to-rule involves, literally, insisting on rigorous compliance with every aspect of every prescribed policy as a means of frustrating the commercial objectives of the organisation.

What does it mean? It means that if people don’t want to or cannot go on strike they say to one another: “let’s follow all the rules for a change!” Systems come to a grinding halt. Gridlock is the result. Follow the letter of the law, and the work will not get done. It is as good as, or better than, going on strike.

Sidney Dekker, The Field Guide to Human Error Investigations

The vibe is: “Oh, I see, Mr. Employer, is that it? Are we being dicks about our employment relationship? Well, two can play at that game.”

See also

References