As far as they go, [[playbook]]s speak to the belief that, as [[normal science]], ''the only material [[risk]] lies in not complying with established rules''. They are of a piece with the [[doctrine of precedent]]: when they run out of road, one must appeal to a higher authority, by means of [[escalation]] to a [[control function]], the idea being (in theory, if not in practice) that the [[control function]] will further develop the [[algorithm]] to deal with the new situation — ''[[stare decisis]]'' — and it will become part of the corpus and be fed back down into the playbook of established [[process]]es.<ref>This rarely happens in practice. [[Control function]]s make ''[[ad hoc]]'' exceptions to the process and do not build them into the playbook as standard rules, meaning that the [[playbook]] has a natural sogginess (and therefore inefficiency).</ref> The [[algorithm]] operates entirely ''inside'' the organisation’s real {{tag|risk}} tolerance boundaries. This is a good thing from a risk-monitoring perspective, and is inevitable as a matter of organisational psychology — [[if in doubt, stick it in]] — but it comes at the cost of efficiency. The [[escalation]]s it guarantees are a profoundly [[waste]]ful use of scarce resources.
In theory the [[control function]] will have its own playbook, and the “court of first instance” is as bound by that as the baseline process is by the basic playbook. There is an [[algorithm]], a recipe, and the main ill comes about by not following it. Hence the existence of an [[internal audit]] function.
And are we even going to talk about the fact that the big shock risks that hit the systems are never ones that have previously been recognised, analysed and subjected to constant monitoring? [[Black swan]]s gonna be [[black swan]]s, yo.
{{seealso}}
*[[Process]]
*[[Escalation]]
*[[Control function]]