Playbook

 

File:Lear.jpg
A playbook yesterday

A playbook is a comprehensive set of guidelines, policies, rules and fall-backs for the legal and credit terms of a contract that you can hand to the school-leaver in Bucharest to whom you have off-shored your master agreement negotiations. She will need it because, being a school-leaver from Bucharest, she won’t have the first clue about the subject matter of the negotiation, and will need to consult it to decide what to do should her counterparty object[1] to any of the preposterous terms your risk team has insisted go in the first draft of the contract.

Playbooks derive from a couple of mistaken beliefs: One, that a valuable business can be “solved” and run as an algorithm, not a heuristic;[2] and two, that, having been solved, it is a sensible allocation of resources to have a cheap and stupid human being run that process rather than a machine.[3]

In Thomas Kuhn’s argot[4] playbooks are “normal science”: They map out the discovered world. They contain no mysteries or conundrums. They represent tilled, tended, bounded, fenced, arable land. Boundaries have been set, tolerances limited, parameters fixed, risks codified and processes fully understood.

Playbooks are algorithms for the meatware: they maximise efficiency when operating within a fully understood environment. They are inhabited exclusively by known knowns. No playbook will ever say, “if the counterparty will not agree this, make a judgment about what you think is best.” All will say, “any deviations from this requirement must be approved by Litigation and at least one Credit officer of at least C3 rank.”
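Written out as code, a playbook is just a lookup table with a mandatory escalation branch. A minimal sketch in Python (the objections, fall-backs and approval ranks below are invented for illustration, not anyone’s actual playbook):

    # A playbook, rendered as the if/then lookup it really is. The rules
    # and thresholds are hypothetical.
    PLAYBOOK = {
        "delete cross-default": "fall back to a USD 10m threshold",
        "make the NAV trigger mutual": "refuse; offer a 30-day notice period instead",
    }

    def negotiate(objection):
        """Follow the playbook. Note the branch that is missing: there is
        no 'make a judgment about what you think is best'."""
        if objection in PLAYBOOK:
            return PLAYBOOK[objection]
        # Everything off-piste goes up the chain, by design.
        return "ESCALATE: Litigation plus a Credit officer of C3 rank or above"

Every objection the designers did not foresee lands in that last branch, which is where the escalations described below come from.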

As far as they go, playbooks speak to the belief that, as normal science, the only material risk lies in not complying with established rules: They are of a piece with the doctrine of precedent: when they run out of road, one must appeal to the help of a higher authority, by means of escalation to a control function, the idea being (in theory, if not in practice) that the control function will further develop the algorithm to deal with the new situation — stare decisis — and it will become part of the corpus and be fed back down into the playbook of established processes.[5] The algorithm operates entirely inside the organisation’s real risk tolerance boundaries. This is a good thing from a risk monitoring perspective, and is inevitable as a matter of organisational psychology — if in doubt, stick it in — but it all comes at the cost of efficiency. The escalations it guarantees are a profoundly wasteful use of scarce resources.
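The stare decisis loop, as the theory has it, sketched in the same spirit (the ControlFunction object and its ruling are stand-ins, not anyone’s real machinery):

    playbook = {}

    class ControlFunction:  # hypothetical higher authority
        def decide(self, novel_situation):
            return "bespoke carve-out"  # an ad hoc ruling

    def escalate(novel_situation, playbook, control_function):
        ruling = control_function.decide(novel_situation)  # appeal upward
        playbook[novel_situation] = ruling  # stare decisis: the precedent joins the corpus
        return ruling

    escalate("governing law to be Ruritanian", playbook, ControlFunction())
    # In practice, per the footnote, the update line is skipped: the ruling
    # stays an ad hoc exception, and the playbook stays soggy.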

In theory the control function will have its own playbook, and the “court of first instance” is as bound by that as the baseline process is by the basic playbook. There is an algorithm, a recipe, and the main ill that comes about is by not following it. Hence the existence of an internal audit function, which has two roles: (i) identifying the rule set, and (ii) seeking data as to compliance with it. It is a formal role only.

Note the behaviour this encourages: following an if/then logic structure requires no understanding of the underlying subject of the process (you don’t need to know how an internal combustion engine works to drive a car), and indeed such comprehension risks challenge to, or subversion of, that process: subject matter expertise might incline one to take a view on a formal, non-material issue. That might accelerate the particular item through the system, but at a cost to the integrity of the process.

Integrity of the process is everything in modern risk management dogma.

The other thing about subject matter experts is that they are expensive, which is also a cardinal sin in an industry where the highest calling is cost reduction. The ideal “process participant” costs nothing, follows instructions with perfect fidelity, doesn’t break down or make errors, and certainly doesn’t think about or question the process: that is, it is a computer. In the same way a machine doesn’t question its programme (it can’t), a process participant escalates within the process but doesn’t question it. The difference is that cantankerous human process participants can.

But therein lies the problem: if the process can be computerised, why hasn’t it been?

There is a paradox here, though, because getting the best outcome within the playbook’s parameters requires a degree of advocacy, inasmuch as the process participant faces the outside world (beyond the playbook’s control): one negotiates best when one understands one’s subject matter.

The portfolio risk engine ascribes the same value to any outcome as long as it conforms to the playbook. The principal measurement is cost (the lack of it), and then speed.

The theory is that we operationalise a negotiation process: we divide the labour into doers (“process participants”) and thinkers (“process designers”). Wherever there is a playbook, the demands of fidelity and economy require a deskilling, and a de-emphasis of subject matter expertise, among the process participants.

The same does not hold for the process designers. But — and here’s the thing: if we also operationalise the escalation process — and the dogma of internal audit and the bottom-line imperative see to it that we do — we wind up with a series of nested playbooks stretching up and across the organisation, and the real expertise (internal audit’s) becomes expertise in the operational parameters of the different layers and abstractions of operational playbook: reconciling them, testing them for consistency and compatibility, while in the meantime subject matter expertise — of the actual substantive content of the operation — has leaked out of the system altogether.
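What internal audit’s surviving role then amounts to, sketched (the layered rule sets are, again, made up):

    # Nested playbooks: each layer has its own rule set, and the surviving
    # "expertise" is checking the layers against one another.
    desk_playbook = {"delete cross-default": "fall back to a USD 10m threshold"}
    credit_playbook = {"delete cross-default": "fall back to a USD 25m threshold"}

    def audit(lower, higher):
        """Identify the rule set; seek data as to compliance with it.
        Here: flag the rules on which two layers of playbook disagree."""
        return [rule for rule in lower
                if rule in higher and lower[rule] != higher[rule]]

    print(audit(desk_playbook, credit_playbook))  # ['delete cross-default']

Note what the audit never examines: whether either fall-back is substantively any good. That knowledge has already leaked out.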

And are we even going to talk about the fact that the big shock risks that hit the systems are never ones that have previously been recognised, analysed and subjected to constant monitoring? Black swans gonna be black swans, yo.

See also

Process
Escalation
Control function

References

  1. As it will certainly do, to all of them.
  2. This is a bad idea. See Roger Martin’s The Design of Business: Why Design Thinking is the Next Competitive Advantage.
  3. Assumption two in fact falsifies assumption one. If it really is entirely mechanistic, there is absolutely no reason to have a human operating the process.
  4. The Structure of Scientific Revolutions. Brilliant book. Read it.
  5. This rarely happens in practice. Control functions make ad hoc exceptions to the process, do not build them into the playbook as standard rules, meaning that the playbook has a natural sogginess (and therefore inefficiency).