Agency problem

{{a|design|
[[File:Secret agent problem.png|450px|thumb|center|A special agent’s problem, yesterday]]}}The [[agency problem]] addresses the intrinsic [[conflict of interest]] any [[agent]] working on a [[commission]] faces — any [[introducing broker]], [[broker/dealer]], [[asset manager]], [[architect]], [[contractor]], [[chief executive officer]], [[employee]] — and that is that as long as it gets its [[commission]], it doesn’t really care a hill of beans what its [[principal]] gets, however much it might protest to the contrary. In a sense, this is a basic articulation of the [[prisoner’s dilemma]] and so shouldn’t surprise anyone — and ''should'' be cured by repeat interactions — your clients have memories and will remember when you ripped them off.  


But the [[iterated prisoner’s dilemma]] has a couple of natural limits.  
 
One is that it relies on repeated interactions with an indeterminate end — the promise of another opportunity, on another day, to clip your ticket. When the sky is falling on your head, it looks like a final interaction, and the calculus is different.  
 
Second, it takes no account of [[convexity]] effects. I can build up my reputation incrementally with thousands of small transactions — I can ''look'' like a five-star collaborator — only to blow it on one big position and defect. I can sell ten thousand ballpoint pens in utter good faith and welch the one time I sell a Ferrari. When that one outsized reward more than compensates for all the thousands of pennies in front of the steamroller, the normal rules don’t apply and an iterated game of prisoner’s dilemma becomes a one-off. This is what {{author|Nassim Nicholas Taleb}} calls the “Rubin Trade”.
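The pens-and-Ferrari arithmetic can be sketched in a few lines. The figures below are invented for illustration — the point is only that one convex payoff can outweigh the entire remaining stream of honest pennies, collapsing the iterated game into a one-shot:

```python
# Toy sketch (invented numbers) of the "Rubin trade" arithmetic: an agent
# earns a small payoff per cooperative round, but a single outsized payoff
# from defecting can outweigh every remaining honest round.

def cooperation_value(per_round_payoff, remaining_rounds):
    """Total value of cooperating honestly for all remaining small transactions."""
    return per_round_payoff * remaining_rounds

def defection_pays(per_round_payoff, remaining_rounds, one_off_payoff):
    """True when one big defection beats the whole stream of honest pennies."""
    return one_off_payoff > cooperation_value(per_round_payoff, remaining_rounds)

# Ten thousand ballpoint pens at a penny of profit each (amounts in pence)...
pens = cooperation_value(1, 10_000)
# ...against welching once on a Ferrari sale worth, say, £50,000.
print(pens, defection_pays(1, 10_000, 5_000_000))
```

Played with these numbers, the “reputation” stream is worth 10,000 pence and the one-off is worth five million: the normal iterated logic never gets a look-in.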
 
Thus, the agency problem is the classic “[[skin in the game]]” problem: an agent gets paid, no matter what. The [[investment manager]] puts no capital up, takes a small slice of ''yours'', by way of a fee, whatever its performance. Nice work if you can get it. A lot of people in the City can get it.
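To make that asymmetry concrete, here is a back-of-envelope sketch with invented figures (a hypothetical £100m fund on a 2% management fee): the fee comes off the top whatever happens, so only the principal’s outcome moves with performance.

```python
# Hypothetical numbers illustrating the "skin in the game" asymmetry:
# a management fee is charged on assets, not performance, so the agent
# is paid whether the principal wins or loses.

def manager_fee(aum, fee_rate):
    """The agent's take: a slice of the assets, regardless of performance."""
    return aum * fee_rate

def investor_outcome(aum, gross_return, fee_rate):
    """The principal's end-of-year position, after the fee comes off first."""
    return aum * (1 + gross_return) - manager_fee(aum, fee_rate)

aum = 100_000_000  # a hypothetical £100m fund, 2% management fee

good_year = investor_outcome(aum, 0.10, 0.02)   # fund up 10%: investor nets ~£108m
bad_year = investor_outcome(aum, -0.10, 0.02)   # fund down 10%: investor nets ~£88m
print(manager_fee(aum, 0.02), good_year, bad_year)
```

In both scenarios the manager collects roughly £2m; only the investor’s line changes sign.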


===The agency problem and [[corporate personality]]===
Theory: the “legal revolution” theorists — academics, [[GC]]s, [[COO]]s and [[thought leader]]s generally — make the [[category error]] of assuming the interests of client ''[[corporation]]s'' drive the market. This aligns with legal theory: a corporation is a person and has its own [[Legal personality|personality]], interests and desires. But the corporation as a [[res legis]] — a [[legally significant thing|''legal'' thing]] — is only a [[Res cogitans|''thinking'' thing]] through the agency of its representatives, each of whom is a [[res cogitans|thinking thing]] in her own right. This tension, between the overriding life goals of an [[agent]] and those of {{sex|her}} [[principal]], is the crux of the agency problem. They do align — but ''only so far''.


The critical difference between human and [[corporation]] is that ''a [[corporation]] cannot speak for itself''. A ''human'' principal, being a thinking, animate thing, can apprehend the [[conflicts of interest]] of which {{sex|he}} may be a casualty, and police them. A pile of papers filed at Companies House cannot. It can only ''crowd-source'' the defence of its own interests to its “friends” who ''are'' animate, but who have interests of their own. It can seek to nullify any ''one'' agent’s conflicting interest by asking the aggregated weight of its ''other'' agents to represent it against that one agent in a kind of “wisdom of crowds” way — their ''individual'' interests disappearing through some kind of phase-cancellation effect to which their ''common'' interest — furthering the interest of their mutual principal, the [[corporation]] — is immune. This works as long as the self-interests of each of the other agents ''do'' cancel each other out: if all the agents have a ''common'' self-interest which conflicts with the corporation’s interests, this crowdsourcing strategy won’t work.


So do all its agents have such a common conflicting interest? ''Yes''.  