Prisoner’s dilemma
Revision as of 13:35, 28 August 2019
Risk Anatomy™
Trusting is a risky strategy. Generally one side doesn’t survive. How can trust survive?
An exercise in calculating economic outcomes by means of metaphor, the prisoner’s dilemma was developed at the RAND Corporation in the 1950s by those splendid brainboxes as a way of predicting individuals’ behaviour in situations requiring trust among strangers - a very good example being when unacquainted participants buy or sell in an unregulated market. This field developed into game theory.
The original dilemma
Two people are charged with a conspiracy[1]. Each is held separately. They cannot communicate. There is enough evidence to convict both on a lesser charge, but not the main charge. Each prisoner is separately offered the same plea bargain. The offer is:
- If A informs on B but B refuses to inform on A:
- A will not be prosecuted at all and will go free
- B will be convicted of the main charge and will get 3 years in prison.
- If A informs on B and B informs on A:
- A will get 2 years in prison
- B will get 2 years in prison
- If A refuses to inform on B and B refuses to inform on A:
- A will get 1 year in prison (on the lesser charge).
- B will get 1 year in prison (on the lesser charge).
Pay-off table  | A cooperates   | A defects
B cooperates   | A gets 1 year  | A goes free
B defects      | A gets 3 years | A gets 2 years
Another way of looking at this, for those easily triggered, is as the eBayer’s dilemma.
Single round prisoner’s dilemma
If you play the game once, and in isolation — with someone you don’t know and whom you do not expect to meet again, the payoff is grim: those who cooperate will get reamed. Cooperation is a bad strategy. Your best interest is in defecting on the other guy, because his best interest is defecting on you.
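The dominance argument above can be checked mechanically. A minimal sketch, using the sentences from the plea bargain (the move labels "C" for staying silent and "D" for informing are standard game-theory shorthand, not from the text):

```python
# Pay-offs from the plea bargain, in years of prison (lower is better).
# Key: (my_move, their_move); "C" = stay silent (cooperate), "D" = inform (defect).
SENTENCE = {
    ("C", "C"): 1,  # both stay silent: the lesser charge
    ("C", "D"): 3,  # I stay silent, they inform: the main charge
    ("D", "C"): 0,  # I inform, they stay silent: I go free
    ("D", "D"): 2,  # both inform
}

def best_response(their_move):
    """My sentence-minimising move, given the other prisoner's move."""
    return min(("C", "D"), key=lambda mine: SENTENCE[(mine, their_move)])

# Whatever the other prisoner does, informing gives me a shorter sentence:
print(best_response("C"))  # "D" (0 years beats 1)
print(best_response("D"))  # "D" (2 years beats 3)
```

Because defecting is the best response to either move, it is a dominant strategy: that is why cooperation loses in a single, anonymous round.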
This looks like a bad outcome for commerce: if homo economicus should weasel on every deal, how can we have any faith in the market? How, come to think of it, has any kind of market ever got off the ground? Why would anyone take on a sure-fire losing bet? Is this the smoking gun that homo economicus doesn’t exist?[2]
Because trust, faith and confidence change everything. The single round prisoner’s dilemma stipulates there is no consequence for a bad actor who reneges. The defector is guaranteed to get away with it: these are the rules.
But in real life, one-off interactions with strangers — counterparts whom you are certain never to see again — are extremely rare, especially in our interconnected age. Business is the process of cultivating relationships. Establishing trust.
The game theorists found an easy way to replicate that concept of trust: run the same game again. Repeatedly. An indefinite number of times. This is the iterated prisoner’s dilemma.
Iterated prisoner’s dilemma
The same actors get to observe how each other act, and respond accordingly. If your counterpart defects, you have a means of retaliating: by defecting on the next game, or by refusing to play the game any more with that counterparty.
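The retaliation described above is the essence of the famous "tit for tat" strategy from the iterated-game literature (the strategy name is standard game theory, not from the text): cooperate first, then simply mirror whatever the counterparty did last round.

```python
# A minimal sketch of retaliation-by-mirroring ("tit for tat"):
# cooperate on the first round, then copy the counterparty's previous move,
# so a defection is answered with a defection in the next game.
def tit_for_tat(their_history):
    """Cooperate first; thereafter mirror the counterparty's last move."""
    return "C" if not their_history else their_history[-1]

# Against a counterparty who defects on round 3, we retaliate on round 4:
their_moves = ["C", "C", "D", "C"]
my_moves = [tit_for_tat(their_moves[:i]) for i in range(len(their_moves) + 1)]
print(my_moves)  # ['C', 'C', 'C', 'D', 'C']
```

A single defection is punished exactly once, after which cooperation can resume: the punishment is what makes the threat of retaliation credible.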
Now, as well as the short-term payoff, there is a longer-term payoff, and it dwarfs the short-term one. If I defect once, I earn £150. If I cooperate a thousand times, I earn £50,000. If I defect first time round, sure: I am £100 up, but at what cost? If my counterparty refuses to play with me again — and if she tells other players in the market — I will struggle to make much money. No one will trust me.
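The arithmetic above can be sketched directly. This assumes, as the figures imply, £50 per cooperative round and a one-off £150 for defecting on a cooperator, after which the counterparty refuses to play (the function name and the thousand-round horizon are illustrative):

```python
# Long-run arithmetic of the iterated game, under the assumption that a
# defection ends the relationship: the counterparty never plays again.
COOP_PAYOFF = 50     # £ per round, if both cooperate
DEFECT_PAYOFF = 150  # £, one-off, for defecting on a cooperator

def lifetime_earnings(defect_first, rounds=1000):
    """Total earnings over up to `rounds` games against a grudge-holding counterparty."""
    if defect_first:
        return DEFECT_PAYOFF  # £100 up on round one, then no one will trade with you
    return COOP_PAYOFF * rounds

print(lifetime_earnings(defect_first=True))   # 150
print(lifetime_earnings(defect_first=False))  # 50000
```

The £100 gained on the first round is swamped by the £49,850 forgone over the rest of a trading career, which is the whole point of the iterated game.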
See also
References
- ↑ Whether or not they are guilty is beside the point. If it helps you empathise with their predicament, assume they’re innocent.
- ↑ No. Homo economicus doesn’t exist, but this is not the reason why.