It is as if MacAskill has got this perfectly backward. He talks about the present as if we are at some single crossroads: a one-time determining fork in the history of the planet where, by our present course of action, we can steer it conclusively this way or that; as if we have the wherewithal (or even the necessary information) to understand all the dynamics, all the second, third, fourth ... nth-order consequences, and so deliver a future appropriate for the organisms we expect to be.


This is absurd. Literally countless determining forks happen every day, everywhere. Most of them are entirely beyond our control. ''Some'' future is assured. What it is, and who will enjoy it, is literally impossible to know. This [[uncertainty]] is a profoundly important engine of our non-zero-sum existence.  


=== Expected value theory does not help ===
MacAskill uses probability theory (again: too many books, not enough common sense) and what financiers might call “linear extrapolation” to deduce, from what has already happened in the world, a theory about what will happen, and what we should therefore do to accommodate the forthcoming throng. This is madness.  
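
For illustration only (invented numbers, a plain least-squares line, not MacAskill’s actual working), this is all the method amounts to: fit a line to what has already happened and push it out.

<syntaxhighlight lang="python">
# Toy illustration only: a least-squares line fitted to five invented observations,
# then pushed out 5, 50 and 500 periods beyond the data.
past_years = [0, 1, 2, 3, 4]
past_values = [10.0, 12.0, 13.5, 15.5, 17.0]

n = len(past_years)
mean_x = sum(past_years) / n
mean_y = sum(past_values) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(past_years, past_values))
         / sum((x - mean_x) ** 2 for x in past_years))
intercept = mean_y - slope * mean_x

for future_year in (5, 50, 500):
    print(future_year, round(intercept + slope * future_year, 1))
# The arithmetic is impeccable; the assumption that the line keeps going is the
# whole of the argument, and it is precisely the part the data cannot support.
</syntaxhighlight>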


[[Probabilities]] are suitable for closed, bounded systems with a ''complete'' set of ''known'' outcomes. The probability of rolling any given number is ⅙ because a die has six equal sides, is equally likely to land on any of them, must land on one, and no other outcome is possible. This is an artificial, tight, closed system. We can only calculate an expected value ''because'' of this artificially constrained set of outcomes. Probabilities only work for such [[finite game]]s.  


''Almost nothing in everyday life works like that''.<ref>Ironically, not even dice: even a carefully machined die will not have exactly even sides and may fall off the table, or land crookedly, or fracture on landing!</ref>


The future is [[Finite and Infinite Games|infinite]]: unbounded, ambiguous, incomplete; the range of possible outcomes is not known. ''You can’t calculate probabilities about it''.
 
It is a situation of ''[[doubt]]'', not ''risk''. Here, expectation theory is ''worthless''. This is a ''good thing.''  
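
For the avoidance of doubt, here is the standard textbook sum (ours, not MacAskill’s). The expected value of one roll of a fair die is

:<math>\mathbb{E}[X] = \sum_{i=1}^{6} i \cdot \tfrac{1}{6} = \tfrac{1+2+3+4+5+6}{6} = 3.5</math>

and it is only well defined because the six listed outcomes exhaust the possibilities and their probabilities add to one. Remove that guarantee, as any open-ended future does, and there is nothing left to sum over.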
===About that thought experiment===
MacAskill came to his thesis courtesy of the thought experiment mentioned above: imagine living the life of every being that has inhabited the planet from [[Mitochondrial Eve]] up to the present day. The exercise is meant to illustrate our own personal contingency and microscopic insignificance in the Grand Scheme. There are a paltry eight billion of us; ten times that have gone before, and a thousand times that are — if we don’t bugger everything up — yet to come.  


This is the human condition: despite our mortal insignificance, we are here; they are not. And here is MacAskill’s Big Idea: ''we'', lowly ants though we are, are disproportionately empowered to determine the future.


The idea chimes for a moment and then falls apart. For it is to see our ''present'' existence as no more than the task of cranking the ''right'' handle on the cosmic machine, to vouchsafe a calculable outcome for someone else. We are but set-builders, moving quietly about a dark theatre. As long as we do as bidden, on time, all will be well and performers will shine. Our role is barely worth a mention in the final credits.  


But we are not Sisyphus. We have our own [[lived experience]]s to think about. It does not follow, ''[[a priori]]'', that we are bound to practise forbearance for the sake of generations unimagined.  


Indeed, ''[[a priori]]'', it presents a [[paradox]]: for every step we take, the future keeps retreating. Who gets to be the players who shine upon our set? As each generation rolls around, won’t the dismal calculus that applied to us be just the same for them? Who gets to enjoy all this self-restraint? Isn’t each generation just as relatively unimportant as the last?


This idea of [[iteration]] should give a clue. The future depends not on a single collective decision a generation makes now, but on an impossibly complex array of micro-decisions, made by individuals and groups, every moment throughout space-time. The single-decision framing is as misconceived as [[Richard Dawkins]]’ idea that a fielder does, or even ''could'', functionally [[Epistemic priority|calculate differential equations to catch a ball]].
 
The thought experiment betrays an unflinchingly [[deterministic]] world-view: the universe is a clockwork machine to be set and configured. Take readings, perform calculations, twiddle dials, progress to the designated place, hold out your hand at the appointed time and the ball will drop into it.


We don’t imagine [[Richard Dawkins|Professor Dawkins]] was much good at [[cricket]].  


Set against the [[heuristic]], [[iterative]], provisional mode of behaving that characterises any evolving organism in an ecosystem, here it is, in a nutshell: the great distinction between [[reductionism]] and [[pragmatism]].
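
To labour the cricket point, here is a sketch of our own (assumed numbers; nobody’s published model, certainly not Dawkins’): one fielder solves the equations of motion, which needs launch data no fielder ever has; the other applies a gaze-heuristic-style rule, over and over, from wherever he happens to be standing.

<syntaxhighlight lang="python">
import math

G, DT = 9.81, 0.05            # gravity and time step: assumed, illustrative values

def landing_by_physics(speed, angle_deg):
    """Reductionist route: closed-form range of a drag-free projectile, v^2*sin(2a)/g.
    Needs the launch speed and angle, which no fielder ever knows."""
    return speed ** 2 * math.sin(2 * math.radians(angle_deg)) / G

def landing_by_heuristic(speed, angle_deg, fielder=100.0, stride=0.4):
    """Pragmatic route: no equations of motion, just one rule applied over and over.
    If the tangent of the ball's elevation angle, seen from where you now stand, is
    accelerating, the ball will clear you, so step back; if it is decelerating, it
    will drop short, so step in."""
    angle = math.radians(angle_deg)
    vx, vy = speed * math.cos(angle), speed * math.sin(angle)
    x = y = 0.0
    seen = []                                  # recent ball positions
    while y >= 0:
        seen.append((x, y))
        if len(seen) >= 3:
            tans = [h / max(fielder - d, 0.01) for d, h in seen[-3:]]
            optical_accel = tans[2] - 2 * tans[1] + tans[0]
            fielder += stride if optical_accel > 0 else -stride
        x += vx * DT
        vy -= G * DT
        y += vy * DT
    return x, fielder                          # where the ball lands, where the fielder ends up

print(round(landing_by_physics(30.0, 45.0), 1))   # ~91.7 m, given perfect launch data
ball, catcher = landing_by_heuristic(30.0, 45.0)
print(round(ball, 1), round(catcher, 1))          # the fielder finishes more or less where the ball does
</syntaxhighlight>

The second function knows nothing about gravity, launch speed or angle; it just keeps correcting, which is rather the point.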
=== An infinity of possibilities ===
We can manufacture plausible stories about whence we came easily enough: that’s what scientists and historians do, though they have a hard time agreeing with each other.  
 
But where we are ''going'' is a different matter. We don’t have the first clue. [[Evolution by natural selection|Evolution]] makes no predictions. Alternative possibilities branch every which way. Over a generation or two we have some dim prospect of anticipating who our progeny might be and what they might want. [[Darwin’s Dangerous Idea|Darwin’s dangerous algorithm]] wires us, naturally, to do this.
 
But over millions of years — “the average lifespan of a mammalian species”, we are confidently told — the sheer volume of chaotic interactions between the co-[[Evolve|evolving]] organisms, mechanisms, [[Systems theory|systems]] and [[Algorithm|algorithms]] that comprise our [[Complexity|hypercomplex]] ecosystem means literally ''anything'' could happen. There are ''squillions'' of possible futures. Each has its own unique set of putative inheritors.  
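
To put even a toy number on “squillions” (our arithmetic, not the book’s): suppose, absurdly conservatively, that each twenty-five-year generation throws up just two forks that matter. A million years is roughly 40,000 generations, so the tree of possible futures has on the order of

:<math>2^{40000} \approx 10^{12041}</math>

leaves, against roughly <math>10^{80}</math> atoms in the observable universe.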


How do we know to whom we owe a duty? What would that duty be? How on earth would we frame it? Don’t we owe them ''all'' a duty? Doesn’t action to promote the interests of ''one'' branch consign infinitely more to oblivion?


In any case, who are we to play such cosmic dice? With what criteria? By reference to whose morality — ours, or theirs? If they are anything like our children, they will be revolted by our values, but we can’t even begin to guess what their values will be. So, an uncomfortable regression, through storeys of turtles and elephants, beckons. This is just the sort of thing ethics professors like, of course.


For if those whose interests lie along the pathway time’s arrow eventually takes drown us out, then they, in turn, will be drowned out by those whose interests lie along the infinity of pathways time’s arrow ''doesn’t'' take. Who are we to judge? How do we arbitrate between our possible futures, if not by reference to our own values? In that case, is this really “altruism” or just motivated selfish behaviour?


Causality may or may not be true, but still forward progress is [[non-linear]]. There is no “if-this-then-that” over five years, let alone fifty, let alone ''a million''. Each of these gazillion branching pathways is a possible future. Only one can come true. We don’t, and ''can’t'', know which one it will be.  


And [[There’s the rub|here is the rub]]: like Amazonian [[Butterfly effect|butterflies]] causing typhoons in Manila, ''anything'' and ''everything'' ''anyone'' does infinitesimally and ineffably alters the calculus, re-routing evolutionary design forks and making this outcome or that more likely. Decisions that prefer one outcome surely disfavour an infinity of others.   
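
To see how quickly small differences swamp a forecast, here is the hoariest toy in the chaos cupboard (a standard demonstration, nothing to do with the book): the logistic map, run twice from starting points a billionth apart.

<syntaxhighlight lang="python">
# Standard demonstration of sensitive dependence on initial conditions: the logistic
# map x -> r*x*(1 - x) with r = 4, iterated from two starting points one billionth apart.
def trajectory(x0, r=4.0, steps=60):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.400000000)
b = trajectory(0.400000001)

for step in (10, 25, 40):
    print(step, round(abs(a[step] - b[step]), 6))
# The gap grows roughly like 2^n: invisible for a while, then, somewhere around
# thirty steps in, as large as the quantity being forecast.
</syntaxhighlight>

And that is the best-behaved, one-variable toy on the shelf; an ecosystem has rather more moving parts.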
 
If you take causal regularities for granted, all you need to be wise ''in hindsight'' is enough [[data]]. In this story, the [[Causation|causal chain]] behind us is unbroken back to where records begin — the probability of an event happening when it has already happened is ''one hundred percent''; never mind that we’ve had to be quite imaginative in reconstructing it.
 


===Brainboxes to the rescue===
But ultimately it is MacAskill’s sub-[[Yuval Noah Harari|Harari]], wiser-than-thou, top-down moral counselling that grates: humanity needs to solve the problems of the future centrally, and now (no-one in the past felt the need to solve ''our'' problems: what changed?). This requires brainy thirty-five-year-olds from the academy, like MacAskill, to do it. And though the solution might be at the great expense of all you mouth-breathing oxygen wasters out there, it is for the future’s good. So suck it up.


We should sacrifice you lot — birds in the hand — for our far-distant descendants — birds in a bush who may or may not be there in a million years.


Thanks — but no thanks.  
===Why stop with humans?===
Does this self-sacrifice for the hereafter apply to non-sapient beasts, fish and fowls, too? Bushes and trees? Invaders from Mars? If not, why not?

If present ''Homo sapiens'' really is such a hopelessly venal case, who is to say it can redeem itself millennia into the future? What makes MacAskill think ''future'' us deserves the chance that ''present'' us is blowing so badly? Perhaps it would be better for everyone else — especially said saintly beasts, fish, fowls, bushes and trees — if we just winked out now?
===The [[FTX]] connection===
MacAskill’s loopy futurism appeals to the Silicon Valley demi-god types who have a weakness for Wagnerian psychodrama and glib [[a priori]] [[simulation hypothesis|sci-fi futurism]].  


Elon Musk is a fan. So, to MacAskill’s chagrin, is deluded crypto fantasist [[Sam Bankman-Fried]]. He seems to have “altruistically” given away a large portion of his investors’ money to the cause. I wonder what the expected value of ''that'' outcome was. You shouldn’t judge a book by the company it keeps on bookshelves, but still.
===See the [[Long Now Foundation]]===
If you want sensible and thoughtful writing about the planet and its long-term future, try [[Stewart Brand]], Brian Eno and the good folk of the Long Now Foundation. Give this hokum the swerve.
{{Sa}}