What We Owe The Future

We are, thus, minding the shop not just for our children and grandchildren, but for generations unconceived — in every sense of the word — millennia hence. ''Thousands'' of millennia hence.

Perhaps to talk us down from our grandiosity, MacAskill spends some time remarking, rightly, on our contingency — that ''we'' happen to be the ones here to talk about the future is basically a fluke — but then neglects to appreciate that this contingency is by no means in our gift, ''nor does it now stop''.

It is as if MacAskill has got this perfectly backward. He talks about the present as if we are at some single crossroads: a one-time determining fork in the history of the planet, where by our present course of action we can steer it conclusively this way or that; as if we had the wherewithal (or even the necessary information) to understand all the dynamics, all the second, third, fourth ... ''n''th-order consequences, and so deliver a future. But this is absurd. Literally countless determining forks happen every day, everywhere. Most of them are entirely beyond our control. ''Some'' future is assured. What it will be is literally impossible to know. This uncertainty is a profoundly important engine of our non-zero-sum existence.
 
What is our duty, though? What are their expectations?


=== Expected value theory does not help ===
MacAskill uses probability theory (again: too many books, not enough common sense) and what financiers might call “linear interpolation” to deduce, from what has already happened in the world, a theory about what will happen, and what we should therefore do to accommodate the forthcoming throng. This is madness.  
 
[[Probabilities]] are suitable for closed, bounded systems with a ''complete'' set of ''known'' outcomes. The probability of rolling a six is ⅙ because a die has six equal sides, is equally likely to land on any of them, must land on one, and no other outcome is possible. This is an artificial, tight, closed system. We can only calculate an expected value ''because'' of these dramatically constrained outcomes.
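
A toy sketch of the point (in Python; the example is ours, not MacAskill’s): the die’s [[expected value]] is computable only because every outcome, and its probability, can be enumerated in advance. For “the future” there is no such enumeration, so there is nothing to sum over.
<syntaxhighlight lang="python">
from fractions import Fraction

# A die is a closed, bounded system: six known outcomes, known probabilities.
die = {face: Fraction(1, 6) for face in range(1, 7)}

# The expectation is defined only because the outcome set is complete:
# the probabilities are known and sum to exactly one.
assert sum(die.values()) == 1
expected_value = sum(face * p for face, p in die.items())
print(expected_value)  # 7/2, i.e. 3.5

# An open system has no enumerable outcome set and no known probabilities,
# so the same calculation cannot even be started: the expectation is
# undefined, not merely unknown.
</syntaxhighlight>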
 
''Almost nothing in everyday life works like that''.<ref>Ironically, not even dice: even a carefully machined die will not have exactly even sides, and may fall off the table, land crookedly, or fracture on landing!</ref> Probabilities work for [[finite game]]s. The future is [[Finite and Infinite Games|infinite]]: unbounded, ambiguous, incomplete; the range of possible outcomes is not known and may as well be infinite. ''You can’t calculate probabilities about it''. {{Author|Gerd Gigerenzer}} would say it is a situation of ''uncertainty'', not ''risk''. Here, expectation theory is ''worthless''.
 
=== [[Expected value theory]] and [[complex systems]] ===
It is not at all clear what anyone can do to influence the unknowably distant future — a meteor could wipe us out any time — but in any case [[expected value]] probability calculations won’t help. Nor does MacAskill say ''why'' we organisms who are here ''now'' should give a flying hoot for the race of pan-dimensional hyperbeings we will have evolved into — or been eaten by — countless millennia into the future. 


Presumably our duty isn’t a function of simple lineage — that feels ''un''altruistic — but is a generally derived obligation to whatever living thing is, for the time being, here?

That expectation theory is worthless under uncertainty demolishes MacAskill’s foundational premise — applied “expectation theory” is how he draws his conclusions about the plight of the [[Morlock]]s of our future — and it is enough to trash the book’s thesis ''in toto''.

But the gating question he glosses over is this: how do we even know who these putative beings will be, let alone what their interests are, let alone which of them is worth protecting?

=== An infinity of possibilities ===
But over millions of years — “the average lifespan of a mammalian species,” MacAskill informs us — the gargantuan volume of chaotic interactions between the trillions of co-evolving organisms, mechanisms, systems and algorithms that comprise our hypercomplex ecosystem means literally ''anything'' could happen. There are ''squillions'' of possible futures. Each has its own unique set of putative inheritors. Don’t we owe them ''all'' a duty? Doesn’t action to promote the interests of ''one'' branch consign infinitely more to oblivion?
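
Just to gesture at the scale (our toy numbers, not MacAskill’s): grant a mere two binary forks a day, and the count of distinct paths outruns comprehension within a single century, never mind a million years.
<syntaxhighlight lang="python">
# Two independent yes/no forks per day, for one century:
forks = 2 * 365 * 100

# Each fork doubles the number of distinct branching paths.
paths = 2 ** forks

# The result is a number roughly 22,000 digits long -- and this is a
# comically conservative model of "everything that could branch".
print(len(str(paths)))
</syntaxhighlight>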

Who are we to play with such cosmic dice? With what criteria? By reference to whose morality? An uncomfortable regression, through storeys of turtles and elephants, beckons. This is just the sort of thing ethics professors like, of course.

For if the grand total of unborn interests down the pathway time’s arrow eventually takes drowns out the assembled present, then those interests, in turn, are drowned out by the collected interests of those down the literally infinite number of possible pathways time’s arrow ''doesn’t'' end up taking. Who are we to judge? How do we arbitrate between our possible futures, if not by reference to our own values? In that case, is this really “altruism” or just motivated selfish behaviour?
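
To put that in the longtermist’s own idiom (our notation, not MacAskill’s): an expected-value judgment across futures,
:<math>\mathbb{E}[W] = \sum_{f \in F} p(f)\,W(f)</math>
is well-defined only if the set of possible futures <math>F</math> can be enumerated and the probabilities <math>p(f)</math> are known and sum to one. Here neither condition holds, so the expectation is not merely hard to compute: it is undefined.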


Causality may or may not be true, but either way forward progress is [[non-linear]]. There is no “if-this-then-that” over five years, let alone fifty, let alone ''a million''. Each of these gazillion branching pathways is a possible future. Only one can come true. We don’t, and ''can’t'', know which one it will be.

If you take causal regularities for granted then all you need to be wise ''in hindsight'' is enough data. In this story, the [[Causation|causal chain]] behind us is unbroken back to where records begin —  the probability of an event happening when it has already happened is ''one hundred percent''; never mind that we’ve had to be quite imaginative in reconstructing it.  
Looking ''forward'', though, we ''don’t'' know.
===Brainboxes to the rescue===
But ultimately it is MacAskill’s sub-[[Yuval Noah Harari|Harari]], wiser-than-thou, top-down moral counselling that grates: humanity needs to solve the problems of the future centrally; this requires brainy people from the academy, like MacAskill, to do it. And though the solution might be at the great expense of all you mouth-breathing oxygen wasters out there, it is for the future’s good.