We are, thus, minding the shop not just for our children and grandchildren, but for generations unconceived — in every sense of the word — millennia hence. ''Thousands'' of millennia hence.
Perhaps to talk us down from our grandiosity, MacAskill spends some time remarking, rightly, on our contingency — that ''we'' happen to be the ones here to talk about the future is basically a fluke — but then neglects to appreciate that this contingency is by no means in our gift, ''nor does it now stop''.
It is as if MacAskill has got this perfectly backward. He talks about the present as if we are at some single crossroads: a one-time determining fork in the history of the planet where, by our present course of action, we can steer it conclusively this way or that, and as if we have the wherewithal (or even the necessary information) to understand all the dynamics — all the second, third, fourth ... nth-order consequences — needed to deliver a future. But this is absurd. Literally countless determining forks happen every day, everywhere. Most of them are entirely beyond our control. ''Some'' future is assured. Which one it will be is literally impossible to know. This uncertainty is a profoundly important engine of our non-zero-sum existence.
=== Expected value theory does not help ===
MacAskill uses probability theory (again: too many books, not enough common sense) and what financiers might call “linear interpolation” to deduce, from what has already happened in the world, a theory about what will happen, and what we should therefore do to accommodate the forthcoming throng. This is madness.
[[Probabilities]] are suitable for closed, bounded systems with a ''complete'' set of ''known'' outcomes. The probability of any given face when rolling a die is ⅙ because a die has six equal sides, is equally likely to land on any of them, must land on one, and no other outcome is possible. This is an artificial, tight, closed system. We can only calculate an expected value ''because'' of the dramatically constrained outcomes.
''Almost nothing in everyday life works like that''.<ref>Ironically, not even dice: even a carefully machined die will not have exactly even sides, and may fall off the table, or land crookedly, or fracture on landing!</ref> Probabilities work for [[finite game]]s. The future is [[Finite and Infinite Games|infinite]]: unbounded, ambiguous, incomplete; the range of possible outcomes is not known. ''You can’t calculate probabilities about it''.
{{Author|Gerd Gigerenzer}} would say it is a situation of ''uncertainty'', not ''risk''. Here, expectation theory is ''worthless''.
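The closed-system point can be made concrete with a toy calculation (an illustrative sketch, not from the book): an expected value is computable for a die precisely because every outcome, and its probability, can be enumerated in advance.

```python
# Expected value of a fair die: a closed, bounded system.
# All six outcomes are known, each with probability 1/6, and
# nothing outside this list can happen.
outcomes = [1, 2, 3, 4, 5, 6]
ev = sum(x * (1 / len(outcomes)) for x in outcomes)
print(ev)  # 3.5

# For an open-ended system — the deep future — neither the outcome
# set nor the probabilities can be enumerated, so there is no list
# to sum over and this calculation cannot even be written down.
```

The point of the sketch is that the arithmetic depends entirely on the finite, known list of outcomes; under Gigerenzer-style ''uncertainty'', that list does not exist.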
=== An infinity of possibilities ===
But over millions of years — “the average lifespan of a mammalian species,” MacAskill informs us — the gargantuan volume of chaotic interactions between the trillions of co-evolving organisms, mechanisms, systems and algorithms that comprise our hypercomplex ecosystem means literally ''anything'' could happen. There are ''squillions'' of possible futures. Each has its own unique set of putative inheritors. Don’t we owe them ''all'' a duty? Doesn’t action to promote the interests of ''one'' branch consign infinitely more to oblivion?
Who are we to play with such cosmic dice? With what criteria? By reference to whose morality? An uncomfortable regression, through storeys of turtles and elephants, beckons. This is just the sort of thing ethics professors like, of course.
For if the grand total of unborn interests down the pathway time’s arrow eventually takes drowns out the assembled present, then those interests, in turn, are drowned out by the collected interests of those down the literally infinite number of possible pathways time’s arrow ''doesn’t'' end up taking. Who are we to judge? How do we arbitrate between our possible futures, if not by reference to our own values? In that case, is this really “altruism” or just motivated selfish behaviour?
Causality may or may not be true, but still forward progress is [[non-linear]]. There is no “if-this-then-that” over five years, let alone fifty, let alone ''a million''. Each of these gazillion branching pathways is a possible future. Only one can come true. We don’t, and ''can’t'', know which one it will be.
If you take causal regularities for granted, then all you need to be wise ''in hindsight'' is enough data. In this story, the [[Causation|causal chain]] behind us is unbroken back to where records begin — the probability of an event happening when it has already happened is ''one hundred percent''; never mind that we’ve had to be quite imaginative in reconstructing it.
=== Brainboxes to the rescue ===
But ultimately it is MacAskill’s sub-[[Yuval Noah Harari|Harari]], wiser-than-thou, top-down moral counselling that grates: humanity needs to solve the problems of the future centrally, and this requires brainy people from the academy, like MacAskill, to do it. And though the solution might be at the great expense of all you mouth-breathing oxygen wasters out there, it is for the future’s good.