What We Owe The Future
They flutter behind you, your possible pasts:
Some bright-eyed and crazy,
Some frightened and lost.
A warning to anyone still in command
Of their possible future
To take care.
—Roger Waters, “Your Possible Pasts”
Be careful what you wish for
Per the second law of thermodynamics but pace Pink Floyd, there is but one possible past, one possible now, and an infinite array of possible futures stretching out into an unknown black void. Some short, some long, some dystopian, some enlightened. Some cut off by apocalypse, some fading gently into warm entropic soup.
William MacAskill’s premise is this: barring near-term cataclysm, there are so many more people in our species’ future than in its present that our duty of care to those yet to come swamps our own short-term interests.
We are minding the shop not just for our children and grandchildren but for generations millennia hence. Thousands of millennia hence, in deep time.
But here is his first logical lacuna. The causal chain behind us stretches unbroken back to where records begin. Behind us, the probability of each successive step is 1. As the saying goes, it is easy to be wise in hindsight.
But in front of us, alternate possibilities branch into the infinite. Over a generation or two it is easy enough to anticipate and provide for our own progeny. Darwin’s dangerous algorithm naturally wires us to do this.
But over a million years — the average lifespan of a mammalian species, MacAskill informs us — the gargantuan mass of non-linear interactions between trillions of co-evolving organisms in our hypercomplex ecosystem means our possible future is so uncertain and disparate that we cannot possibly predict the effect of our actions today.
Causality may or may not be true, but forward progress in an open ecosystem of independent agents is non-linear. There is no “if-this-then-that” over five years, let alone fifty, let alone five million.
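To put rough numbers on that branching, here is a toy sketch of my own (the figures are invented purely for scale; nothing like this appears in the book):

```python
# Toy model, not MacAskill's: invented numbers, purely to show scale.
# Suppose each ~25-year generation faces just ten coarse, mutually
# exclusive "states of the world" it could hand on to the next.
branches_per_generation = 10
generations = 1_000_000 // 25   # a million years of 25-year generations

# Distinct paths through the tree grow as branches ** generations:
# 10 ** 40_000 possible futures, against roughly 10 ** 80 atoms in
# the observable universe. "If-this-then-that" has no purchase here.
paths = branches_per_generation ** generations
print(f"about 10^{generations} possible futures")
```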
This concern for generations unborn does feel a bit Catholic. Should we be playing Gods when we are sitting in their lap?
An infinity of possibilities
MacAskill proceeds as if he knows who our future generations are. But that is to confuse hindsight (easy) with foresight (hard). We don’t know. A decision we make now to prefer one outcome surely disfavours an infinity of others.
Don’t all these possible futures deserve equal consideration? If yes, then anything we do will benefit some future, so there is nothing to see here. If no, how do we arbitrate between our possible futures, if not by reference to our own values? In that case is this really “altruism” or just motivated selfish behaviour?
On getting out more
William MacAskill is undoubtedly intelligent and widely read, and he applies his polymathic range to his million-year argument with some panache. But he is probably too well-read. You sense it would do him a world of good to put the books down and spend some time pulling pints or labouring on a building site, getting some education from the school of life.
Still, it took me a while to put my finger on what was so irritating about this book. To be sure, there’s a patronising glibness about it: it is positively jammed full of the sort of sophomore thought experiments (“imagine you had to live the life of every sentient being on the planet” kind of thing) that give philosophy undergraduates a bad name.
Indeed, MacAskill is barely out of undergraduate philosophy class himself. He hasn’t yet left the university. A thirty-something meta-ethics lecturer would strike most people (other than himself) as an unlikely source of cosmic advice for the planet’s distant future. So it proves.
But ultimately it is MacAskill’s sub-Harari, wiser-than-thou, top-down moral counselling that grates: humanity needs to solve the problems of the future centrally; this requires brainy people from the academy, like MacAskill, to do it. And though the solution might be at the great expense of all you mouth-breathing oxygen wasters out there, it is for the future’s good.
We should sacrifice you lot — birds in the hand — for our far-distant descendants — birds in a bush who may or may not be there in a million years.
Thanks — but no thanks.
It is not at all clear what anyone can do to influence the unknowably distant future — a meteor could wipe us out any time — but in any case expected-value calculations sure aren’t going to help. Nor does MacAskill ever say why organisms who are around now should give the merest flying hoot for the race of pan-dimensional hyperbeings we will have evolved into by then.
Quick sidebar: Probabilities are suitable for closed, bounded systems with a complete set of known outcomes. The probability of rolling a six is ⅙ because a die has six equal sides, is equally likely to land on any one of them, and must land on one of them: no other outcome is possible. This is not how most things in life work. Probabilities work for finite games. The future is in no sense a finite game. It is unbounded, ambiguous and incomplete; the range of possible outcomes is not known and may as well be infinite. You can’t calculate probabilities about it. Gerd Gigerenzer would say it is a situation of uncertainty, not risk. Expectation theory is worthless.
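To make the sidebar concrete, a minimal sketch in Python (my own illustration; the book contains no code): an expected value is only well defined when the outcome set is complete and its probabilities sum to one.

```python
from fractions import Fraction

# A die is a closed, bounded system: six known, equally likely outcomes.
die = {face: Fraction(1, 6) for face in range(1, 7)}

# What makes "expected value" meaningful at all: the outcome set is
# complete, so the probabilities sum to exactly 1.
assert sum(die.values()) == 1

expected_value = sum(face * p for face, p in die.items())
print(expected_value)  # 7/2

# Under uncertainty (Gigerenzer's sense) there is no enumerable outcome
# set, so neither the dictionary nor the sum above can even be written.
```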
This demolishes MacAskill’s foundational premise — applied “expectation theory” is how he draws his conclusions about the plight of the Morlocks of our future — and is enough to trash the book’s thesis in toto.
Why stop with humans?
Does this self-sacrifice for the hereafter apply to non-sapient beasts, fish and fowl, too? Bushes and trees? If not, why not?
If present Homo sapiens really is such a hopeless case, who is to say it can redeem itself millennia into the future? What makes MacAskill think future us deserves the chance that present us is blowing so badly? Perhaps it would be better for everyone else — especially said saintly beasts, fish, fowl, bushes and trees — if we just winked out now?
The FTX connection
MacAskill’s loopy futurism appeals to the Silicon Valley demi-god types who have a weakness for Wagnerian psychodrama and glib a priori sci-fi futurism.
Elon Musk is a fan. So, to MacAskill’s chagrin, is deluded crypto fantasist Sam Bankman-Fried. He seems to have “altruistically” given away a large portion of his investors’ money to the cause. I wonder what the expected value of that outcome was. You perhaps shouldn’t judge a book by the company it keeps on bookshelves, but still.
See the Long Now Foundation
If you want sensible and thoughtful writing about the planet and its long-term future, try Stewart Brand, Brian Eno and the good folk of the Long Now Foundation. Give this hokum the swerve.
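See also
*Simulation hypothesis
*Gerd Gigerenzer
*The Clock of the Long Now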