What We Owe The Future

{{a|book review|{{image|what we owe the future|jpg|}}}}{{quote|
All mass movements deprecate the present by depicting it as a mean preliminary to a glorious future: a doormat on the threshold of the millennium.
:—Eric Hoffer, {{br|The True Believer: Thoughts on the Nature of Mass Movements}} 1951}}


===On getting out more===
[[William MacAskill]] is undoubtedly intelligent, widely-read — perhaps ''too'' widely-read — and he applies his polymathic range to ''What We Owe The Future'' with some panache.  


So it took me a while to put my finger on what was so irritating about his book. To be sure, there’s a glibness about it: it is jammed full of the sophomore thought experiments (“imagine you had to live the life of every sentient being on the planet” kind of thing) that give [[philosophy]] undergraduates a bad name.


Indeed, MacAskill, a thirty-something ethics lecturer who has divided his adult life between Oxford and Cambridge universities, is barely out of undergraduate [[philosophy]] class himself. You sense it would do him a world of good to put the books down and get some education from the school of life: pulling pints, waiting tables or labouring.


===Of lived and not-yet-lived experience===
William MacAskill’s premise is this: barring near-term cataclysm, there are so many more people in our future than in the present that our duty of care to this horde of sacred unborn swamps any concern for the here and now. We must do what we can to avoid that cataclysm, and vouchsafe [[the future]]’s — well — ''future''.


We are, thus, minding the shop not just for our children and grandchildren, but for generations not yet conceived — in any sense of the word — millennia hence. ''Thousands'' of millennia hence.


Perhaps to talk us down from our grandiosity, MacAskill spends some time remarking, rightly, on our contingency — that ''we'' happen to be the ones here to talk about the future is basically a fluke — but then neglects to appreciate that this contingency is by no means in our gift, and ''nor does it now stop''.


For, per the [[entropy|second law of thermodynamics]] — ''pace'' the ’Floyd — there is just ''one'' possible past, ''one'' possible now, and an ''infinite'' array of possible futures. They stretch out ''in front of us'' into an unknown black void. Some are short, some long, some dystopian, some enlightened. Some will be cut off by the apocalypse. Some will fade gently into warm [[Entropy|entropic]] soup.


It is as if MacAskill has got this perfectly backward. He talks about the present as if we are at some single crossroads: a one-time determining fork in the history of the planet where, by our present course of action, we can steer it conclusively this way or that; as if we have the wherewithal (or even the necessary information) to understand all the dynamics, all the second, third, fourth ... nth-order consequences, and so deliver a future appropriate for the organisms we expect to be. MacAskill appears under the illusion that [[Butterfly effect|we Amazonian butterflies have the gift to avert future Filipino hurricanes]]. 


This is absurd. Literally countless determining forks happen every day, everywhere. Most of them are entirely beyond our control. ''Some'' future is assured. What it is, and who will enjoy it, is literally impossible to know. This [[uncertainty]] is a profoundly important engine of our non-zero-sum existence.  


=== “Expected value” theory does not help ===
MacAskill uses [[probability]] theory (again: too many books, not enough common sense) and what financiers might call “linear interpolation” to deduce, from what has already happened in the world, a theory about what will happen, and what we should therefore do to accommodate the forthcoming throng.


But [[probabilities]] are suitable for closed, bounded systems with a ''complete'' set of ''known'' outcomes. The probability of rolling, say, a six is ⅙ because a die has six equal sides, is equally likely to land on any of them, must land on one, and no other outcome is possible. This is an artificial, tight, closed system. We can only calculate an [[expected value]] ''because'' of this artificially constrained set of outcomes. Probabilities only work for such [[finite game]]s.
 
''Almost nothing in everyday life works like that''.<ref>Ironically, not even dice: even a carefully machined die will not have exactly even sides and may fall off the table, or land crookedly, or fracture on landing!</ref>
 
The future is [[Finite and Infinite Games|infinite]]: unbounded, ambiguous, incomplete; the range of possible outcomes is not known. ''You can’t calculate probabilities about it''.
 
It is a situation of ''[[doubt]]'', not ''risk''. Here, expectation theory is ''worthless''. This is a ''good thing.''
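 
To make the point concrete, here is a minimal sketch (in Python, purely for illustration; it is nothing from the book) of why an [[expected value]] only gets off the ground in a closed system: you need every outcome enumerated, with a probability attached, before you start. Once the outcome set is open-ended, there is nothing to sum over.
<syntaxhighlight lang="python">
from fractions import Fraction

# A fair die is a closed, bounded system: six known outcomes,
# each with a known probability, and nothing else can happen.
outcomes = [1, 2, 3, 4, 5, 6]
p = Fraction(1, 6)

expected_value = sum(p * x for x in outcomes)
print(expected_value)  # 7/2, i.e. 3.5

# The sum presupposes the complete outcome set. For an open,
# unbounded "game" such as the future, there is no such list to
# iterate over, so the machinery stalls before it starts.
</syntaxhighlight>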
===About that thought experiment===
MacAskill came to his thesis courtesy of the thought experiment mentioned above: imagine living the life of every being that has inhabited the planet from Mitochondrial Eve up to the present day. The exercise is meant to illustrate our own personal contingency and microscopic insignificance in the Grand Scheme. There are a paltry eight billion of us; ten times that have gone before, and a thousand times that are — if we don’t bugger everything up — yet to come.
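 
Taking those round numbers at face value (eight billion now, ten times that gone before, a thousand times that still to come), the arithmetic doing the work is a one-liner. This is a back-of-the-envelope sketch, not a calculation from the book:
<syntaxhighlight lang="python">
# Rough headcounts, using only the round multiples quoted above.
# Illustrative only: the real estimates are anybody's guess.
present = 8_000_000_000        # people alive today
past = 10 * present            # "ten times that have gone before"
future = 1_000 * present       # "a thousand times that ... yet to come"

# On a purely aggregative tally, the living are outnumbered
# a thousand to one by people who do not yet exist.
print(future / present)            # 1000.0
print(future / (present + past))   # about 91: still a landslide
</syntaxhighlight>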
 
This is the human condition: despite our mortal insignificance we are here, they are not. This is MacAskill’s Big Idea: ''we'', lowly ants though we are, are disproportionately empowered to determine ''their'' future.
 
The idea chimes for a moment and then falls apart. For it is to see our ''present'' existence as no more than the task of cranking the ''right'' handle on the cosmic machine, to vouchsafe a calculable outcome for someone else. We are but set-builders, moving quietly about a dark theatre. As long as we do as bidden, on time, all will be well and performers will shine. Our role is barely worth a mention in the final credits.
 
But we are not Sisyphus. We have our own [[lived experience]]s to think about. It does not follow, ''[[a priori]]'', that we are bound to practise forbearance for the sake of generations unimagined.
 
Indeed, ''[[a priori]]'', it presents a [[paradox]]: for every step we take, the future keeps retreating. Who gets to be the players who shine upon our set? As each generation rolls around, won’t the dismal calculus that applied to us be just the same for them? Who gets to enjoy all this self-restraint? Isn’t each generation, relatively, just as unimportant as the last?
 
This idea of [[iteration]] should give a clue: the future does not depend on one collective decision a generation makes now, but upon an impossibly complex array of micro-decisions, made by individuals and groups, every moment throughout [[space-time]]. 
 
This is as misconceived as [[Richard Dawkins]]’ idea that a fielder does, or even ''could'', functionally [[Epistemic priority|calculate differential equations to catch a ball]]. The thought experiment betrays an unflinchingly [[deterministic]] world-view: the universe is a clockwork machine to be set and configured. Take readings, perform calculations, twiddle dials, progress to the designated place, hold out your hand at the appointed time and the ball will drop into it.
 
We don’t imagine [[Richard Dawkins|Professor Dawkins]] was much good at [[cricket]].
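 
For what it is worth, here is the sort of clockwork calculation that world-view presupposes. It is a sketch under heroically idealised assumptions (a drag-free, spin-free ball and a fielder armed with instruments), not a claim about how anyone actually fields:
<syntaxhighlight lang="python">
import math

def landing_point(speed, angle_deg, g=9.81):
    """Where an idealised, drag-free ball returns to launch height.

    Plain Newtonian point-mass ballistics: range = v^2 * sin(2*theta) / g.
    Real cricket balls swing and dip; real fielders do not solve this.
    """
    theta = math.radians(angle_deg)
    time_of_flight = 2 * speed * math.sin(theta) / g
    horizontal_range = speed ** 2 * math.sin(2 * theta) / g
    return horizontal_range, time_of_flight

# Take readings, perform calculations, proceed to the designated
# spot, hold out your hand at the appointed time...
distance, t = landing_point(speed=25.0, angle_deg=40.0)
print(f"stand {distance:.1f} m away; the ball arrives in {t:.1f} s")
</syntaxhighlight>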
 
Here it is, in a nutshell: the great distinction between [[reductionism]] and [[pragmatism]].
=== An infinity of possibilities ===
We can manufacture plausible stories about whence we came easily enough: that’s what scientists and historians do, though they have a hard time agreeing with each other.
 
But where we are ''going'' is a different matter. We don’t have the first clue. [[Evolution by natural selection|Evolution]] makes no predictions. Alternative possibilities branch every which way. Over a generation or two we have some dim prospect of anticipating who our progeny might be and what they might want. [[Darwin’s Dangerous Idea|Darwin’s dangerous algorithm]] wires us, naturally, to do this.
 
But over millions of years — “the average lifespan of a mammalian species”, we are confidently told — the sheer volume of chaotic interactions between the co-[[Evolve|evolving]] organisms, mechanisms, [[Systems theory|systems]] and [[Algorithm|algorithms]] that comprise our [[Complexity|hypercomplex]] ecosystem means literally ''anything'' could happen. There are ''squillions'' of possible futures. Each has its own unique set of putative inheritors.
 
So how do we know ''to whom'' we owe a duty? What would that duty be? How on earth would we frame it? Don’t we owe them ''all'' a duty? Doesn’t action to promote the interests of ''one'' branch consign infinitely more to oblivion?
 
In any case, who are we to play such cosmic dice? With what criteria? By reference to whose morality — ours, or theirs? If they are anything like our children, they will be revolted by our values, but we can’t even begin to guess what their values will be. So, an uncomfortable regression, through storeys of turtles and elephants, beckons. This is just the sort of thing ethics professors like, of course.
 
For if those whose interests lie along the pathway time’s arrow eventually takes drown us out, then they, in turn, will be drowned out by those whose interests lie along the infinity of pathways time’s arrow ''doesn’t'' take. Who are we to judge? How do we arbitrate between our possible futures, if not by reference to our own values? In that case, is this really “altruism” or just motivated selfish behaviour?
 
Causality may or may not be true, but still forward progress is [[non-linear]]. There is no “if-this-then-that” over five years, let alone fifty, let alone ''a million''. Each of these gazillion branching pathways is a possible future. Only one can come true. We don’t, and ''can’t'', know which one it will be.
 
And [[There’s the rub|here is the rub]]: like Amazonian [[Butterfly effect|butterflies]] causing typhoons in Manila, ''anything'' and ''everything'' ''anyone'' does infinitesimally and ineffably alters the calculus, re-routing evolutionary design forks and making this outcome or that more likely. Decisions that prefer one outcome surely disfavour an infinity of others. 
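 
The butterfly point is easy to demonstrate with a toy. The logistic map (a textbook chaotic system, offered here only as an analogy) shows how two starting points a billionth apart end up bearing no relation to one another within a few dozen iterations:
<syntaxhighlight lang="python">
def logistic(x, r=4.0):
    # The logistic map at r = 4 exhibits sensitive dependence
    # on initial conditions: nearby trajectories diverge fast.
    return r * x * (1 - x)

a, b = 0.200000000, 0.200000001   # differ by one part in a billion
for step in range(1, 51):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step:2d}: {a:.6f} vs {b:.6f}")

# By roughly step 40 the two trajectories are doing entirely
# different things: an infinitesimal nudge at the start has
# re-routed the whole future of the system.
</syntaxhighlight>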
 
If you take causal regularities for granted, all you need to be wise in hindsight is ''enough [[data]]''. In this story, the [[Causation|causal chain]] behind us is unbroken back to where records begin — the probability of an event happening when it has already happened is ''one hundred percent''; never mind that we’ve had to be quite imaginative in reconstructing it.
 
Another thing: does this self-sacrifice for the hereafter apply to non-sapient beasts, fish and fowls, too? Bushes and trees? Invaders from Mars? If not, why not?
 
If present ''Homo sapiens'' really is such a hopelessly venal case, who is to say it can redeem itself millennia into the future? What makes MacAskill think future us deserves the chance that present us is blowing so badly? Perhaps it would be better for everyone else — especially said saintly beasts, fish, fowls, bushes and trees — if we just winked out of existence now.
===Brainboxes to the rescue===
But ultimately it is MacAskill’s sub-Harari, wiser-than-thou, top-down moral counselling that grates: humanity needs to solve the problems of the future centrally, and ''now''.
 
This requires brainy thirty-five-year-olds from the academy, like MacAskill, to do it. And though the solution might come at great expense to all you mouth-breathing oxygen wasters out there, it is for the future’s good. So suck it up.
 
But no-one in the past felt the need to solve ''our'' problems: what changed?
 
Should we really sacrifice you lot — ugly though you may be, you made it here, so you’re birds in the hand — for our far-distant descendants — birds in a bush — who may or may not be there in a million years?
 
Thanks — but no thanks.
===The [[FTX]] connection===
MacAskill’s loopy futurism appeals to the Silicon Valley demi-god types who have a weakness for Wagnerian psychodrama and glib [[a priori]] [[simulation hypothesis|sci-fi futurism]].
 
Elon Musk is a fan. So, to MacAskill’s chagrin, is deluded crypto fantasist [[Sam Bankman-Fried]]. He seems to have “altruistically” given away a large portion of his investors’ money to the cause. I wonder what the expected value of ''that'' outcome was. You shouldn’t judge a book by the company it keeps on bookshelves, but still.
===See the Long Now Foundation===
If you want sensible and thoughtful writing about the planet and its long term future, try [[Stewart Brand]], Brian Eno and the good folk of the Long Now Foundation. Give this hokum the swerve.
{{Sa}}
*[[Utopia]]
*[[The future]]
*[[Expected value]]
*[[Effective altruism]]
*{{br|Finite and Infinite Games}}
*[[Simulation hypothesis]]
*[[Gerd Gigerenzer]]
*{{Br|The Clock of the Long Now}}
{{c2|Futurism|Systems theory}}
{{ref}}
