What We Owe The Future
To take care.
:—Roger Waters, ''Your Possible Pasts''}}
===Of lived and not-yet-lived experience===
Per the [[entropy|second law of thermodynamics]] but ''pace'' Pink Floyd, there is but ''one'' possible past, ''one'' possible now, and an infinite array of possible futures stretching out into an unknown black void. Some short, some long, some dystopian, some enlightened. Some cut off by apocalypse, some fading gently into warm [[Entropy|entropic]] soup.


William MacAskill’s premise is this: barring near-term cataclysm, there are so many more people in our future than in the present, that our duty of care to this horde of sacred unborn swamps any concern for the here and now. If this feels a bit Roman Catholic, remember that Catholics require at least conception before rights arise. Thus it feels more like abstract denial: a kind of manifesto for Neo-Calvinism.


Anyhow: we are minding the shop not just for our children and grandchildren but for generations unconceived — in every sense of the word — millennia hence. ''Thousands'' of millennia hence.


=== An infinity of possibilities ===
We can manufacture plausible stories about whence we came easily enough: that’s what scientists and historians do, though they have a hard time agreeing with each other. Where we are going, on the other hand, is a different matter. We don’t have the first clue. Evolution makes no predictions. Alternative possibilities branch every which way. The forward possibilities of a game as simple as [[chess]] become incalculable, even with ENIAC, within five moves. Organic life is quite a lot more complicated than that.


So, over a generation or two we have some dim prospect of anticipating who our progeny might be and what they might want. [[Darwin’s Dangerous Idea|Darwin’s dangerous algorithm]] wires us, naturally, to do this.


But over millions of years — “the average lifespan of a mammalian species,” MacAskill informs us — the gargantuan volume of chaotic interactions between the trillions of co-evolving organisms, mechanisms, systems and algorithms that comprise our hypercomplex ecosystem means literally ''anything'' could happen. There are ''squillions'' of possible futures. Each has its own unique set of putative inheritors. Don’t we owe them ''all'' a duty? Doesn’t action to promote the interests of ''one'' branch consign infinitely more to oblivion?


Who are we to play with such cosmic dice? With what criteria? By reference to whose morality? An uncomfortable regression through storeys of turtles and elephants beckons. This is just the sort of thing ethics professors like, of course.


For if the grand total of unborn interests down the pathway time’s arrow eventually takes drowns out the assembled present, then those interests, in turn, are drowned out by the collected interests of those down the literally infinite number of possible pathways time’s arrow ''doesn’t'' end up taking. Who are we to judge?
Causality may or may not be true, but still forward progress is [[non-linear]]. There is no “if-this-then-that” over five years, let alone fifty, let alone ''a million''. Each of these gazillion branching pathways is a possible future. Only one can come true. We don’t, and ''can’t'', know which one it will be.


And [[There’s the rub|here is the rub]]: Amazonian [[Butterfly effect|butterflies]] causing typhoons in Manila: ''anything'' and ''everything'' we do infinitesimally and ineffably alters the calculus, re-routing evolutionary design forks and making this outcome or that more likely. Decisions that prefer one outcome surely disfavour an infinity of others.


If you take causal regularities for granted then all you need to be wise ''in hindsight'' is enough data. In this story, the [[Causation|causal chain]] behind us is unbroken back to where records begin — the probability of an event happening when it has already happened is ''one hundred percent''; never mind that we’ve had to be quite imaginative in reconstructing it.


We ''don’t'' know.  


Don’t ''all'' these possible futures deserve equal consideration? If yes, then ''anything'' we do will benefit some future, so there is nothing to see here. If ''no'', how do we arbitrate between our possible futures, if not by reference to our own values? In that case is this really “altruism” or just motivated selfish behaviour?  
===On getting out more===
[[William MacAskill]] is undoubtedly intelligent, widely-read, and he applies his polymathic range to his million-year-old argument with some panache. But he is probably ''too'' well-read. You sense it would do him a world of good to put the books down and spend some time pulling pints or labouring on a building site, getting some education from the school of life.
Still, it took me a while to put my finger on what was so irritating about this book. To be sure, there’s a patronising glibness about it: it is positively jammed full of the sort of sophomore thought experiments (“imagine you had to live the life of every sentient being on the planet” kind of thing) that give [[philosophy]] undergraduates a bad name.
Indeed, MacAskill is barely out of undergraduate [[philosophy]] class himself. He hasn’t yet left the university. A thirty-something meta-ethics lecturer would strike most people (other than himself) as an unlikely source of cosmic advice for the planet’s distant future. So it proves.
===Brainboxes to the rescue===
But ultimately it is MacAskill’s sub-[[Yuval Noah Harari|Harari]], wiser-than-thou, top-down moral counselling that grates: humanity needs to solve the problems of the future centrally; this requires brainy people from the academy, like MacAskill, to do it. And though the solution might be at the great expense of all you mouth-breathing oxygen wasters out there, it is for the future’s good.