What We Owe The Future
They flutter behind you, your possible pasts:
Some bright-eyed and crazy,
Some frightened and lost.
A warning to anyone still in command
Of their possible future
To take care.
Roger Waters, “Your Possible Pasts”
Of lived and not-yet-lived experience
Per the second law of thermodynamics but pace Pink Floyd, there is but one possible past, one possible now, and an infinite array of possible futures stretching out into an unknown black void. Some short, some long, some dystopian, some enlightened. Some cut off by apocalypse, some fading gently into warm entropic soup.
William MacAskill’s rather Roman Catholic premise is this: barring near-term cataclysm, there are so many more people in our future than in our present that our duty of care to the sacred unborn swamps our interests in the here and now.
We are minding the shop not just for our children and grandchildren but for generations millennia hence. Thousands of millennia hence.
MacAskill does what financiers might call “linear extrapolation”: from what we know has happened, he deduces a theory about how we should discharge that duty to the unimagined horde.
Before wondering how beings not yet thought of can have priority over ones who are already here, the gating question that MacAskill glosses over is this: how do we even know who these putative beings will be, let alone which of their interests are worth protecting?
An infinity of possibilities
We can manufacture plausible stories about whence we came easily enough: that’s what scientists and historians do, though they have a hard time agreeing with each other. The future is a different story. No-one has the first clue. Alternative possibilities branch every which way.
Now, over a generation or two we have some prospect of anticipating who our progeny might be and what they might want. Darwin’s dangerous algorithm wires us, naturally, to do this.
But over millions of years — “the average lifespan of a mammalian species,” MacAskill informs us — the gargantuan number of chaotic interactions between the trillions of co-evolving organisms, mechanisms, systems and algorithms that make up our hypercomplex ecosystem means literally anything could happen. There are squillions of possible futures. Each has its own unique set of putative inheritors. Don’t we owe them all a duty?
What a conflict.
For if the grand total of unborn interests down the pathway time’s arrow eventually takes drowns out the assembled present, then those interests, in turn, are drowned out all the more definitively by the collected interests of those down the literally infinite number of possible pathways time’s arrow doesn’t end up taking. Who are we to judge?
Causality may or may not be true; either way, forward progress is non-linear. There is no “if-this-then-that” over five years, let alone fifty, let alone a million. Each of these gazillion branching pathways is a possible future. Only one can come true. We don’t, and can’t, know which one it will be.
And here is the rub: butterfly wings in the Amazon rainforest causing typhoons in Manila: anything and everything we do infinitesimally and ineffably alters the calculus, re-routing evolutionary design forks and making this outcome or that more likely. Decisions we make now that transpire to prefer one outcome surely disfavour an infinity of others. So isn’t anything we do on behalf of one possible future necessarily done at the expense of countless others?
If you take causal regularities for granted then all you need to be wise in hindsight is enough data. In this story, the causal chain behind us is unbroken back to where records begin — the probability of an event happening when it has already happened is one hundred percent; never mind that we’ve had to be quite imaginative in reconstructing it.
We don’t know.
Don’t all these possible futures deserve equal consideration? If yes, then anything we do will benefit some future, so there is nothing to see here. If no, how do we arbitrate between our possible futures, if not by reference to our own values? In that case is this really “altruism” or just motivated selfish behaviour?
On getting out more
William MacAskill is undoubtedly intelligent, widely read, and he applies his polymathic range to his million-year argument with some panache. But he is probably too well-read. You sense it would do him a world of good to put the books down and spend some time pulling pints or labouring on a building site, getting some education from the school of life.
Still, it took me a while to put my finger on what was so irritating about this book. To be sure, there’s a patronising glibness about it: it is positively jammed full of the sort of sophomore thought experiments (“imagine you had to live the life of every sentient being on the planet” kind of thing) that give philosophy undergraduates a bad name.
Indeed, MacAskill is barely out of undergraduate philosophy class himself. He hasn’t yet left the university. A thirty-something meta-ethics lecturer would strike most people (other than himself) as an unlikely source of cosmic advice for the planet’s distant future. So it proves.
Brainboxes to the rescue
But ultimately it is MacAskill’s sub-Harari, wiser-than-thou, top-down moral counselling that grates: humanity needs to solve the problems of the future centrally; this requires brainy people from the academy, like MacAskill, to do it. And though the solution might be at the great expense of all you mouth-breathing oxygen wasters out there, it is for the future’s good.
We should sacrifice you lot — birds in the hand — for our far-distant descendants — birds in a bush who may or may not be there in a million years.
Thanks — but no thanks.
Expected value theory and complex systems
It is not at all clear what anyone can do to influence the unknowably distant future — a meteor could wipe us out any time — but in any case expected value probability calculations sure aren’t going to help. Nor does MacAskill ever say why organisms who are around now should give the merest flying hoot for the race of pan-dimensional hyperbeings we will have evolved into by then.
Quick sidebar: probabilities are suitable for closed, bounded systems with a complete set of known outcomes. The probability of rolling a six is ⅙ because a die has six equal sides, is equally likely to land on any of them, must land on one of them, and no other outcome is possible. This is not how most things in life work. Probabilities work for finite games. The future is in no sense a finite game: it is unbounded, ambiguous and incomplete, and the range of possible outcomes is not known and may as well be infinite. You can’t calculate probabilities about it. Gerd Gigerenzer would say it is a situation of uncertainty, not risk. Expectation theory is worthless here.
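To make the sidebar concrete, here is a minimal Python sketch (my own illustration, nothing taken from the book) of why an expected value can be computed for a die roll, a closed game whose every outcome and probability is known, and cannot even be written down for an open-ended future.

```python
from fractions import Fraction

def expected_value(outcomes):
    """Expected value of a finite game: a complete list of (outcome, probability) pairs."""
    # A closed, bounded system: the listed outcomes are the only ones possible,
    # so their probabilities must sum to 1.
    assert sum(p for _, p in outcomes) == 1
    return sum(x * p for x, p in outcomes)

# Risk, in Gigerenzer's sense: a fair die is a closed, bounded game.
die = [(face, Fraction(1, 6)) for face in range(1, 7)]
print(expected_value(die))   # 7/2, i.e. 3.5

# Uncertainty: the deep future. The outcomes are not enumerable and their
# probabilities are not knowable, so there is nothing to hand the formula.
# deep_future = [(?, ?), ...]   # cannot even be written down
```

The point is not that the far-future numbers are hard to estimate; it is that the calculation presupposes a closed set of outcomes whose probabilities sum to one, and the future offers no such set.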
This demolishes MacAskill’s foundational premise — applied “expectation theory” is how he draws his conclusions about the plight of the Morlocks of our future — and is enough to trash the book’s thesis in toto.
Why stop with humans?
Does this self-sacrifice for the hereafter apply to non-sapient beasts, fish and fowls, too? Bushes and trees? If not, why not?
If present Homo sapiens really is such a hopeless case, who is to say it can redeem itself millennia into the future? What makes MacAskill think future us deserves the chance that present us is blowing so badly? Perhaps it would be better for everyone else — especially said saintly beasts, fish, fowls, bushes and trees — if we just winked out now?
The FTX connection
MacAskill’s loopy futurism appeals to the Silicon Valley demi-god types who have a weakness for Wagnerian psychodrama and glib, a priori sci-fi speculation.
Elon Musk is a fan. So, to MacAskill’s chagrin, is deluded crypto fantasist Sam Bankman-Fried. He seems to have “altruistically” given away a large portion of his investors’ money to the cause. I wonder what the expected value of that outcome was. You perhaps shouldn’t judge a book by the company it keeps on bookshelves, but still.
See the Long Now Foundation
If you want sensible and thoughtful writing about the planet and its long-term future, try Stewart Brand and Brian Eno and the good folk of the Long Now Foundation. Give this hokum the swerve.