Fluke: Chance, Chaos and Why Everything We Do Matters
The Jolly Contrarian turns cultural critic
So remember, when you’re feeling very small and insecure—
How amazingly unlikely is your birth.
And pray that there’s intelligent life
Somewhere up in space, because
There’s bugger all down here on Earth.
— “Galaxy Song”, Monty Python’s The Meaning of Life, 1983
A big “but”
I have been grappling for some time with what is so troubling about the modern techno-synthesis — End of History millenarianism, truth, data modernism, determinism and the oddly illiberal fix these logical ideas put us in — but have struggled to put my finger on it.
In Fluke: Chance, Chaos and Why Everything We Do Matters, “disillusioned social scientist” Brian Klaas helps. But not as he means to.
This is a silly book. It makes a great show of being interesting, but then steers off a cliff in its final chapter, which presents as a big “but”. As G.R.R. Martin puts it, “Everything before the word ‘but’ is horseshit.”
You might think I am being unfair or uninformed in my opinion. So I might — it would hardly be the first time — but even Professor Klaas would say it is not my fault: I can’t help it. Indeed, he would be obliged to concede that this review, exactly as it appears, has been coming since before he wrote his book. It has its origins, as does his silly book, in the dawn of history.
The great divide
In modern academic discourse, there are two strands of thought: the icy rationalism of the STEM disciplines and the post-modernist mumbo jumbo of the humanities. This is, of course, an outrageous generalisation, but it also, kind of, isn’t.
Occasionally, shots ring out across the aisle: mendacious physicists fox humourless sociology journals into publishing PoMo-tinged gobbledegook, while philosophers and historians ruffle biologists’ feathers by pointing out that their discipline cannot be as rational as they would have us believe.
For all that, there is buyer’s remorse on either side: a clique of humanities academics, uncomfortable with Post-modernism’s reductio ad absurdum that nothing means anything, have organised themselves into a rationalist stance, huddled around the theory of evolution. Prominent among them are the linguist Steven Pinker and the late philosopher of mind Daniel Dennett, but their high priest — if you’ll forgive the expression — is biologist Richard Dawkins. This group is outspoken in its criticism of religion, relativism and other ostensibly magical accounts of human ingenuity.
It was Professor Dennett who did the most to integrate evolutionary concepts into the humanities, sheeting them back to information theory by means of the algorithm — the “universal acid” that explains everything. Evolution is profoundly algorithmic — that is Darwin’s Dangerous Idea, in a nutshell — and so, of course, are Turing machines. So why not brains?
At the other end of the STEM spectrum, information theorists and cyberneticists strive to apply their learning to human systems. They work backwards towards the meatware, noting the calculable “Shannon entropy” of a given string of symbols which, after all, is how humans communicate.
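And “Shannon entropy” really is calculable. A minimal Python sketch, offered purely as an illustration of the standard formula (nothing here comes from Klaas’s book):

```python
from collections import Counter
from math import log2

def shannon_entropy(text: str) -> float:
    """Average information, in bits per symbol, of a string's empirical distribution."""
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * log2(n / total) for n in counts.values())

# A perfectly regular string carries no surprise at all ...
shannon_entropy("aaaaaaaa")    # 0.0 bits per symbol
# ... while a string using two symbols equally often carries one bit per symbol.
shannon_entropy("abababab")    # 1.0 bits per symbol
```

The point, for present purposes, is only that a fixed string of symbols has a number attached to it: the calculation is mechanical, whatever the string means.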
Now, computation is a bounded, mathematical, calculable, zero-sum endeavour. It is fully causal. It is intensely deterministic. Therein lies the rugged beauty of the Turing machine: fast, cheap and utterly predictable. It doesn’t make mistakes. It doesn’t pick up the wrong end of the stick. It can’t. It is obliged to follow a determined causal chain of reactions. All steps can be reverse-engineered. This makes it quite a neat nomological machine.
This compels a worldview where everything is directly, bi-directionally causal: assuming a given operation, a machine state n' can be computed by reference to machine state n that preceded it, and vice versa. One can journey up and down this chain of reactions indefinitely without losing fidelity or risking a different outcome.
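That bi-directional chain is easy to mimic in code. A toy sketch — not a real Turing machine, just an invertible update rule of my own choosing — shows states replaying losslessly in either direction:

```python
M = 2**32                 # toy machine: states are 32-bit integers
A, B = 3, 1               # an arbitrary affine update rule (my choice, for illustration)
A_INV = pow(A, -1, M)     # modular inverse of A, so every step can be undone

def forward(state: int) -> int:
    """State n -> state n+1: fully determined, no room for surprise."""
    return (A * state + B) % M

def backward(state: int) -> int:
    """State n+1 -> state n: the causal chain replays in reverse, losslessly."""
    return (A_INV * (state - B)) % M

s = 12345
for _ in range(1000):
    s = forward(s)        # journey up the chain of reactions ...
for _ in range(1000):
    s = backward(s)       # ... and back down again
# s is once more exactly 12345: full fidelity in both directions
```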
It would be great — some think — if we could quantise and optimise human intelligence the way we optimise computing. But while we understand pretty well how Turing machines work[1] we don’t understand, very well, how brains do.
Professor Dennett was not the first to see in Turing machines a framework for understanding human consciousness, but he pushed the idea harder and more successfully than anyone before him.
But this is not so much to anthropomorphise machines as to robomorphise humans. For a Turing machine is a weak metaphor for human intelligence. Humans are not like Turing machines: as George Gilder drily noted, that is why we build them. Humans are inconstant, slow, easily bored and they take up space. They are hopeless in the environments in which machines excel. But in the chaotic, unbounded, inchoate complex environments in which humans excel, humans are unparalleled. It is precisely our flexibility, imagination and freedom from causal constraints that give us this edge. Humans can imagine. Machines cannot.
There is a flip side: as a result, humans are inconstant. They can “misinterpret”. They can pick up the wrong end of the stick.
Fluke
This is the category error Brian Klaas eventually makes in Fluke, having spent three-quarters of his text making a great show of avoiding it. There is much to agree with in that first section, but it transpires Klaas has laboriously been taking up the wrong end of the stick. Because all computer operations are provably causal — garbage in, garbage out — and because computers superficially resemble brains, the temptation is to infer that humans are unavoidably causal too. This Oolon Colluphid-style puff of logic prioritises causality. It rules out God and the mischievous supernatural, but also the principle of free will. Hence, if it is true, the inevitability of this review. I didn’t have a choice.
Though the universe seems disinclined to random irregularities — though it conforms, after a fashion, to our cosmological models — our sample size is minuscule in a spacetime of incomprehensible proportions. The data upon which we found our assumption of universal causation is, to all intents and purposes, nil. And it is vulnerable to our own evolutionarily-conditioned selection bias. What you see is all there is. We have evolved to truck in regularities.
So this is all a bit wishful. For those who don’t find it desolate.
Still, Fluke starts off brightly. There is good discussion of path dependence, contingency and convergence. Our existence here is the product of a colossal sequence of flukes. Things would not have had to be very different at any point in our evolution (ding) for us not to be here at all.
This seems a glib passing observation, but Klaas hammers it hard. Before long we see why. Klaas turns to chaos theory. Amazonian butterflies. The lesson of chaos theory is not that a butterfly’s wingflap caused the Ottoman collapse, but that, for all we know, a butterfly’s wingflap could make a difference. That is borne out by the marvellous contingencies Klaas illustrates, but as crystallised histories they are no longer emblematic of chaos. They are now — to the extent we recorded them at all — fixed historical data. Strings of symbols. They are potential information, but they require narrative-bearing, pattern-matching beings to run over them to produce meaning. That is not a binary symbol-processing operation, but an imaginative one. It too is contingent, willed, a possibility from a vast plurality of meanings. We fox ourselves into the story that it is true. That is the pattern we draw on the side of the Texan barn.
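The wingflap point, that a vanishingly small difference could matter, is the standard demonstration of chaos. A sketch using the logistic map — the textbook chaotic system, my example rather than anything from Klaas:

```python
def logistic(x: float) -> float:
    """One step of the logistic map at r = 4, a classic chaotic system."""
    return 4.0 * x * (1.0 - x)

a, b = 0.2, 0.2 + 1e-10   # two histories differing by one wingflap
gap = 0.0
for _ in range(60):
    a, b = logistic(a), logistic(b)
    gap = max(gap, abs(a - b))
# the gap grows from 1e-10 to order one: the prediction horizon is soon exhausted
```

The rule is perfectly deterministic; prediction fails anyway, because any error in the starting point, however tiny, compounds until it swamps the forecast.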
But at least with the past there is data, however jury-rigged and inadequate, for us to draw over our fancy causal chains and declare them sacred. The same is not true of the future. Here our uncertainty is aleatory, not epistemic. In a complex system, filled with intentional agents, our predictions are useless. They are useless because intentional agents have autonomy.
Klaas even states this outright: his social science colleagues, he says, were systematically unable to predict known outcomes given all available data. This is a neat experiment: it shows the lack of practical difference between unknown events in the past and unknown events in the future. Klaas impliedly draws the opposite conclusion to the one I do: rather than showing that historical data is just as flaky as future unknowns, Klaas concludes that the future is just as set in stone as the past.
He forces himself to confront that hoariest of freshman metaphysics problems: free will. Greater ships than his have foundered on this reef, so you wonder whether this is wise.
Being a diligent logical causalist, Klaas sees but two alternatives: either there is free will, science is worthless and the universe is random, or the causal principle holds, the cosmos flies by calculable wire, and our inability to predict it is down to our own inadequacy, but in any case there is no free will.
The thing is, on their face, both are plainly preposterous positions. Professor Klaas has boxed himself in with a silly stage one a priori thought experiment.
Klaas seems to take the Amazonian butterfly metaphor as describing how intricate, and how determinate, causal histories are. In fact it is the foundation of chaos theory, whose lesson is the opposite: however deterministic we think things are, causal chains are in practice impossible to predict.
His dilemma, then, runs: either there is free will, in which case science doesn’t work and the causal principle breaks; or there is determinism, in which case everything is caused, there is no scope for free will and we are all just automata. And since the world is too ordered and predictable for the former, it must be the latter. This misunderstands the lesson of evolution, a subject he also teaches: things that seem ordered and intentional do not have to be.
The “garden of forking paths”, which gives Klaas’s blog its title, is all about the contingency and unpredictability of complex systems, so it is baffling that he still arrives at the conclusion that everything is pre-ordained. Even if it were: if we lack the practical capability to predict what will happen (with chaotic systems we clearly do) and if it seems that we have autonomy (it clearly does), then what is the point of saying it is all predetermined? How does that even help us?
- ↑ It seems we are less certain how large language models work, but could probably understand their output in hindsight.