From Bacteria to Bach and Back Again

From The Jolly Contrarian
The Jolly Contrarian’s book review service™



From Bacteria to Bach and Back Again
Daniel Dennett

On how to philosophise with a hammer

Truth cannot be out there — cannot exist independently of the human mind — because sentences cannot so exist, or be out there. The world is out there, but descriptions of the world are not. Only descriptions of the world can be true or false. The world on its own, unaided by the describing activities of humans, cannot.

- Richard Rorty, Contingency, Irony, and Solidarity

Daniel Dennett has a knack for a pithy aphorism. He writes technical philosophy in clear, lively prose which invites engagement from enthusiastic amateurs like me. He is best known for 1995’s Darwin’s Dangerous Idea, an exposition of natural selection. Dennett’s insight was to present evolution as an algorithm:

“ … the idea that this Intelligence could be broken into bits so tiny and stupid that they didn’t count as intelligence at all, and then distributed through space and time in a gigantic, connected network of algorithmic process.”

Darwin’s idea is dangerous because it works on anything that habitually replicates itself with occasional variations: not just organisms. Evolution is, says Dennett, like a “universal acid”, apt to dissolve knotty intellectual conundrums wherever they arise.

Conundrums don’t come knottier than those of classical metaphysics, and Daniel Dennett has spent the 22 years since Darwin’s Dangerous Idea trying to dissolve them with his bottle of universal acid. First, it was free will (in 2003’s Freedom Evolves). Then God (in 2006’s Breaking the Spell). And now, in the newly published From Bacteria to Bach and Back Again, Dennett returns to mind, a problem he declared settled as long ago as 1991, in Consciousness Explained.

A one-line summary of each of these books would be: “You’re thinking about it the wrong way. It’s evolution. Everything else is an illusion”.

Now, depending on how you feel about metaphysics, this will strike you as either a great relief or tremendously unsatisfying. In any case, a philosopher once possessed of a broad range has grown ever more tunnel-visioned: evolution, and only evolution, explains everything.

To a man with a hammer, everything looks like a nail.

***

Where something replicates itself with occasional, random, minor variations, those variations best suited to the prevailing environment will fare best. Being replication machines themselves, these variants will in turn tend to produce optimally fit offspring. Thanks to nature’s unsentimental elimination game, suboptimal iterations won’t last. Each species will tend to optimality or extinction.

What counts as “suitability”, though, you must measure after the fact, by reference to the particular variation and the then-prevailing environment. Evolution is good at explaining what just happened, but hopeless at predicting what will happen next. It is a means of post-facto rationalisation, in other words: the fact that an organism has survived in an environment is evidence of its fitness, and not the other way around. This is survivor bias on a cosmic scale.
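For the mechanically minded, here is how literally one can take the algorithmic framing. What follows is my own toy sketch in Python, not anything from Dennett’s book: the names TARGET, fitness and replicate are mine, and the “environment” is just an arbitrary bit-string. It illustrates the point above: replication with occasional random variation, unsentimental elimination, and a “fitness” that is only ever measured after the fact, against whatever the environment happens to be.

 import random

 # Toy illustration only (not from the book): bit-string "replicators" are
 # copied with occasional random variation, then culled by a fitness test
 # defined purely by the prevailing "environment", here an arbitrary target.
 TARGET = "10110110"

 def fitness(genome: str) -> int:
     # Suitability can only be measured against the current environment.
     return sum(1 for g, t in zip(genome, TARGET) if g == t)

 def replicate(genome: str, mutation_rate: float = 0.05) -> str:
     # Copying with occasional, random, minor variation.
     return "".join(
         bit if random.random() > mutation_rate else random.choice("01")
         for bit in genome
     )

 # A random starting population of fifty replicators.
 population = ["".join(random.choice("01") for _ in range(len(TARGET)))
               for _ in range(50)]

 for generation in range(100):
     # Nature's unsentimental elimination game: the better-suited half survive...
     survivors = sorted(population, key=fitness, reverse=True)[:25]
     # ...and replicate, imperfectly, to refill the population.
     population = [replicate(random.choice(survivors)) for _ in range(50)]

 print(max(population, key=fitness))  # tends towards the target, eventually

Run it and the population drifts towards the target; change TARGET halfway through and yesterday’s optimum becomes today’s casualty, which is the whole point about post-facto rationalisation.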

The environment in which an organism’s fitness must be judged includes, of course, the organism itself. So, to some extent, it is judge in its own cause. The more like-minded organisms there are in your environment, the better you will fare. You can stack the deck. Majority rules, by a strange loop.

Strange loops call to mind Douglas Hofstadter’s brilliant but daunting epistemology, set out in I Am a Strange Loop and Gödel, Escher, Bach: An Eternal Golden Braid, which Dennett is surely name-checking with his own title. Hofstadter’s specialism is artificial intelligence. He wonders whether consciousness arose out of the “strange” feedback loop created when it became an adaptive benefit for humans to apprehend themselves as discrete items in their own ontologies. The camera began looking at itself: I am a strange loop because “I” is a strange loop.

This would have been a rich line of enquiry. But, despite the hat-tip to Hofstadter, the recursivity of consciousness barely comes up in Dennett’s book.

***

Dennett instead heads in the opposite direction.

“If you make yourself (your “Self”, that is) really small,” Dennett is fond of saying, “you can externalise virtually everything”. To grasp consciousness, set your microscope to maximum, blow through the Cartesian canopy (where all this recursivity happens), and inspect the electrical activity aboard your personal Turing machine. What you can’t see in the physical configuration of brain states, ignore: the algorithm must explain everything. Your own feelings about it don’t count.

This is, par excellence, the machine’s eye view. It doesn’t beg the question so much as rule it out of bounds altogether: the very thing we want to know is how the infinitesimal “self” of the human Turing machine (so small that everything is external, and the “self” has vanished completely) transcended its mitochondrial circuit-board to become the slippery, shape-shifting “self” of our own acquaintance, the one whose “sum” is a necessary precondition of our “cogito, ergo”.

There’s a reason Descartes still captures the imagination: I cannot introspect, unless I am. I can’t write myself off as a bad case of the intentional fallacy. Dennett disagrees. He insists, repeatedly, that all this “selfness” — the whole “Cartesian theatre”; the notion of a homunculus steering the great organic contraption — is an illusion.

Surely, this is to surrender before kick-off. It leaves a big question hovering an inch above the ground: An illusion on whom? Whatever magic is working to convert a Turing machine into me deserves a better explanation than “don’t ask silly questions”.

***

“I” am my own personal narrative device. I build an apparently continuous, permanent, enduring, four-dimensional universe out of my sensory data, in which I can arrange “things” — including me — in a workable order. This universe, along with everything in it — and me — is a metaphor: no better a representation of my brain state than a visual display is of a central processing unit.

Per Richard Rorty: The world is out there, but descriptions of the world are not.

The means by which we articulate these embodied fictions is language: not the binary code of a CPU, but natural, human language. We each speak our own unique, gerrymandered, error-prone, slang-inflected dialect. From this imperfect hodgepodge come all our metaphors, which assign a relatable framework to novel experiences, and from metaphor come context, meaning, evaluation, truth and falsehood.

Per Richard Rorty: Truth is a property of sentences, not things.

So when you toss out introspection, qualia and all the other clunky metaphors by which philosophers of yore thought about consciousness, you toss out natural language, metaphor and meaning itself with them. That might be quite tempting, because these things are hard to jam into an eliminative materialist view, but it is still an expensive trade.

***

This abstraction between data and language puts back on the table a distinction between the “self” and “the brain that generates it” which Dennett has been very keen to banish. Isn’t this dualism? It might seem like it but, to co-opt another of Dennett’s coinages, it’s not greedy dualism. It doesn’t impose a supernatural creator or any other kind of skyhook. It just observes that something special is going on: if you want to go from binary code to rice pudding and income tax, you’ve got a bit more explaining to do.

Dennett barely mentions natural language or metaphor as such, though he spends a great deal of time talking about words and memes (in their technical sense: gene-like replicating units of cultural transmission, not cat videos on YouTube).

It is easy to overlook metaphor when reducing sentences to words and cultural experiences to abstract data replicators. A meme is a metaphor, viewed from a mechanic’s perspective, with the casing removed: no user-serviceable parts on display.

This is, par excellence, the machine’s eye view.

Like consciousness — like all layers of golden eggs — a metaphor’s magic evaporates when you examine it on a chopping board. A meme may be a free-floating integral unit, with fixed qualities, detached from any “carrier” and able to hop from mind to mind, but that’s not what’s interesting about it. Its continued existence depends on its fitness, and that depends on what its “carrier” does with it.

The thing that carriers do to recycle memes is to find meaning in them. But “find” is the wrong word, for it implies a meme is a kind of objet trouvé — pre-coded with meaning which the carrier ingests whole and which replicates itself.

But that is neither how memes work nor how they mutate. They are dynamic collaborations between an external “text” and the home-made, four-dimensional universe the user brings to the conversation. The meaning of a meme subsists entirely in the user’s mental world, the very one Daniel Dennett says is an illusion, and it is bounded by the user’s imperfect private language.

***

So, how do we recognise a metaphor? Common sense tells us that “Love is a rose” is one (because its literal meaning is incoherent), but nothing in the text itself does. Someone lacking our “common sense”, or having a different vocabulary, might take it literally. A botanist, for example.

And how do we understand it? We can’t, without at least a conception of love, roses, and the features they have in common: (say) beauty, fragrance, ephemerality, freshness, delicacy, vulnerability and the pain they inflict if handled carelessly. Our own personal experience of the metaphor will be coloured by our personal experience of love and roses.

Now, note two things: firstly, there is no canonical meaning of a metaphor. How could there be? It is a linguistic device whose whole point is to depart from any “canonical meaning”.

Nor can the “coiner” be sure her intended meaning will be the one ultimately received. Steven Pinker claims in The Language Instinct that, with language, we can “shape events in each other’s brains with exquisite precision”. But all we can do is hope we get close, by coining metaphors that will appeal to an experience we believe we share.

Secondly — and this might be another way of saying the same thing — any language in which metaphor is a legitimate device is, by nature, ambiguous. It is impossible to communicate unequivocally. To derive any meaning from such a language — even the literal one — you have to interpret: create a metaphor, and if you can think of more than one, select the one that fits best (does this seem like a familiar idea, by the way?). That is a creative act. The listener creates meaning in a communication. What the listener takes away might be subtly or starkly different to what the speaker intended.

In the vernacular, the ambiguity of human language is not a bug but a feature: it constitutes the human condition. To understand a metaphor is to bring to it one’s entire worldview.

***

This stands in complete contrast to code. Code allows for the unequivocal execution of instructions and complete synchrony between transmitter and receiver. The text is all there is. There is no ambiguity, no scope for misunderstanding, no interpretation. There is no disembodied dualist realm where “meaning” floats free of the code itself. Code has no use for metaphor: there is no tense, no future, no past and no need to infer continuity or contiguity. There is no recursivity. Code does not require its user to create an imaginary world. This robs code of the infinite figurative richness of natural language, but that is the price of total certainty.

This is not to say that machines could never acquire a richly figurative natural language, only that to date none has. How would a machine, whose code has no room for self, no tense, no past, no future, no ambiguity and no room for doubt, jump up to a language in which all of these things are not just possible but essential? Acquiring a natural language would offer a “flawless rule-follower” few adaptive benefits in the current environment. That environment — increasingly populated by flawless instruction followers, don’t forget — favours reliable, flawless, rapid execution over poetic indolence: that is why we are even having this debate. When it comes to slow, indecisive, unpredictable but imaginative workers, it is a buyers’ market. Machines are displacing humans precisely because machines don’t do metaphor.

So, who knows? Maybe consciousness will turn out to be a blind evolutionary alley. Perhaps we will evolve into Turing machines, and not vice versa?

***

This has been a long review. Let me finish it with a metaphor which I am creatively co-opting to mean something quite different to what its author intended:

There are so many different worlds
So many different suns
We have just one world
But we live in different ones

— Dire Straits, Brothers in Arms.

I doubt Mark Knopfler was talking about literary theory. The modern world seems to be polarising. Daniel Dennett’s reductionist disposition is of a piece with that. But the conscious world each of us inhabits is an ambiguous, ambivalent imaginarium of a place. Alternative accounts of it and the things in it are just additional tools in the box: we are free to use them, or not, as we wish. If we keep them as alternatives we will not have to philosophise with a hammer when the occasion calls for a soft cotton cloth.

After all, who knows which tool will be fittest for tomorrow’s environment?
