Temporality
/ˌtɛmpəˈralɪti/ (n.)
The unavoidable, but easy to forget, nature of existence through time. Continuity.
The importance of our continuing existence through the entropising sludge of time is much overlooked in our tenseless, stupid age: we surrender ourselves to data, to the dictates of symbol processing; to the unthinking tyranny of the difference engine. But we discard the fourth dimension to our own detriment and the machines’ great advantage: it is along that underestimated axis that our wicked game of life plays out. That is where the machines cannot fathom us. Humans are good at handling the vicissitudes served up by passing time. Machines are bad.
Static
He conceived of good planning as a series of static acts. In each case, the plan must anticipate all that is needed and be protected, after it is built, against any but the most minor subsequent changes. He conceived of planning also as essentially paternalistic if not altogether authoritarian.
—Jane Jacobs on Ebenezer Howard, in The Death and Life of Great American Cities
Modernist schemes are fragile in the face of the unanticipatable contingencies in a complex system. Predetermined, static rules can provide for prior surprises — expired risks — but these are historical. Their probability is now 1.0. This is why you can’t bet on yesterday’s Grand National. The probability attaching to a future event is not just less than 1. It is unknown.
Computer systems that are designed to respond to future contingencies are similarly static. Traditional computing lacks the capacity to autonomously reprogram itself to learn. So do LLMs. A pattern-matching machine that reacts, by itself, to an unpredictably unfolding situation will progressively, systematically, lose touch with “reality”. It can only iterate on its own hallucination. Its hallucinatory world will progressively deepen. Unlike a human, it does not see that it is hallucinating. It projects its hallucination onto us.
Imagine playing theatre sports with an AI. There is probably a mathematical equation that will predict how much less based an LLM would become while trying, in real time and unaided, to deal with an unfolding complex event.
Either we steward the machine, and the meatware keeps it based, or we stick with static algorithms. We can try to anticipate every contingency — in which case our model becomes progressively like an over-lawyered trust deed, tediously cataloguing fallbacks for vanishingly remote contingencies — or we could just rely on humans to do it.
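By way of illustration only, here is a toy sketch in Python of the two options: a static rule table that knows only the contingencies its authors thought of, with a human standing by for everything else. The event names and the ask_a_human helper are invented for the purpose, not drawn from any real system.

```python
# Toy sketch: static, predetermined rules versus a human steward.
# Event names and the ask_a_human() stand-in are hypothetical.

PREDETERMINED_RULES = {
    "counterparty_default": "enforce close-out netting",
    "ratings_downgrade": "call for additional margin",
    "failure_to_pay": "serve a default notice",
}

def ask_a_human(event: str) -> str:
    """Stand-in for the meatware: a steward who can improvise."""
    return f"escalated to a human, who improvises a response to {event!r}"

def respond(event: str) -> str:
    # The static algorithm can only replay answers to yesterday's surprises.
    if event in PREDETERMINED_RULES:
        return PREDETERMINED_RULES[event]
    # A genuinely novel contingency falls through the rule book entirely.
    return ask_a_human(event)

print(respond("failure_to_pay"))   # an expired risk: the rules cover it
print(respond("global_pandemic"))  # a new one: only the meatware can cope
```

The point of the sketch is the asymmetry: however long the rule table grows, it only ever encodes risks whose probability is already 1.0.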
Intelligence as a card-trick
Computer “intelligence” is a clever card trick: a cheap simulacrum of consciousness composed of still frames. But consciousness — a continuing “self” extending backwards and forwards in time — evolved to solve the single problem that an algorithm cannot: existential continuity.
The computer is like a cine-film: it conjures vibrant, but only ostensible, motion from countless still frames. To the extent it manages history, it does so laterally, in a kind of present perfect, past states catalogued and presented as archived versions of the now.
What sets the conscious apart is the sense of “I”: “myself”, as a unitary, bounded entity — even if those boundaries are a bit fuzzy[1] — that existed yesterday, exists now and will, the Gods willing, still exist tomorrow. “I” exist among a large set of things that are not I, but that still share that characteristic of “longitudinal continuity”.
There are “things” in the universe beyond “my” boundaries with which “I” must interact; consequently, the problems and opportunities that “I” face in that universe, and with those things, must have the same causal continuity. “I” have discretion, or a sort of agency, to do something about them based upon “my” best interests as “I” perceive them.
It is only if all these continuities are necessary and important to that unitary, continuous self that there is any need to hypothesise a conscious “I” to look out for it.
But there is a chicken-and-egg problem here, a strange loop: important to whom?
Why plants are not conscious
We suppose that a sense of self is not important to plants, for example.[2] They can get by automatically, by algorithmic operation of evolved functions in their cells. Their cells operate rather like miniature Turing machines: if this, then that.
What great advantage would consciousness yield? It does not matter what happened yesterday, or what will happen tomorrow, except as far as previous events influenced how things are now, and even then everything a cell needs to know about what happened yesterday is encoded in the state of its environment now. The relevant past is encoded, laterally, onto the present.
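To see the point, here is a toy sketch in Python of that “if this, then that” picture: a stateless rule that reads only the present state of its environment. The thresholds and variable names are illustrative, not botany.

```python
# Toy sketch: a cell as a memoryless rule applied to the present state.
# Nothing about yesterday is stored; whatever history matters is already
# encoded in today's environment.

def cell_step(moisture: float, light: float) -> str:
    # No memory, no sense of self: just a rule over the current inputs.
    if moisture < 0.2:
        return "close stomata"
    if light > 0.6:
        return "photosynthesise"
    return "idle"

# The same inputs always produce the same output, whatever happened before.
print(cell_step(moisture=0.1, light=0.9))  # close stomata
print(cell_step(moisture=0.5, light=0.9))  # photosynthesise
```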
Evolving a consciousness, a sense of “self”, would require great effort, consume huge resources that might better be used for something else (seeking nutrients in the soil, for example), and confer little benefit. Self-awareness for a plant that cannot move might actually decrease its adaptive fitness: a plant that evolved over millennia to automatically optimise its immediately available resources, but which can now think about whether or not to do so, is unlikely to make better decisions than Darwin’s Dangerous Idea.
For plants, consciousness does not pass the evolutionary test: the business case fails.
Dualism ahoy
Modern materialists will raise a finger at the dreaded dualism: this is a Cartesian view, mystical, godly.
“Je pense, donc je suis” has been out of fashion for a century, but didn’t Daniel Dennett hammer in the last nail with Consciousness Explained?
Well, no — for consciousness in that narrow sense of self-awareness (a comprehension of one’s person as a unitary object in time and space and, therefore, per Descartes, a profoundly different thing from every other thing in the universe) is precisely the thing to be explained.
It is a clever sleight of hand to define it away before kick-off — full marks for the brio, Professor Dennett — but it won’t really do. We are left in the same pickle we started with.
We have evolved a means of comporting just that kind of continuity, and our language has developed some neat tricks for managing that dimension. Tense. Metaphor.
Symbol-processing v reading
Traditional computer code is not like that. It has no tense. It does not understand past, present or future: for the applications to which it is put, it does not need to. It performs millions of discrete “operations” — arithmetic calculations — that come at it as if from a firehose.
It has no need to ask why it performs these operations, what they mean for it, nor how they relate to each other. Hence, garbage in, garbage out: if its instructions are flawed, a Turing machine cannot transcend the text to creatively diagnose the problem, much less speculate as to the programmer’s likely intent and take that course of action. It just crashes.[3]
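A toy sketch in Python, with an invented payment example: a routine that executes its instructions literally and, given flawed input, simply fails rather than speculating about what its author probably meant.

```python
# Toy sketch of "garbage in, garbage out". The payment example is hypothetical.

def average_payment(amounts: list) -> float:
    # The machine does not ask why; it just performs the operations.
    return sum(amounts) / len(amounts)

print(average_payment([100, 250, 400]))    # well-formed input: 250.0
print(average_payment([100, "n/a", 400]))  # flawed input: it just crashes
# TypeError: unsupported operand type(s) for +: 'int' and 'str'
```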
Yet computers can seem magically conscious. But, as we did with ELIZA, we are projecting our consciousness onto them. We are doing the construction, not the computer.
On the dynamic and static
For a thing’s temporality is not apparent in the infinitesimally brief — stationary — snapshot represented by any given data: when they are frozen in time, static things and dynamic things look the same: static. Dynamism, or its absence, is a function of the passage of time.
Yet we assign rights based upon the nature of things, and it matters how permanent they are. A human’s count of arms and legs stays the same — barring catastrophe, it can hardly grow — but one’s tastes and opinions, however devoutly held, can change. In a fully functioning adult, indeed, they should. (The seventy-year-old man who cleaves to the same politics he did when he was fifteen hasn’t learned anything.)
The past is not like the future
In another way is continuity important. Unless you accept some kind of hard deterministic model of utter predestination — in which case, I commiserate with you for having to read this foppish screed, but you have no choice, so deal with it — then the past and the future are entirely unalike in an important way. This is obvious, but bear with me.
There is only one possible past: the actual one. There are infinite possible futures.
Things in the past cannot change. We may remember them, learn from them and, in the future, compensate for them, but we cannot change them. We can only change the stories we tell about them. We do this a lot: this is called writing history. But what is done is done.
The past is fully priced. That is why you can’t place a bet on yesterday’s Grand National. (Well, you could, but after the bookies’ spread, the odds would be poor.)
All risk and all opportunity lie in the future. The only history in the future is the one we choose to take with us, to make sense of it. It is the lens and the filter we choose to apply, and it is entirely within our gift.
This is a profound, obvious, asymmetry.
Similarly, all victories, all defeats, all vindications and all grievances are in the past. We can tell ourselves different stories about them, we can change our minds about how good or bad we think they were, but they are done. The profit and loss is known and booked.
The potential profit and loss for the future is not. To some extent it is — unless you are a determinist, in which case, close your eyes and think of England — within our gift.
Among the infinite branches of our possible futures, there are infinite losses and infinite gains. Which ones we capture is, partly, up to us. The choices we make depend on the history we choose to take with us. The morality tales we apply. The lessons we have learned.
Focusing on remediating the past at the future’s expense is a rum business. Hence so many legal rights go unexercised: behold, the commercial imperative. The potential benefits to be reaped in the unknowable future colossally outweigh the realised loss on this trade. Learn your lesson, take the loss, show yourself to be a good sport, and concentrate on the next win. Trade your magnanimity for extra business. Or sue, and end the revenue for ever.
- ↑ per Daniel Dennett, “If you make yourself really small, you can externalise virtually everything,” and vice versa.
- ↑ We may be wrong. Who knows?
- ↑ Better programs can query ostensible input errors, but only if they have been pre-programmed to. Large language models can guess, but only by association with familiar patterns: they do not “form an independent theory of the world”, much less develop theories about the intentional states of their creators.