
Temporality
/ˌtɛmpəˈralɪti/ (n.)
The unavoidable, but easy to forget, nature of existence through time. Continuity.

The importance of continuing existence through time is much overlooked in our tenseless age of data, symbol processing and difference engines. It is an underestimated fourth dimension along which our wicked game of life plays out.

Seen this way, computer intelligence is an ingenious card trick: a clever simulacrum of consciousness conjured from still frames, even though consciousness — a continuing “self” that extends backwards and forwards in time — evolved to solve the single problem of continuity. The computer is like a cine-film, summoning vibrant, ostensible motion from countless stills.

What sets the conscious apart is “I”: myself as a unitary, bounded entity — even if those boundaries are a bit fuzzy[1] — that existed yesterday, exists now and will, the Gods willing, still exist tomorrow. The same is broadly true of the “things” in the universe beyond “my” boundaries with which “I” interact, so the problems and opportunities “I” face in that universe have the same causal continuity, and “I” have a discretionary power — exercised in “my” best interests as “I” perceive them — to do something about them. Only if all these continuities hold is there any need to hypothesise a conscious “I”.

We suppose that plants, for example, do not need consciousness. They get by automatically, by the algorithmic operation of evolved functions in their cells. Their relationship with their environment is broadly linear; their cells operate rather like miniature computers. If this, then that. Consciousness would yield no great natural advantage. It does not matter what happened yesterday, or what will happen tomorrow, except in so far as those events influenced how things are now, and even then everything a cell needs to know about yesterday is encoded in the state of its environment now. Evolving a consciousness, a sense of “self”, would require great effort, consume huge resources and confer little benefit. It might actually decrease adaptive fitness: a plant that has evolved over millennia automatically to take maximum advantage of immediately available resources, and which can now think about whether to do so, is unlikely to make better decisions. It does not pass the evolutionary test: the business case fails.
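
By way of a sketch (only a sketch: the rule and its thresholds are invented for illustration, not botany), the cell’s logic is a pure function of the present state of its environment. Nothing in it remembers yesterday:

    # A pure function of the environment's present state: whatever happened
    # yesterday is already baked into today's readings of light and water.
    def stomata_open(light: float, water: float) -> bool:
        """Hypothetical rule: open the stomata when light and water suffice."""
        return light > 0.5 and water > 0.3

    # Each moment is independent of the last: if this, then that.
    for light, water in [(0.9, 0.8), (0.2, 0.8), (0.9, 0.1)]:
        print(stomata_open(light, water))  # True, False, False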

Traditional computer code is like that. It has no tense. It does not understand past, present or future: for the applications to which it is put, it does not need to. It performs millions of discrete “operations” — arithmetic calculations — that come at it in a firehose stream. It has no need to ask why it performs these operations, nor how they relate to each other. Hence, if the instructions are flawed, a Turing machine cannot diagnose the problem, divine the programmer’s intent and correct the syntax error: it just crashes.[2]
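
A toy illustration of that tenselessness (the instruction set is invented): the machine below consumes a stream of discrete operations, neither knowing nor caring what they are for, and when it meets an instruction it does not recognise, it cannot divine what was meant. It just stops:

    # A toy machine consuming a firehose of discrete operations. It has no
    # view of why it performs them, or of how they relate to one another.
    def run(program: list) -> None:
        for op, a, b in program:
            if op == "add":
                print(a + b)
            elif op == "mul":
                print(a * b)
            else:
                # No diagnosis, no divining of intent: just a crash.
                raise ValueError(f"unknown operation: {op!r}")

    # "sum" was surely meant to be "add", but the machine cannot know that.
    run([("add", 2, 3), ("mul", 4, 5), ("sum", 1, 1)])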

Yet computers can seem magically conscious — but, as with ELIZA, we are projecting our consciousness onto them. We are doing the construction, not the computer.

For the temporality of things is not apparent in the infinitesimal snapshot represented by any presentation of data. Things which are immutable and things which are liable to change look the same. We assign rights based upon the nature of things, and it matters how permanent those things happen to be. A human’s count of arms and legs is fixed — barring catastrophe, and it can hardly grow — but one’s tastes and opinions, however devoutly held, can change. In a fully functioning adult, indeed, they should.

(The seventy year old man who cleaves to the same politics he did when he was fifteen hasn’t learned anything.)
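
A minimal sketch of the snapshot problem (the classes are invented for illustration): mutability is a property of a thing’s type, carried through time, and no instant’s presentation of the data preserves it. The two records below are indistinguishable in any snapshot:

    import json
    from dataclasses import dataclass, asdict

    @dataclass(frozen=True)
    class LimbCount:       # immutable: barring catastrophe, it cannot change
        value: int

    @dataclass
    class Opinion:         # mutable: liable, indeed obliged, to change
        value: int

    # An instant's presentation of each is identical: {"value": 2}.
    print(json.dumps(asdict(LimbCount(2))) == json.dumps(asdict(Opinion(2))))  # True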

  1. Per Daniel Dennett: “If you make yourself really small, you can externalise virtually everything”, and vice versa.
  2. Better programs can query ostensible input errors, but only if they have been programmed to. Large language models can guess, but only by association with familiar patterns: they do not “form an independent theory of the world”.