Temporality
/ˌtɛmpəˈralɪti/ (n.)
The unavoidable, but easy to forget, nature of existence through time. Continuity.
The importance of our continuing existence through the entropising sludge of time is much overlooked in our tenseless, stupid age: we surrender ourselves to data, to the dictates of symbol processing, to the unthinking tyranny of the difference engine. But we discard the fourth dimension to our own detriment and the machines’ great advantage: it is along that underestimated axis that our wicked game of life plays out. That is where the machines cannot fathom us.
Computer “intelligence” is a clever card trick: a cheap simulacrum of consciousness composed of still frames. But consciousness — a continuing “self” extending backwards and forwards in time — evolved to solve the single problem that an algorithm cannot: existential continuity.
The computer is like a cine-film: it conjures vibrant, but only ostensible, motion from countless still frames. To the extent it manages history, it does so laterally, in a kind of present perfect, past states catalogued and presented as archived versions of the now.
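To make the point concrete, here is a minimal sketch, in Python and with names invented for the purpose, of history handled this way: the past survives only as catalogued copies of earlier “nows”, filed alongside the current one.

```python
# A toy illustration (hypothetical names, not from the source) of history
# handled "laterally": past states are archived versions of the present.

from copy import deepcopy

state = {"balance": 100}
history = []  # archived versions of "the now"

def update(new_balance: int) -> None:
    history.append(deepcopy(state))   # the old present becomes a filed copy
    state["balance"] = new_balance    # there is only ever a current frame

update(80)
update(125)
print(state)    # {'balance': 125} -- the only "now" the program has
print(history)  # [{'balance': 100}, {'balance': 80}] -- still frames, not a remembered life
```

Nothing in that history is “remembered” in any meaningful sense; it is just more data sitting in the present.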
What sets the conscious apart is the sense of “I”: myself as a unitary, bounded entity — even if those boundaries are a bit fuzzy[1] — that existed yesterday, exists now and will, the Gods willing, still exist tomorrow; that exists amongst a set of things that are not I, but that otherwise share that same characteristic of longitudinal continuity. There are “things” in the universe beyond “my” boundaries with which “I” must interact; consequently, the problems and opportunities that “I” face in that universe, and with those things, have the same causal continuity. “I” have discretion, or a sort of agency, to do something about them based upon “my” best interests as “I” perceive them.
It is only if all these continuities are necessary and important to something that there is any need to hypothesise a conscious “I”. There is a chicken-and-egg problem here, a strange loop: important to whom?
We suppose that a sense of “I” is not important to plants, for example.[2] They can get by automatically, by algorithmic operation of evolved functions in their cells. Their cells operate rather like miniature Turing machines. If this, then that. What great advantage would consciousness yield? It does not matter what happened yesterday, or what will happen tomorrow, except in so far as previous events influenced how things are now, and even then everything a cell needs to know about what happened yesterday is encoded in the state of its environment now. The relevant past is encoded, laterally, on the present. Evolving a consciousness, a sense of “self”, would require great effort, consume huge resources, and confer little benefit. It might actually decrease adaptive fitness: a plant which evolved over millennia to automatically optimise its immediately available resources, but which can now think about whether to do so, is unlikely to make better decisions than Darwin’s Dangerous Idea. For plants, consciousness does not pass the evolutionary test: the business case fails.
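A toy illustration of that if-this-then-that operation, with names and thresholds invented for the purpose: a pure function of the present state of the environment, with no memory and no “self” anywhere in it.

```python
# A minimal sketch (hypothetical names and numbers) of a "cell" as a rule:
# everything it needs to know about yesterday is encoded in its inputs now.

def stomata_open(light: float, water: float) -> bool:
    # No history, no anticipation: just the current state of the environment.
    return light > 0.5 and water > 0.2

print(stomata_open(light=0.8, water=0.6))  # True  -- plenty of both: open up
print(stomata_open(light=0.8, water=0.1))  # False -- dry: conserve water
```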
Traditional computer code is like that. It has no tense. It does not understand past, present or future: for the applications to which it is put it does not need to. It performs millions of discrete “operations” — arithmetic calculations — that come at it as if from a firehose. It has no need to ask why it performs these operations, nor how they relate to each other. Hence, GIGO: if its instructions are flawed, a Turing machine cannot diagnose the problem much less speculate as to the programmer’s likely intent and take that course of action. It just crashes.[3]
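A similarly toy sketch, again with invented names, of garbage in, garbage out: the operation is tenseless and self-contained, and when the input is flawed the machine does not wonder what we meant; it simply fails.

```python
# A minimal sketch (hypothetical example) of GIGO: a discrete arithmetic
# operation that crashes on bad input rather than diagnosing the problem.

def average(values: list[float]) -> float:
    return sum(values) / len(values)   # a discrete arithmetic operation

print(average([3.0, 4.0, 5.0]))        # 4.0 -- fine
print(average([]))                     # ZeroDivisionError: it just crashes.
                                       # It cannot guess that we "meant" 0.0
                                       # unless someone programs that guess in.
```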
Yet computers can seem magically conscious — but, as we did with E.L.I.Z.A., we are projecting our consciousness onto them. We are doing the construction, not the computer.
For a thing’s temporality is not apparent in the infinitesimal snapshot represented by any presentation of data. Things which are immutable and things which are liable to change look the same. We assign rights based upon the nature of things, and it matters how permanent those things happen to be. A human’s count of arms and legs stays the same — barring catastrophe, and it can hardly grow — but one’s tastes and opinions, however devoutly held, can change. In a fully functioning adult, indeed, they should.
(The seventy-year-old man who cleaves to the same politics he did when he was fifteen hasn’t learned anything.)
- ↑ per Daniel Dennett, “If you make yourself really small, you can externalise virtually everything,” and vice versa
- ↑ We may be wrong. Who knows?
- ↑ Better programs can query ostensible input errors, but only if they have been programmed to. Large learning models can guess, but only by association with familiar patterns: they do not “form an independent theory of the world”, much less develop theories about the intentional states of their creators.