Template:M intro design time

We suppose that a sense of “I” is not important to plants, for example.<ref>We may be wrong. Who knows?</ref> They can get by automatically, by algorithmic operation of evolved functions in their cells. Their cells operate rather like miniature Turing machines. If this, then that. What great advantage would consciousness yield? It does not matter what happened yesterday, or what will happen tomorrow, except as far as previous events influenced how things are now, and even then everything a cell needs to know about what happened yesterday is encoded in the state of its environment now. The relevant past is encoded, laterally, on the present. Evolving a consciousness, a sense of “self”, would require great effort, consume huge resources, and confer little benefit. It might actually ''decrease'' adaptive fitness: a plant which evolved over millennia to automatically optimise its immediately available resources, but which can now think about whether to do so, is unlikely to make better decisions than [[Darwin’s Dangerous Idea]]. For plants, consciousness does not pass the evolutionary test: the [[business case]] fails.
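The “if this, then that” idea above can be caricatured in a few lines of code. This is a toy sketch, not biology: the function name, inputs and thresholds are all invented for illustration. The point is only that a memoryless rule machine needs no history and no “I” — everything it needs to know about the past is already encoded in its present inputs.

```python
# A toy "plant cell" as a memoryless rule machine (illustrative only;
# the names and thresholds here are made up, not a model of real cells).

def cell_step(light: float, water: float) -> str:
    """If this, then that: a purely algorithmic response to the present."""
    if light > 0.5 and water > 0.3:
        return "photosynthesise"
    if water <= 0.3:
        return "close stomata"
    return "idle"

# The same present state always yields the same behaviour; yesterday
# matters only insofar as it shaped today's inputs.
print(cell_step(0.8, 0.6))  # photosynthesise
print(cell_step(0.8, 0.1))  # close stomata
```

Note there is no stored state anywhere: the function is pure, so “what happened yesterday” can only reach it through the arguments it is handed now.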


Modern materialists will raise a finger at the dreaded [[dualism]]: this is a [[Cartesian]] view, mystical, godly. “''Je pense, donc je suis''” has been out of fashion for a century, but didn’t {{author|Daniel Dennett}} hammer in the last nail with {{br|Consciousness Explained}}? Well, no — for consciousness in that narrow sense of self-awareness — a comprehension of one’s person as a unitary object in time and space, and therefore, per Descartes, a profoundly different thing from ''every other thing in the universe'' — is precisely the thing to be explained. It is a clever sleight of hand to define it away before kick-off — full marks for the brio, Professor Dennett — but it won’t really do. We are left with the same conundrum we started with.
 
We have evolved a means of comporting just that kind of continuity, and our language has developed some neat tricks for managing that dimension. [[Tense]]. [[Metaphor]].
 
Traditional computer code is not like that. It has no tense. It does not understand past, present or future: for the applications to which it is put it does not ''need'' to. It performs millions of discrete “operations” — arithmetic calculations — that come at it as if from a firehose. It has no need to ask ''why'' it performs these operations, nor how they relate to each other. Hence, GIGO: if its instructions are flawed, a Turing machine cannot diagnose the problem, much less speculate as to the programmer’s likely intent and take that course of action. It just crashes.<ref>Better programs can query ostensible input errors, but only if they have been ''programmed'' to. [[Large learning model]]s can guess, but only by association with familiar patterns: they do not “form an independent theory of the world”, much less develop theories about the intentional states of their creators.</ref>
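GIGO can be shown in miniature. The example below is an invented illustration (the `average` function and its inputs are not from any particular program): handed a flawed instruction, the machine does not ask what the programmer probably meant — it executes literally, and fails literally.

```python
# A minimal sketch of GIGO: the machine executes instructions literally.
# Given flawed input it cannot diagnose the problem, much less guess the
# programmer's intent. It just raises an error.

def average(values):
    return sum(values) / len(values)

print(average([3, 4, 5]))  # 4.0

try:
    average([])  # flawed garbage in: dividing by a count of zero
except ZeroDivisionError as exc:
    print(f"crash: {exc}")  # no speculation about intent, just a crash
```

A human reader would guess that the average of nothing should perhaps be reported as “undefined”; the code has no such theory of anyone’s intent unless someone ''programs'' one in.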


Yet computers can ''seem'' magically conscious — but, as we did with [[E.L.I.Z.A.]], we are projecting ''our'' consciousness onto them. ''We'' are doing the construction, not the computer.


(The seventy-year-old man who cleaves to the same politics he did when he was fifteen ''hasn’t learned anything''.)
===The past is not like the future===
Continuity is important in another way, too. Unless you accept some kind of hard deterministic model of utter predestination — in which case, I commiserate with you for having to read this foppish screed, but you have no choice — then the past and the future are entirely unalike in an important way. This is obvious, but bear with me.
There is only one past, and infinite futures.
It follows that things in the past are fixed. They cannot change. We may remember them, learn from them, and in the future compensate for them, but we cannot change them. We can only change the stories we tell about them. We do this a lot. This is the function of history. But what is done is done.
All risk and all opportunity therefore lie in the future. The past is fully priced. That is why you can’t place a bet on yesterday’s Grand National. (Well, you could, but after the bookies’ spread, the odds would be poor.)
This is a profound, obvious asymmetry.
Similarly, all grievance is in the past. We can tell ourselves different stories about it, we can change our minds about how grievous it is, but it is done.