Template:M intro design time
{{d|Temporality|/ˌtɛmpəˈralɪti/|n|}}The unavoidable, but easy to forget, nature of existence through [[time]]. Continuity.


The importance of our continuing existence through the entropising sludge of time is much overlooked in our [[tense]]less, ''stupid'' age: we surrender ourselves to [[data]], to the dictates of [[symbol processing]], to the unthinking tyranny of the [[difference engine]]. But we discard the fourth dimension to our own detriment and the machines’ great advantage: it is along that underestimated axis that our [[wicked]] game of life plays out. That is where the machines cannot fathom us.


Computer “intelligence” is a clever card trick: a cheap ''simulacrum'' of consciousness composed of still frames. But consciousness — a continuing “self” extending backwards and forwards in time — evolved to solve the single problem that an algorithm cannot: existential continuity.


The computer is like a cine-film: it conjures vibrant, but only ostensible, motion from countless still frames. To the extent it manages history, it does so laterally, in a kind of present perfect, past states catalogued and presented as archived versions of the now.
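The machine’s “lateral” history can be sketched in a few lines. This is an illustrative toy, not anyone’s actual architecture: the names `history` and `state` are assumptions for the sake of the example. The machine keeps its past not as lived time but as archived copies of the present:

```python
# A sketch of "lateral" history: past states kept as catalogued
# snapshots of "now", each a still frame the machine can consult.

history: list[dict] = []        # archived versions of the present
state = {"counter": 0}

for _ in range(3):
    history.append(dict(state))  # snapshot "now" before it changes
    state["counter"] += 1

# The machine's "past" is just a catalogue of stills.
print(history)  # → [{'counter': 0}, {'counter': 1}, {'counter': 2}]
```

Nothing in `history` is experienced as past; each entry is simply another present, filed.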


What sets the conscious apart is the sense of “I”: myself as a unitary, bounded entity — even if those boundaries are a bit fuzzy<ref>per {{author|Daniel Dennett}}, “If you make yourself really small, you can externalise virtually everything,” and vice versa</ref> — that existed yesterday, exists now and will, the Gods willing, still exist tomorrow, and that exists amongst a set of things that are ''not'' I but otherwise share that same characteristic of longitudinal continuity. There are “things” in the universe beyond “my” boundaries with which “I” must interact; consequently, the problems and opportunities that “I” face in that universe, and with those things, have the same causal continuity. “I” have discretion, or a sort of agency, to do something about them based upon “my” best interests as “I” perceive them.
 
It is only if all these continuities are necessary and important that there is any need to hypothesise a conscious “I”. There is a chicken-and-egg problem here, a [[strange loop]]: ''important to whom?''
 
We suppose that a sense of “I” is not important to plants, for example.<ref>We may be wrong. Who knows?</ref> They can get by automatically, by algorithmic operation of evolved functions in their cells. Their cells operate rather like miniature Turing machines: if this, then that. What great advantage would consciousness yield? It does not matter what happened yesterday, or what will happen tomorrow, except as far as previous events influenced how things are now, and even then everything a cell needs to know about what happened yesterday is encoded in the state of its environment now. The relevant past is encoded, laterally, on the present. Evolving a consciousness, a sense of “self”, would require great effort, consume huge resources, and confer little benefit. It might actually ''decrease'' adaptive fitness: a plant which evolved over millennia to automatically optimise its immediately available resources, but which can now think about whether to do so, is unlikely to make better decisions than [[Darwin’s Dangerous Idea]]. For plants, consciousness does not pass the evolutionary test: the [[business case]] fails.
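The “if this, then that” cell can be caricatured as a pure, memoryless function. This is a toy, not biology: the function name, inputs and thresholds below are invented for illustration. The point is that the mapping consults only the present state of the environment; it has no past and needs none:

```python
# A toy sketch (not biology): a plant cell as a memoryless rule
# mapping the *present* environment straight to an action.
# Names and thresholds are illustrative assumptions.

def cell_response(light: float, water: float) -> str:
    """If this, then that: no past, no future, just current inputs."""
    if light > 0.5 and water > 0.3:
        return "photosynthesise"
    if water <= 0.3:
        return "close stomata"
    return "idle"

# The same inputs always yield the same output; yesterday matters only
# insofar as it is already encoded in today's readings.
print(cell_response(0.8, 0.6))  # → photosynthesise
```

There is no state between calls: nothing for an “I” to persist in, and so nothing for an “I” to do.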


Traditional computer code is like that. It has no tense. It does not understand past, present or future: for the applications to which it is put it does not ''need'' to. It performs millions of discrete “operations” — arithmetic calculations — that come at it in a firehose. It has no need to ask why it performs these operations, nor how they relate to each other. Hence, if the instructions are flawed a Turing machine cannot diagnose the problem, divine the programmer’s intent and correct the syntax error: it just crashes.<ref>Better programs can query ostensible input errors, but only if they have been ''programmed'' to. Large language models can guess, but only by association with familiar patterns: they do not “form an independent theory of the world”.</ref>
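The “it just crashes” point is easy to demonstrate. In the sketch below (a minimal example, not any particular program), a Python interpreter is handed an instruction with an obvious flaw. Any human reader would divine the intent instantly; the machine can only report the failure, never repair it:

```python
# An illustrative sketch: the interpreter does not guess what we "meant".
# Fed a flawed instruction, it halts with an error; it cannot diagnose
# intent the way a conscious reader would.

flawed = "print('hello'"   # missing closing parenthesis

try:
    compile(flawed, "<string>", "exec")
except SyntaxError as e:
    print(f"SyntaxError: {e.msg}")  # the machine reports, but cannot repair
```

The error message locates the symptom, not the intent: the interpreter has no theory of what the programmer was trying to do.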