Temporality
/ˌtɛmpəˈralɪti/ (n.)
The unavoidable, but easily forgotten, nature of existence through time. Continuity.

The importance of our continuing existence through the entropising sludge of time is much overlooked in our tenseless, stupid age: we surrender ourselves to data, to the dictates of symbol processing; to the unthinking tyranny of the difference engine. But we discard the fourth dimension to our own detriment and the machines’ great advantage: it is along that underestimated axis that our wicked game of life plays out. That is where the machines cannot fathom us.

Computer “intelligence” is a clever card trick: a cheap simulacrum of consciousness composed of still frames. But consciousness — a continuing “self” extending backwards and forwards in time — evolved to solve the single problem that an algorithm cannot: existential continuity.

The computer is like a cine-film: it conjures vibrant, but only ostensible, motion from countless still frames. To the extent it manages history, it does so laterally, in a kind of present perfect, past states catalogued and presented as archived versions of the now.

What sets the conscious apart is the sense of “I”: “myself”, as a unitary, bounded entity — even if those boundaries are a bit fuzzy[1] — that existed yesterday, exists now and will, the Gods willing, still exist tomorrow. “I” exist among a large set of things that are not I, but that still share that characteristic of “longitudinal continuity”.

There are “things” in the universe beyond “my” boundaries with which “I” must interact, and consequently the problems and opportunities that “I” face in that universe, and with those things, must have the same causal continuity. “I” have discretion, or a sort of agency, to do something about them based upon “my” best interests as “I” perceive them.

It is only if all these continuities are necessary and important to that unitary, continuous self that there is any need to hypothesise a conscious “I” to look out for it.

But there is a chicken and egg problem here: a strange loop: important to whom?

Why plants are not conscious

We suppose that a sense of self is not important to plants, for example.[2] They can get by automatically, by algorithmic operation of evolved functions in their cells. Their cells operate rather like miniature Turing machines: If this, then that.

What great advantage would consciousness yield? It does not matter what happened yesterday, or what will happen tomorrow, except insofar as previous events influenced how things are now, and even then everything a cell needs to know about what happened yesterday is encoded in the state of its environment now. The relevant past is encoded, laterally, onto the present.
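To labour the point in code (a purely illustrative Python sketch of our own: the function, readings and thresholds are all invented, and no claim about botany is intended), a cell is just a memoryless rule, a pure function of the here and now:

    # Illustrative only: a "cell" as a memoryless, if-this-then-that rule.
    # It consults the present state of its environment and nothing else:
    # no diary of yesterday, no forecast for tomorrow.
    def cell_response(environment: dict) -> str:
        if environment.get("light", 0.0) > 0.5:
            return "open stomata"
        if environment.get("water", 0.0) < 0.2:
            return "close stomata"
        return "carry on"

    # Yesterday matters only insofar as it shows up in today's readings.
    print(cell_response({"light": 0.8, "water": 0.6}))  # open stomata
    print(cell_response({"light": 0.1, "water": 0.1}))  # close stomata

No “I” appears anywhere in it, and none is needed.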

Evolving a consciousness, a sense of “self”, would require great effort, consume huge resources that might better be used for something else (seeking nutrients in the soil, for example), and confer little benefit. Self-awareness for a plant that cannot move might actually decrease its adaptive fitness: a plant that evolved over millennia to automatically optimise its immediately available resources, but which can now think about whether or not to do so, is unlikely to make better decisions than Darwin’s Dangerous Idea.

For plants, consciousness does not pass the evolutionary test: the business case fails.

Dualism ahoy

Modern materialists will raise a finger at the dreaded dualism: this is a Cartesian view, mystical, godly.

“Je pense, donc je suis” has been out of fashion for a century, but didn’t Daniel Dennett hammer in the last nail with Consciousness Explained?

Well, no — for consciousness in that narrow sense of self-awareness; a comprehension of one’s person as a unitary object in time and space and, therefore, per Descartes, a profoundly different thing than every other thing in the universe — is precisely the thing to be explained.

It is a clever sleight of hand to define it away before kick-off — full marks for the brio, Professor Dennett — but it won’t really do. We are left in the same pickle we started with.

We have evolved a means of comporting just that kind of continuity, and our language has developed some neat tricks for managing that dimension. Tense. Metaphor.

Symbol-processing v reading

Traditional computer code is not like that. It has no tense. It does not understand past, present or future: for the applications to which it is put, it does not need to. It performs millions of discrete “operations” — arithmetic calculations — that come at it as if from a firehose.

It has no need to ask why it performs these operations, what they mean for it, nor how they relate to each other. Hence, garbage in, garbage out: if its instructions are flawed, a Turing machine cannot transcend the text to creatively diagnose the problem, much less speculate as to the programmer’s likely intent and take that course of action. It just crashes.[3]
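To illustrate (again, a Python sketch of our own devising; the function and its inputs are made up), this is roughly what garbage in, garbage out looks like at ground level: the program does exactly what it is told, and when its input turns out to be garbage it does not stop to wonder what we meant.

    # Illustrative only: a program that averages what it is given, no more.
    def average(values):
        numbers = [float(v) for v in values]  # ValueError if any item is garbage
        return sum(numbers) / len(numbers)    # ZeroDivisionError if the list is empty

    print(average(["1", "2", "3"]))      # 2.0
    print(average(["1", "2", "three"]))  # crashes with ValueError; no guess at our intent

It has no “probably meant”. It has only the text.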

Yet computers can seem magically conscious — but, as we did with E.L.I.Z.A., we are projecting our consciousness onto them. We are doing the construction, not the computer.

On the dynamic and static

For a thing’s temporality is not apparent in the infinitesimally brief — stationary — snapshot represented by any given data. When frozen in time, static things and dynamic things look the same: static. Dynamism, or its absence, is a function of the passage of time.

Yet we assign rights based upon the nature of things, and it matters how permanent they are. A human’s count of arms and legs stays the same — barring catastrophe, it can hardly grow — but one’s tastes and opinions, however devoutly held, can change. In a fully functioning adult, indeed, they should. (The seventy-year-old man who cleaves to the same politics he did when he was fifteen hasn’t learned anything.)
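To put the snapshot point in code (one more illustrative Python sketch; the particulars are invented): freeze the two kinds of fact at a single instant and nothing in the data tells you which one is permanent and which one is liable, indeed obliged, to change.

    import json

    # Illustrative only: two "facts" about a person, frozen at one instant.
    limb_count = 4                   # barring catastrophe, this will not change
    politics = "same as at fifteen"  # this can, and arguably should, change

    snapshot = json.dumps({"limbs": limb_count, "politics": politics})
    print(snapshot)  # {"limbs": 4, "politics": "same as at fifteen"}

    # The snapshot is equally "static" for both fields. Dynamism, or its
    # absence, only shows up when you take another snapshot later and compare.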

The past is not like the future

Continuity is important in another way. Unless you accept some kind of hard deterministic model of utter predestination — in which case, I commiserate with you for having to read this foppish screed, but you have no choice, so deal with it — the past and the future are entirely unalike in an important way. This is obvious, but bear with me.

There is only one possible past: the actual one. There are infinite possible futures.

Things in the past cannot change. We may remember them, learn from them and, in the future, compensate for them, but we cannot change them. We can only change the stories we tell about them. We do this a lot: this is called writing history. But what is done is done.

The past is fully priced. That is why you can’t place a bet on yesterday’s Grand National. (Well, you could, but after the bookies’ spread, the odds would be poor.)

All risk and all opportunity lie in the future. The only history in the future is the one we choose to take with us, to make sense of it. It is the lens and filter we choose to apply, and it is entirely within our gift.

This is a profound, obvious asymmetry.

Similarly, all victories, all defeats, all vindications and all grievances are in the past. We can tell ourselves different stories about them, we can change our minds about how good or bad we think they were, but they are done. The profit and loss is known and booked.

The potential profit and loss for the future is not. To some extent it is — unless you are a determinist, in which case, close your eyes and think of England — within our gift.

Among the infinite branches of our possible futures, there are infinite losses and infinite gains. Which ones we capture is, partly, up to us. The choices we make depend on the history we choose to take with us. The morality tales we apply. The lessons we have learned.

Focusing on remediating the past at the future’s expense is a rum business. Hence so many legal rights go unexercised: behold, the commercial imperative. The potential benefits to be reaped in the unknowable future colossally outweigh the realised loss on this trade. Learn your lesson, take the loss, show yourself to be a good sport, and concentrate on the next win. Trade your magnanimity for extra business. Or sue, and end the revenue for ever.

  1. per Daniel Dennett, “If you make yourself really small, you can externalise virtually everything,” and vice versa.
  2. We may be wrong. Who knows?
  3. Better programs can query ostensible input errors, but only if they have been pre-programmed to. Large learning models can guess, but only by association with familiar patterns: they do not “form an independent theory of the world”, much less develop theories about the intentional states of their creators.