Evolution proves that algorithms can solve any problem

Hofstadter is at his best when he addresses the reflexivity of human consciousness — the magic that emerges courtesy of the strange loop whereby the human perceives ''itself'' inside the universe it constructs, and where that working narrative must allow for, and explain, one's own causal impact on the universe. This sets off an infinite loop which creates magical artifacts of its own.


In Roland Ennos’ recent book {{br|The Wood Age}} he gives a better example:


{{Quote|Early apes, manoeuvring through the treetops, developed a concept of self, because they realised their bodies changed the world around them by bending the branches they stood on.}}
The only way you can explain the movement of those branches is by reference to your own presence. It is hard to see a dematerialised computer, operating in a virtual space, having to solve that same problem, other than at the quantum level (it need hardly be said that quantum elements are not the same as machine consciousness - that would be a reductionism too far).


Machines are more like {{author|Arthur C. Clarke}}’s sentinels, watching dissociatively. They purport to describe the world as it is without affecting it: they monitor, measure, observe, process and give back, but not to themselves, and not for themselves. To do that would be to colour their observations about human interaction, which would be to defeat their commercial purpose.
 
{{author|Kazuo Ishiguro}}’s {{br|Klara and the Sun}}, which I happened to read straight after {{br|The Wood Age}}, makes the same point through the android’s unusual segmented spatial perception. It prompts the question: to what extent is consciousness a function of our own peculiar evolved perceptive apparatus? It seems to me it must be. If we designed artificial intelligence from a blank slate, and especially if we came at it from a Behaviourist [[machine learning]] angle (which, deprecating as it does the notion of an inner consciousness altogether, cannot have been the route human evolution took to consciousness), we would make different design choices and arrive at functional intelligence a different way, and these might lead to profoundly shifted articulations of “consciousness”, if indeed any kind of consciousness emerges at all.
 
 


{{draft}}