Template:M intro design System redundancy

None of this is new: just our enthusiasm for it. The prophet of [[data modernism]] was [[Frederick Winslow Taylor]], progenitor of the maximally efficient production line. His inheritors say things like, “[[The Singularity is Near|the singularity is near]]” and “[[Software is eating the world|software will eat the world]]”, but for all their millenarianism the on-the-ground experience at the business end of all this world-eating software is as grim as it ever was.
====Time====
We have a theory that in reducing everything to measured inputs and outputs, [[data modernism]] collapses into a kind of ''[[reductionism]]'', only about ''time'': just as reductionists see our knowledge of the universe as being reducible to infinitesimally small sub-atomic essences — so a function of theoretical physics — so do data modernists see all of commerce as explicable in terms of infinitesimally small windows of ''time'' so thin that they are static. Let’s call these windows “frames”, resembling as they do individual frames in a movie reel. The beauty of static frames is that, not being in motion, they can’t do anything unexpected. Yet if you run a sequence of consecutive frames close to one another they ''appear'' to move, in the same way that still movie frames do. In this way does data modernism replace the ''actual'' passage of time with the appearance of passing time.


Data modernism has no concept of time at all: the computer languages in which it is written don’t do [[tense|''tense'']]; they are coded in the present, and have no frame of reference for continuity.


But existential continuity backwards and forwards in “time” is precisely the problem that the human brain solves: this is the thing that demands continuously existing “things”, just one of which is “me”, moving through a spatio-temporal universe, interacting with each other and hence requiring definitive boundaries.<ref>{{author|David Hume}} wrestled with this idea of continuity: if I see you, then look away, then look back at you, what ''grounds'' do I have for believing it is still “you”? Computer code makes no such assumption. It is the human genius to make that logical leap. How we do it, and how consciousness works, defies explanation. {{author|Daniel Dennett}} made a virtuoso attempt to apply this algorithmic [[reductionist]] approach to the problem of mind in {{br|Consciousness Explained}}, but ended up defining away the very thing he claimed to explain, effectively concluding “consciousness is an illusion”. But on whom?</ref>


[[Data modernism]] thereby does away with the need for time and continuity altogether: it ''simulates'' them through a succession of static slices, but the continuity vanishes when one regards the picture show as a sequence of frames.<ref>Pace Dennett, any [[difference engine]] that can operate wholly without needing a concept of continuity won’t evolve “consciousness”. Why would it? I have put some thoughts together on that [[Code and language|here]]. I should say this is all my own work and is likely therefore to be nonsense.</ref>


That it does not have a hope of working seems beside the point.
But dealing with history is exactly the challenge.
 
Gerd Gigerenzer has a nice example that illustrates the importance of continuity.
 
Imagine a still frame of two pint glasses, A and B, each containing half a pint of beer.<ref>One that costs more than a fortnight’s subscription to the JC, by the way.</ref> Which is half-full and which is half-empty?
 
Now, imagine a short film in which glass A is full and glass B empty, then a little Cartesian homunculus tips half of the contents of glass A into glass B. ''Now'' which is half-full and which is half-empty?
 
The first scenario seems to pose a stupid question. The second scenario tells us something small about the history of the world. To capture that information using code is possible, sure, but it is extremely complicated.
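The point can be sketched in a few lines of code. This is a minimal, illustrative sketch only — the event structure and names are our own invention, not anything from Gigerenzer — showing that two different histories collapse into one identical frame, so the frame alone cannot say which glass was poured:

```python
# A "frame" is a static snapshot: glass -> pints currently in it.
frame = {"A": 0.5, "B": 0.5}

# Two different histories that both end in that identical frame.
history_1 = [("start", {"A": 1.0, "B": 0.0}), ("pour", "A", "B", 0.5)]
history_2 = [("start", {"A": 0.0, "B": 1.0}), ("pour", "B", "A", 0.5)]

def replay(history):
    """Run the events forward and return the final frame."""
    state = dict(history[0][1])
    for _, src, dst, amount in history[1:]:
        state[src] -= amount
        state[dst] += amount
    return state

# Both histories collapse to the same static frame...
assert replay(history_1) == frame
assert replay(history_2) == frame

# ...but only the history can say which glass is "half-empty" — the one
# that was poured *from*.
half_empty_1 = history_1[1][1]  # "A"
half_empty_2 = history_2[1][1]  # "B"
```

The frame is a pure function of the history, but the history cannot be recovered from the frame: that asymmetry is what the still photograph of the two glasses throws away.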
 
And it is partly because having to cope with history, the passage of time, and the continued existence of objects makes things exponentially more complex than they already are. An atomically thin snapshot of the world as data is enough of a beast to be still well beyond the operating parameters of even the most powerful quantum machines: that level of detail extending into the future and back from the past is, literally, infinitely more complicated. The modernist programme is to suppose that “time” really just comprises billions of infinitesimally thin, static slices, each functionally identical to any other, so that by measuring the [[delta]] between them we have a means of handling that complexity.
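The slices-and-deltas programme can itself be sketched. In this hedged toy (all names illustrative), rather than storing every frame of the “movie”, the system keeps the first frame and the per-key differences between neighbouring frames — and the whole sequence is recoverable from those alone:

```python
def delta(prev, curr):
    """The keys whose values changed between two adjacent slices."""
    return {k: curr[k] - prev[k] for k in curr if curr[k] != prev.get(k)}

def apply_delta(frame, d):
    """Advance one slice by applying a delta to a frame."""
    new = dict(frame)
    for k, v in d.items():
        new[k] = new.get(k, 0) + v
    return new

frames = [
    {"A": 1.0, "B": 0.0},
    {"A": 0.5, "B": 0.5},
    {"A": 0.5, "B": 0.5},   # nothing happens: the delta here is empty
]
deltas = [delta(a, b) for a, b in zip(frames, frames[1:])]

# The full "movie" is rebuilt from the first frame plus the deltas alone:
rebuilt = [frames[0]]
for d in deltas:
    rebuilt.append(apply_delta(rebuilt[-1], d))
assert rebuilt == frames
```

Note what the deltas measure: only ''that'' values changed, never ''why'' — there is no pouring homunculus anywhere in the data structure, which is the modernist bargain in miniature.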


In any case, just-in-time rationalisers take a cycle and code for that. What is the process, start to finish, what are the dependencies, what are the plausible unknowns, and how do we optimise for efficiency of movement, components and materials, to manage