A World Without Work



===... but [[chess]]-playing supercomputers... ===
Hand-waving about [[Chess]] and [[Go]]-playing supercomputers — there is a lot of that in {{br|A World Without Work}} — does not advance the argument. Both are hermetically sealed games on small, finite boards with simple sets of unvarying rules between two players sharing a common objective. Outcomes may be [[complicated]], but they are not [[complex]]: they are entirely deterministic, and you can see that, at the limit, the player with the superior number-crunching power ''must'' win. Even here, the natural imagination of human players, otherwise at a ''colossal'' disadvantage from an information processing perspective, made the job of beating them surprisingly hard. This ought to be the lesson: even in thoroughly simplistic binary games, it takes a ton of dumb processing power to beat a puny imagineer. Instead, Susskind reads this as a signpost to the [[Apocalypse]].


But life is not a two-person board-game on a small board with fixed rules and a static, common, zero-sum objective. Analogising from this — ironically, something a computer could not do — is not great police-work. In the world of [[systems analysis]], [[Chess]] and [[Go]] are [[complicated]], not [[complex]], problems. Their risk payoff is normal, not exponential. They can, in theory, be “brute force” managed by skilled operation of an algorithm, and the consequences of failure are predictable and contained — you lose. ''[[Complex]]'' problems — those one finds at the frontier, when one has boldly gone where no-one has gone before, in dynamic systems, where information is not perfect, where risk outcomes are [[convexity|convex]] — so-called “[[wicked environment]]s” — are not like that.<ref>There is more on this topic at [[complex systems]].</ref> Here [[algorithm]]s are no good. One needs experience, wisdom and judgment. ''Algorithms get in the way''.


===Computers can’t solve novel problems===
By design, computers can only follow rules. A machine that could not process instructions with absolute fidelity would be a ''bad'' computer. ''Good'' computers cannot think, they cannot imagine, they cannot handle ambiguity — if they have a “mental life”, it exists in a flat space with no future or past. Computer language, by design, has no ''tense''. It is not a ''symbolic'' structure, in that its vocabulary does not represent anything.<ref>See: [[Code and language - technology article|Code and language]].</ref> Machines are linguistically, structurally ''incapable'' of interpreting, let alone ''coining'', [[metaphor|metaphors]], and they cannot reason by analogy or manage any of the innate ambiguities that characterise human decision-making.


Until they can do these things, they can only aid — in most circumstances, ''complicate'' — the already over-complicated networks we all inhabit.  


But, but, but — how can we explain this relentless encroachment of the dumb algorithm on the inviolable province of consciousness? Well, there’s an alternative explanation, and it’s a bit more prosaic: it is not so much that [[AI]] is breaching the mystical ramparts of consciousness, but that much of what we ''thought'' required ineffable consciousness, doesn’t. This isn’t news: the impish polymath {{author|Julian Jaynes}} laid this all out in some style in 1976. If you haven’t read {{br|The Origin of Consciousness in the Breakdown of the Bicameral Mind}}, do. It’s a fabulous book.  


And even this is before considering the purblind, irrational sociology that propels all organisations, because it propels all ''individuals'' in those organisations. Like the academy in which {{author|Daniel Susskind}}’s millenarianism thrives, computers work best in a theoretical, [[Platonic form|Platonic]] universe governed by unchanging and unambiguous physical rules, and populated by rational agents. In that world, Susskind ''might'' have a point — though I doubt it.
 
But in the conflicted, dirty, unpredictable, [[complex]] universe we find ourselves in, out here in TV land, there will continue to be plenty of work, as there always has been, administrating, governing, auditing, advising, [[rent-seeking]] — not to mention speculating and bullshitting about the former — as long as the computer-enhanced, tight-coupled complexity of our networks doesn’t [[Lentil convexity|wipe us out first]].


===Employment and Taylorism===
Susskind’s conception of “work” as a succession of definable, atomisable and impliedly dull tasks — a framework, of course, which suits it perfectly to adaptation by machine — is a kind of Taylorism. It is common in management layers of the corporate world, of course, but that hardly makes a case for it.  


The better response is to recognise that definable, atomisable and dull tasks do not define what employment ''is'', but its very inverse: what it should ''not'' be. The [[JC]]’s [[third law of worker entropy]] is exactly that: [[tedium]] is as sure a sign of [[waste]] in an organisation as any. If your workers are bored, you have a problem. If they’re boring ''each other'', then it’s an exponential problem.


{{sa}}
