A World Without Work

The Jolly Contrarian’s book review service™

A World Without Work: Technology, Automation, and How We Should Respond, by Daniel Susskind
First published on Amazon on .

Pastimes of the future, as imagined by Daniel Susskind





Help, help, we’re all going to die

In which Daniel Susskind grasps a flagon of Ray Kurzweil’s home-made Kool-Aid and bets the farm.

Susskind will doubtless find enough gullible general counsel, anxious to seem at the technological vanguard — and interested mugs like me, who are suckers for sci-fi alternative histories — at least to recoup his advance. But, like the consistent output of his father over the last three decades, A World Without Work will not signpost, let alone dent, the immutable trajectory of modern employment, failing as it does to understand how humans, organisations and economies work, while ignoring — nay, contradicting — the whole history of technology, from the plough onward.

Technology has never destroyed overall labour, and Susskind gives no good grounds for believing it will suddenly start now.

No innovation since the wheel has failed to create unexpected diversity, or opportunity — that’s more or less the definition of “innovation” — or more subsidiary complexity and inefficiency as a by-product. Both the opportunities and the inefficiencies need human midwifery: the former to be exploited, the latter to be effectively managed.

Nothing that the information revolution has yet thrown up suggests any of that has changed. The more technology is deployed, the more a fog of confusion and complexity — complexity in the complexity theory sense — engulfs us.

... but chess-playing supercomputers...

Hand-waving about chess- and Go-playing supercomputers — there is a lot of that in A World Without Work — does not advance the argument. Both are hermetically sealed games played on small, finite boards, between two players, under simple, unvarying rules, towards a common, zero-sum objective. Outcomes are entirely deterministic, and you can see that, at the limit, the player with the superior number-crunching power must win. Even here, the natural imagination of human players, otherwise at a colossal disadvantage from an information-processing perspective, made the job of beating them surprisingly hard. This ought to be the lesson: even in thoroughly simplistic binary games, it takes a ton of dumb processing power to beat a puny imagineer. Instead, Susskind reads it as a signpost to the Apocalypse.

But life is not a two-person board game on a small board with fixed rules and a static, common, zero-sum objective. Analogising from one — ironically, something a computer could not do — is not great police-work. In the world of systems analysis, chess and Go are complicated, not complex, problems. The risk payoff is normal, not exponential. They can, in theory, be managed by “brute force” — the skilled operation of an algorithm — and the consequences of failure are predictable and contained: you lose. Complex problems — those one finds at the frontier, when one has boldly gone where no-one has gone before, in dynamic systems where information is not perfect and risk outcomes are convex — so-called “wicked environments” — are not like that.[1] Here algorithms are no good. One needs experience, wisdom and judgment.

Computers can’t solve novel problems

By design, computers can only follow rules. One that could not be relied on to process instructions with absolute fidelity would be a bad computer. Good computers cannot think, they cannot imagine, they cannot handle ambiguity — if they have a “mental life” at all, it exists in a flat space with no future or past. Computer language, by design, has no tense. It is not a symbolic structure, in that its vocabulary does not represent anything.[2] Machines are linguistically, structurally incapable of interpreting, let alone coining, metaphors, and they cannot reason by analogy or manage any of the innate ambiguities that comprise human decision-making.

Until they can do these things, they can only aid — in most circumstances, complicate — the already over-complicated networks we all inhabit.

And even this is before considering the purblind, irrational sociology that propels all organisations, because it propels all individuals in those organisations. Like the academy in which Daniel Susskind’s millenarianism thrives, computers function best in a theoretical, Platonic universe governed by unchanging and unambiguous physical rules, and populated by rational agents. In that world, Susskind might have a point — though I doubt it.

But in the conflicted, dirty, unpredictable universe we find ourselves in out here in TV land, there will continue to be plenty of work, as there always has been, administering, governing, auditing, advising and rent-seeking — not to mention speculating and bullshitting about the foregoing — as long as the computer-enhanced, tightly-coupled complexity of our networks doesn’t wipe us out first.

Employment and Taylorism

Susskind’s conception of “work” as a succession of definable, atomisable and impliedly dull tasks — a framework which, of course, suits it perfectly to adaptation by machine — is as retrograde and out-of-touch as you might expect of an academic son of an academic whose closest encounter with paid employment has been as a special policy adviser to government. Perhaps he once had a paper round. This kind of Taylorism is common in the management layers of the corporate world, of course, but that hardly makes it any less boneheaded.

The better response is to recognise that definable, atomisable and dull tasks do not define what employment is, but its very inverse: what it should not be. The JC’s third law of worker entropy says exactly that: tedium is as sure a sign of waste in an organisation as you will find. If your workers are bored, you have a problem. If they’re boring each other, then it’s an exponential problem.

See also

References

  1. There is more on this topic at complex systems.
  2. See: Code and language.