Template:M intro technology rumours of our demise: Difference between revisions

But technology also creates space and capacity to indulge ourselves. [[Parkinson’s law]] states: work expands to fill the time allotted for its completion. Technology ''also'' frees us up to care about things we never used to care about. The microcomputer made generating, duplicating and distributing documents far, far easier. There’s that boon and bane, again.


So, before concluding that ''this time'' the machines will put us out of work, we must explain ''how''. [[this time is different|Why is this time different]]? What has changed?


===FAANGs ahoy?===
Thirdly — a cool gadget maker that pivots to banking ''and does it well'' has as much chance of maintaining millennial brand loyalty as does a toy factory that moves into dentistry.


That Occupy Wall Street gang? Apple fanboys, the lot of them. ''At the moment''. But it isn’t the ''way'' [[trad fi]] banks go about banking that tarnishes your brand. It’s ''banking''. ''No-one likes money-lenders''. It is a dull, painful, risky business. Part of the game is doing shitty things to customers when they lose your money. Repossessing the Tesla. Foreclosing on the condo. That ''isn’t'' part of the game of selling MP3 players.


''The business of banking will trash the brand.''

But state-of-the-art machines, per Arthur C. Clarke, aren’t magic: it just ''seems'' like it, sometimes. They are a two-dimensional, simplified model of human intelligence. A proxy: a modernist [[simulacrum]]. They are a shorthand way of mimicking a limited sort of sentience, potentially useful in known environments and constrained circumstances.


Yet we have begun to model ourselves upon machines. The most dystopian part of John Cryan’s opening quote was the first part — “''today, we have people doing work like robots''” — because it describes a stupid present reality. We have persuaded ourselves that “being machine-like” should be our loftiest aim. But if we are in a footrace where what matters is simply strength, speed, consistency, modularity, [[fungibility]] and ''mundanity'' — humans will surely ''lose''.


But we ''aren’t'' in that foot race. Strength, speed, consistency, fungibility and patience are the loftiest aims ''only where there is no suitable machine''.


===Body and mind as metaphors===
We are used to the “[[Turing machine]]” as a [[metaphor]] for “mind”, but it is a bad metaphor. It is unambitious. It does not do justice to the human mind.


Perhaps we could invert it. We might instead use “body” — in that dishonourably dualist, [[Descartes|Cartesian]] sense — as a [[metaphor]] for a Turing machine, and “mind” for natural human intelligence. “Mind” and “body” in this sense are a practical guiding principle for the [[division of labour]] between human and machine. Whatever goes to the “body”, give to a machine: motor skills; temperature regulation; the pulmonary system; digestion; aspiration. The conscious mind has no business there. There is little it can add. It only gets in the way. There is compelling evidence that when the conscious mind takes over motor skills, things go to hell.<ref>This is the premise of {{author|Daniel Kahneman}}’s {{br|Thinking, Fast and Slow}}, and for that matter, [[Matthew Syed]]’s {{br|Bounce}}.</ref>


But leave interpersonal relationships, communication, perception, [[construction]], decision-making in times of uncertainty, imagination and creation to the mind. ''Leave the machines out of this''. They will only bugger it up. Let them ''report'', by all means. Let them assist: triage the “conscious act” to hive off the mechanical tasks on which it depends.<ref>{{Author|Julian Jaynes}} has a magnificent passage in his book {{br|The Origins of Consciousness in the Breakdown of the Bicameral Mind}} where he steps through all the aspects of consciousness that we assume are conscious, but which are not.


“Consciousness is a much smaller part of our mental life than we are conscious of, because we cannot be conscious of what we are not conscious of. How simple that is to say; how difficult to appreciate!”</ref> Let the machines loose on those mechanical tasks. Let them provide, on request, the information the conscious mind needs to build its models and make its plans, but ''do not let them intermediate that plan''.

We take one look at the output of an AI art generator and conclude the highest human intellectual achievements are under siege. However good human artists may be, they cannot compete with the massively parallel power of LLMs, which can generate billions of images, some of which, by accident, will be transcendentally great art.


Not only does reducing art to its “[[Bayesian prior]]s” like this stunningly [[Symbol processing|miss the point]] about art, but it suggests those who would deploy artificial intelligence have their priorities dead wrong. There is no shortage of sublime human expression: quite the opposite. The internet is awash with “content”: there is far more than our collected ears and eyes can take in.


And, here: have some more.


''We don’t need more content''. What we need is ''dross management'' and ''needle-from-haystack'' extraction. Machines ought to be really good at this. Why don’t we point the machines at that?

Remember the [[division of labour]]: machines are good at dreary, fiddly, repetitive stuff. There are plenty of easy, dreary, mechanical tasks to which machines might profitably be put, but to which they have not, and with which we are still burdened: folding washing, clearing up the kitchen and changing nappies. For these mundane but potentially life-changing tasks there is, apparently, no technological resolution in sight.
 
Okay, some require motor control and interaction with the irreducibly messy [[off-world|real world]], so there are practical barriers to progression. Still, robot lawnmowers and vacuum cleaners suggest a route ahead. (They are cool!)
 
But other tasks would not: remembering where you put the car keys, weeding out fake news, managing browser cookies, or this: ''curating'' the great corpus of human creation, rather than ''ripping it off''.


===== Digression: Nietzsche, Blake and the Camden Cat =====
{{Quote|''The Birth of Tragedy'' sold 625 copies in six years; the three parts of ''Thus Spoke Zarathustra'' sold fewer than a hundred copies each. Not until it was too late did his works finally reach a few decisive ears, including Edvard Munch, August Strindberg, and the Danish-Jewish critic Georg Brandes, whose lectures at the University of Copenhagen first introduced Nietzsche’s philosophy to a wider audience.
:—“The Sufferings of Nietzsche”, ''Los Angeles Review of Books'', 2018}}

{{Quote|The [[Bayesian priors]] are pretty damning. ...When Shakespeare wrote, almost all of Europeans were busy farming, and very few people attended university; few people were even literate — probably as low as about ten million people. By contrast, there are now upwards of a billion literate people in the Western sphere. What are the odds that the greatest writer would have been born in 1564?
:—[[Sam Bankman-Fried]]’s “sophomore college blog”}}

[[Sam Bankman-Fried]] had a point here, though not the one he thought.


[[Friedrich Nietzsche]] died in obscurity, as did William Blake and Emily Dickinson. They were lucky that the improbability engine worked its magic for them, even if not in their lifetimes.


But how many undiscovered Nietzsches, Blakes and Dickinsons are there, now sedimented into unreachably deep strata of the human canon?


How many ''living'' artists are currently ploughing an under-appreciated furrow, stampeding towards an obscurity a [[large language model]] might save them from, cursing their own immaculate “[[Bayesian priors]]”?


(I know of at least one: the [[Camden Cat]], who for thirty years has plied his trade with a beat-up acoustic guitar on the Northern Line, and once wrote and recorded one of the great rockabilly singles of all time. It remains bafflingly unacknowledged. Here it is, on {{Plainlink|1=https://soundcloud.com/thecamdencats/you-carry-on?si=24ececd75c0540faafd470d822971ab7|2=SoundCloud}}.)
 
=== If AI is a cheapest-to-deliver strategy you’re doing it wrong ===
{{quote|
{{D|Cheapest-to-deliver|/ˈʧiːpɪst tuː dɪˈlɪvə/|adj}}
Of the range of possible ways of discharging your [[contract|contractual obligation]] to the letter, the one that will cost you the least and irritate your customer the most should you choose it.}}
 
Imagine having personal [[large language model]]s at our disposal that could pattern-match against our individual reading and listening histories, our engineered prompts, our instructions and the recommendations of like-minded readers. 
 
What if our LLM searched through the billions of existing books, plays, films, recordings and artworks, known and unknown, that comprise the human ''oeuvre'' and, instead of making its own mashups, retrieved existing works that its patterns said would specifically appeal to us?
 
This is not just the Spotify recommendation algorithm, as occasionally delightful as that is. Any commercial algorithm has its own primary goal: to maximise revenue. A certain amount of “customer delight” might be a by-product, but only as far as it intersects with that primary commercial goal. As long as customers are ''just delighted enough'' to keep listening, the algorithm doesn’t care ''how'' delighted they are.<ref>As with the JC’s school exam grades: anything more than 51% is wasted effort. Try as he might, the JC was never able to persuade his dear old ''Mutti'' about this.</ref>
 
Commercial algorithms need only follow a ''[[cheapest to deliver]]'' strategy: they “[[satisfice]]”. Being targeted at optimising revenue, they converge upon what is likely to be popular, because that is easier to find. Why scan ocean deeps for human content when you can skim the top and keep the punters happy enough?
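The distinction can be put in toy code — a sketch only, with invented track names, “delight” scores and threshold; no real recommender reduces delight to a single number. A satisficing engine returns the first item that clears the bare-minimum threshold; a maximising one keeps searching for the listener’s best match.

```python
# Toy model of a "satisficing" recommender: it stops at the first item
# that clears the bare-minimum delight threshold, rather than searching
# for the item the listener would enjoy most. All names and numbers
# here are invented for illustration.

def satisfice(catalogue, threshold):
    """Return the first item 'good enough' to keep the punter listening."""
    for item, delight in catalogue:
        if delight >= threshold:
            return item
    return None

def maximise(catalogue):
    """Return the item the listener would actually enjoy most."""
    return max(catalogue, key=lambda pair: pair[1])[0]

# Popular, easy-to-find tracks come first; the gem sits down the long tail.
catalogue = [("chart hit", 0.55), ("algo filler", 0.60), ("obscure gem", 0.97)]

print(satisfice(catalogue, threshold=0.51))  # → chart hit ("just delighted enough")
print(maximise(catalogue))                   # → obscure gem
```

The satisficer never even looks past the fat head of the catalogue: that is the cheapest-to-deliver strategy in two functions.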
 
This, by the way, has been the tragic commons of the collaborative internet: despite [[Chris Anderson]]’s forecast in 2006 that universal interconnectedness would change economics for the better<ref>[[The Long Tail: How Endless Choice is Creating Unlimited Demand]] ''(2006)''</ref> — that, suddenly, it would be costless to service the long tail of global demand, prompting some kind of explosion in cultural diversity — the exact opposite has happened.<ref>It is called “{{Plainlink|https://www.theclassroom.com/cultural-convergence-examples-16778.html|cultural convergence}}”.</ref> The overriding imperatives of [[scale]] have obliterated the subtle appeals of diversity, while sudden, unprecedented global interconnectedness has had the [[system effect]] of ''homogenising demand''.
 
This is the counter-intuitive effect of a “[[cheapest-to-deliver]]” strategy: while it has become ever easier to target the “[[fat head]]”, ''the [[long tail]] has grown thinner''.<ref>{{author|Anita Elberse}}’s [[Blockbusters: Why Big Hits and Big Risks are the Future of the Entertainment Business|''Blockbusters'']] is excellent on this point. </ref> As the tail contracts, the [[commercial imperative]] to target the lowest common denominators inflates. ''This is a highly undesirable feedback loop''. It will homogenise ''us''. ''We'' will become less diverse. ''We'' will become more [[Antifragile|fragile]]. We will resemble machines. ''We are not good at being machines''.
 
Shouldn’t we be more ambitious about what [[artificial intelligence]] could do for ''us''? Isn’t “giving you the bare minimum you’ll take to keep stringing you along” a bit ''underwhelming''? Isn’t using it to “dumb us down” a bit, well, ''dumb''?
 
===Digression: Darwin’s profligate idea===
The theory of [[evolution by natural selection]] really ''is'' magic: it gives a comprehensive account of the origin of life that [[Reductionism|reduces]] to a mindless, repetitive process that we can state in a short sentence:


{{Quote|In a population of organisms with individual traits whose offspring inherit those traits only with random variations, those having traits most suited to the prevailing environment will best survive and reproduce over time.<ref>This is the “ugliest man” by which, [[Nietzsche]] claims, we killed God. </ref>}}


The economy of [[Design|''design'']] in this process is staggering. The economy of ''effort in execution'' is not. Evolution is ''tremendously wasteful''. Not just in how it ''does'' adapt, but in how often it ''does not''. For every serendipitous mutation, ''there are millions and millions of duds''.


The chain of adaptations from amino acids to Lennon & McCartney may have billions of links in it, but that is a model of parsimony compared with the number of adaptations that ''didn’t'' go anywhere — that arced off into one of design space’s gazillion dead ends and just fizzled out.


[[Evolution]] isn’t directed — that is its very super-power — so it fumbles blindly around, fizzing and sparking, and only a vanishingly small proportion of mutations ever does anything ''useful''. Evolution is a random, [[stochastic]] process. It depends on aeons of time and burns unimaginable resources. Evolution solves problems by ''brute force''.
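A toy simulation makes the wastefulness vivid. This is an illustrative sketch, not a model of real biology: a single “genome”, one random mutation at a time, kept only when it happens to improve fitness. Count the duds discarded for every mutation that helps.

```python
import random

# Toy illustration of evolution as undirected brute force: random
# one-character mutations to a string, kept only when they happen to
# raise "fitness" (matching characters against a target). The target
# and alphabet are invented for illustration.

TARGET = "lennon & mccartney"
ALPHABET = "abcdefghijklmnopqrstuvwxyz &"

def fitness(s):
    """Number of positions that match the target."""
    return sum(a == b for a, b in zip(s, TARGET))

random.seed(42)
genome = "".join(random.choice(ALPHABET) for _ in TARGET)
duds = improvements = 0

while fitness(genome) < len(TARGET):
    i = random.randrange(len(genome))
    mutant = genome[:i] + random.choice(ALPHABET) + genome[i + 1:]
    if fitness(mutant) > fitness(genome):       # serendipitous mutation
        genome, improvements = mutant, improvements + 1
    else:                                       # dead end: discard
        duds += 1

print(f"{improvements} mutations kept, {duds} duds discarded")
```

Even on an eighteen-character “design space”, the keepers are outnumbered by the duds by orders of magnitude; scale that up to a phenotype and you get aeons.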


Even though it came about through undirected evolution, “natural” mammalian intelligence, whose apogee is ''homo sapiens'', ''is'' directed. In a way that DNA cannot, humans can hypothesise, remember, learn, and rule out plainly stupid ideas without having to go through the motions of trying them.


All mammals can do this to a degree; even retrievers.<ref>Actually, come to think of it, Lucille, bless her beautiful soul, didn’t seem to do that very often. But still.</ref> Humans happen to be particularly good at it. It took three and a half billion years to get from amino acid to the wheel, but only 6,000 to get from the wheel to the Nvidia RTX 4090 GPU.
 
Now. [[Large language model]]<nowiki/>s are, like evolution, a “brute force”, ''undirected'' method. They can get better by chomping through yet more data, faster, in more parallel instances, with batteries of server farms in air-cooled warehouses full of lightning-fast multi-core graphics processors. But this is already starting to get harder. We are bumping up against computational limits, as Moore’s law conks out, and environmental consequences, as the planet does.
 
For the longest time, “computing power” has been the cheap, efficient option. That is ceasing to be true. More processing is not a zero-cost option. We will start to see the opportunity cost of devoting all these resources to something that, at the moment, creates diverting sophomore mashups we don’t need.<ref>We would do well to remember Arthur C. Clarke’s law here. The parallel processing power an LLM requires is already massive. It may be that the cost of expanding it in the way envisioned would be unfeasibly huge — in which case the original “business case” for [[technological redundancy]] falls away. See also the [[simulation hypothesis]]: it may be that the most efficient way of simulating the universe with sufficient granularity to support the simulation hypothesis is ''to actually build and run a universe'' in which case, the hypothesis fails.</ref>
 
We have, lying unused around us, petabytes of human ingenuity, voluntarily donated into the indifferent maw of the internet. ''We are not lacking content''. Surely the best way of using these brilliant new machines is to harness what we already have. The one thing homo sapiens ''doesn’t'' need is ''more unremarkable information''.


====LibraryThing as a model====
So, how about using AI to better exploit our existing natural intelligence, rather than imitating or superseding it? Could we, instead, create [[System effect|system effects]] to ''extend'' the long tail?
 
It isn’t hard to imagine how this might work. A rudimentary version exists in {{Plainlink|https://www.librarything.com/|LibraryThing}}’s recommendation engine. It isn’t new or wildly clever — as far as I know, LibraryThing doesn’t use AI. Each user lists, by ISBN, the books in her personal library. The LibraryThing algorithm will tell you with some degree of confidence, based on combined metadata, whether it thinks you will like any other book. Most powerfully, it will compare all the virtual “libraries” on the site and return the user libraries most similar to yours. The attraction of this is not the books you have in common, but the ones you ''don’t''.
 
Browsing doppelganger libraries is like wandering around a library of books you have never read, but which are designed to appeal specifically to you.
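The mechanic is simple enough to sketch. All user names and ISBNs below are invented, and LibraryThing’s actual algorithm is doubtless more sophisticated, but the shape is: score every other library by its overlap with yours, pick the closest, and surface the books it has that you ''don’t''.

```python
# Bare-bones sketch of the library-matching idea described above:
# each user's library is a set of ISBNs; find the most similar other
# library by Jaccard overlap; recommend the non-matched books.
# All user names and ISBNs are invented for illustration.

def jaccard(a, b):
    """Overlap between two sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

libraries = {
    "you":   {"9780140449136", "9780141439846", "9780199537150"},
    "alice": {"9780140449136", "9780141439846", "9780156012195"},
    "bob":   {"9780316769488"},
}

def doppelganger_picks(user, libraries):
    mine = libraries[user]
    others = {u: lib for u, lib in libraries.items() if u != user}
    nearest = max(others, key=lambda u: jaccard(mine, others[u]))
    # The attraction is not the books you share, but the ones you don't.
    return nearest, sorted(others[nearest] - mine)

print(doppelganger_picks("you", libraries))  # → ('alice', ['9780156012195'])
```

Swap the similarity measure for a learned one and the same skeleton becomes a long-tail extender rather than a fat-head funnel.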


Note how this role — seeking out delightful new human creativity — satisfies our criteria for the [[division of labour]]: it is quite beyond the capability of any group of humans, and it would not devalue, much less usurp, genuine human intellectual capacity. Rather, it would ''empower'' it.


Note also the [[system effect]] it would have: if we held out hope that algorithms were pushing us down the long tail of human creativity, and not shepherding people towards its monetisable head, this would incentivise us all to create more unique and idiosyncratic things.


It also would have the system effect of distributing wealth and information — that is, [[strength]], not [[Power structure|power]] — ''down'' the curve of human diversity, rather than concentrating it at the top.


==== Division of labour, redux ====
So, about that “[[division of labour]]”. When it comes to mechanical tasks of the “body”, [[Turing machine]]s scale well, and humans scale badly.


===== Body scaling =====
“Scaling”, when we are talking about computational tasks, means doing them over and over again, in series or parallel, quickly and accurately. Each operation can be identical; their combined effect astronomical. Nvidia graphics chips are so good for AI because they can do 25,000 ''trillion'' basic operations per second. ''Of course'' machines are good at this: this is why we build them. They are digital: they preserve information indefinitely, however many processors we use, with almost no loss of fidelity.


You could try to use networked humans to replicate a [[Turing machine]], but the results would be slow, costly, disappointing and the humans would not enjoy it.<ref>{{author|Cixin Liu}} runs exactly this thought experiment in his ''Three Body Problem'' trilogy.</ref> With each touch the [[signal-to-noise ratio]] degrades: this is the premise for the parlour game “Chinese Whispers”.


Information technology has done a fabulous job of alleviating boredom, by filling our empty moments with a 5-inch rectangle of gossip, outrage and titillation, but it has done little to nourish the intellect. This is a function of the choices me have made. They, in turn are informed by the interests. Maybe we are missing something by never being bored. Maybe that is a clear space where imagination can run wild. Perhaps being fearful of boredom, by constantly distracting ourselves from our own existential anguish, we make ourselves vulnerable to this two-dimensional online world.
A game of Chinese Whispers among a group of [[Turing machine]]s would make for a grim evening.  
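The premise is easy to demonstrate with a toy simulation (illustrative Python only; the message, error rate and hop count are invented for the demo): each analogue relay resamples characters with some probability, while a digital copy is bit-perfect after any number of hops.

```python
import random

def noisy_relay(message, p_error, hops, rng):
    """Pass a message down a chain of analogue repeaters.
    Each hop independently resamples each character with
    probability p_error: the Chinese Whispers premise."""
    alphabet = "abcdefghijklmnopqrstuvwxyz "
    for _ in range(hops):
        message = "".join(
            rng.choice(alphabet) if rng.random() < p_error else ch
            for ch in message
        )
    return message

def fidelity(original, received):
    """Fraction of characters that survived intact."""
    return sum(a == b for a, b in zip(original, received)) / len(original)

rng = random.Random(0)
msg = "send reinforcements we are going to advance"
garbled = noisy_relay(msg, p_error=0.05, hops=20, rng=rng)
digital = msg  # a Turing machine's copy: bit-perfect at any number of hops
print(fidelity(msg, garbled), fidelity(msg, digital))
```

Twenty hops at a five per cent error rate degrade a large fraction of the message; the digital copy scores 1.0 regardless.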


==== Division of labour, redux ====
In any case, you could not assign a human, or any number of humans, the task of “cataloguing the entire canon of human creative output”: this is quite beyond their theoretical, never mind practical, ability. With a machine, at least in concept, you could.<ref>Though the infinite fidelity of machines is overstated, as I discovered when trying to find the etymology of the word “[[satisfice]]”. Its modern usage was coined by Herbert Simon in a paper in 1956, but the Google ngram suggests its usage began to tick up in the late 1940s. On further examination, the records transpired to be optical character recognition errors. So there is a large part of the human oeuvre — the pre-digital bit that has had to be digitised — that does suffer from analogue copy errors.</ref>
About that “[[division of labour]]”. When it comes to mechanical tasks, machines — especially [[Turing machine]]s — scale very well, while humans scale very badly: identical operations, in series or in parallel, quickly and accurately, with almost no loss of fidelity, however many processors we use.
 
===== Mind scaling =====
“Scaling”, for imaginative tasks, is different. Here, humans scale well. We don’t want identical, digital, high-fidelity ''duplications'': ten thousand copies of ''Finnegans Wake'' will contribute no more to the human canon than does one.<ref>Or possibly, even ''none'': Wikipedia tells us that, “due to its linguistic experiments, stream of consciousness writing style, literary allusions, free dream associations, and abandonment of narrative conventions, ''Finnegans Wake'' has been agreed to be a work largely unread by the general public.”</ref> Multiple humans doing “mind stuff” contribute precisely that idiosyncrasy, variability and difference in perspective that generates the wisdom, or brutality, of crowds. A community of readers independently parse, analyse, explain, narratise, contextualise, extend, criticise, extrapolate, filter, amend, correct, and improvise the information ''and each others’ reactions to it''.
 
This community of expertise is what [[Sam Bankman-Fried]] misses in dismissing Shakespeare’s damning “[[Bayesian prior|Bayesian priors]]”. However handsome William Shakespeare’s personal contribution was to the Shakespeare canon — the text of the actual folio<ref>What counts as “canon” in Shakespeare’s own written work is a matter of debate. There are different, inconsistent editions. Without Shakespeare to tell us, we must decide for ourselves. Even were Shakespeare able to tell us, we could ''still'' decide for ourselves.</ref> — it is finite, small and historic. It was complete by 1616 and hasn’t been changed since. The rest of the body of work we think of as “Shakespeare” is a continually growing body of interpretations, editions, performances, literary criticisms, essays, adaptations and its (and their) infusion into the vernacular. This ''vastly outweighs the bard’s actual folio''. 
 
This kind of “communal mind scaling” creates its own intellectual energy and momentum from a small, wondrous seed planted and nurtured four hundred years ago.<ref>On this view Shakespeare is rather like a “prime mover” who created a universe but has left it to develop according to its own devices.</ref> 
 
The wisdom of the crowd thus shapes itself: community consensus has a directed intelligence all of its own. It is not [[utopia|magically benign]], of course, as [[Sam Bankman-Fried]] might tell us, having been on both ends of it.<ref>See also Lindy Chamberlain, Peter Ellis and the sub-postmasters wrongly convicted in the Post Office Horizon debâcle.</ref>
===Bayesian priors and the canon of ChatGPT===
The “[[Bayesian priors]]” argument which fails for Shakespeare also fails for a [[large language model]].
 
Just as most of the intellectual energy needed to render a text into the three-dimensional [[Metaphor|metaphorical]] universe we know as ''King Lear'' comes from the surrounding cultural milieu, so it does with the output of an LLM. The source, after all, is entirely drawn from the human canon. A model trained only on randomly assembled ASCII characters would return only randomly assembled ASCII characters.
 
But what if the material is not random? What if the model augments its training data with its own output? Might that create an apocalyptic feedback loop, whereby LLMs bootstrap themselves into some kind of hyper-intelligent super-language, beyond mortal cognitive capacity, whence the machines might dominate human discourse?
 
Are we inadvertently seeding ''Skynet''?
 
Just look at what happened with [[Alpha Go]]. Its successor, AlphaGo Zero, didn’t require ''any'' human training data: it learned by playing millions of games against itself. Programmers just fed it the rules, switched it on and, with indecent brevity, it worked everything out, walloping the version that had already beaten the game’s reigning world champion.
 
Could LLMs do that? This fear is not new:{{Quote|{{rice pudding and income tax}}}}
 
But brute-forcing outcomes in fully bounded, [[Zero-sum game|zero-sum]] environments with simple, fixed rules — in the jargon of [[Complexity|complexity theory]], a “tame” environment — is what machines are designed to do. We should not be surprised that they are good at this, nor that humans are bad at it. ''This is exactly where we would expect a Turing machine to excel''.
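Noughts and crosses is the canonical “tame” environment: fully bounded, zero-sum, simple fixed rules. A few lines of illustrative Python can brute-force the entire game tree, something no human would attempt and no machine finds difficult:

```python
def winner(b):
    """Return "X" or "O" if that player has a completed line, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]
    for i, j, k in lines:
        if b[i] and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player):
    """Exhaustively search every legal continuation.
    +1 means X wins with perfect play, -1 means O wins, 0 a draw."""
    w = winner(b)
    if w == "X":
        return 1
    if w == "O":
        return -1
    if all(b):  # board full, no winner: draw
        return 0
    best = None
    for m in (i for i, c in enumerate(b) if not c):
        b[m] = player
        v = minimax(b, "O" if player == "X" else "X")
        b[m] = ""  # undo the move
        if best is None:
            best = v
        else:
            best = max(best, v) if player == "X" else min(best, v)
    return best

print(minimax([""] * 9, "X"))  # 0: perfect play is a draw
```

The result, 0, confirms what every schoolchild discovers empirically: with perfect play the game is a draw. Chess and Go are the same kind of task, only vastly larger.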
 
By contrast, LLMs must operate in complex, “[[wicked]]” environments. Here conditions are unbounded, ambiguous, inchoate and impermanent. ''This is where humans excel''. Here, the whole environment, and everything in it, continually changes. The components interact with each other in [[Non-linear interaction|non-linear]] ways. The landscape dances. Imagination here is an advantage: brute force mathematical computation won’t do.{{Quote|Think how hard physics would be if particles could think.
:— Murray Gell-Mann}}
 
An LLM works by compositing a synthetic output from a massive corpus of pre-existing text. It must pattern-match against well-formed human language. Degrading its training data with its own output will progressively degrade that output. Such “model collapse” is an observed effect.<ref>https://www.techtarget.com/whatis/feature/Model-collapse-explained-How-synthetic-training-data-breaks-AI</ref> LLMs will only work for humans if they are fed human-generated content. [[Alpha Go]] is different.
{{Quote|{{AlphaGo v LLM}}}}
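The collapse dynamic can be caricatured in a few lines (an illustrative sketch only; the token corpus, its Zipf-ish weights and the sample sizes are all invented): treat a “model” as nothing more than the empirical distribution of its training tokens, retrain each generation on the previous generation’s output, and watch the tail of the distribution, and with it lexical diversity, get sampled away.

```python
import random
from collections import Counter

def train(corpus):
    """"Train" a toy model: its entire knowledge is the empirical
    distribution of tokens in its training corpus."""
    counts = Counter(corpus)
    tokens = list(counts)
    weights = [counts[t] for t in tokens]
    return tokens, weights

def generate(model, n, rng):
    """Sample n tokens from the model's learned distribution."""
    tokens, weights = model
    return rng.choices(tokens, weights=weights, k=n)

def collapse_demo(generations=10, corpus_size=2000, seed=42):
    rng = random.Random(seed)
    # A "human" corpus: 200 distinct tokens with Zipf-ish frequencies.
    corpus = [f"w{i}" for i in range(200) for _ in range(200 // (i + 1))]
    diversity = []
    for _ in range(generations):
        diversity.append(len(set(corpus)))
        model = train(corpus)
        # The next generation trains ONLY on the previous model's output:
        # rare tokens get sampled away and, once gone, can never return.
        corpus = generate(model, corpus_size, rng)
    return diversity

print(collapse_demo())  # prints a non-increasing list of vocabulary sizes
```

Once a token falls out of the corpus it can never return, so diversity is non-increasing by construction; the interesting observation is how quickly it falls.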
 
There is another contributor to the cultural milieu surrounding any text: the ''reader''. It is the reader, and her “cultural baggage”, who must make head or tail of the text. She alone determines, for her own case, whether it stands or falls. This is true however rich the cultural milieu that supports the text. We know this because the overture to ''Tristan und Isolde'' can reduce one listener to tears of joy and bore another rigid. One contrarian can see, in the Camden Cat, a true inheritor of the great blues pioneers; others might see an unremarkable busker.
 
Construing natural language, much less visuals or sound, is no matter of mere [[Symbol processing|symbol-processing]]. Humans are ''not'' [[Turing machine|Turing machines]]. A text only sparks meaning, and becomes art, in the reader’s head. This is just as true of magic — the conjurer’s skill is to misdirect the audience into ''imagining something that isn’t there.'' The audience supplies the magic.
 
The same goes for an LLM — it is simply ''digital'' magic. We imbue what an LLM generates with meaning. ''We are doing the heavy lifting''.
 
===Coda===
{{Quote|{{abgrund}}
:—Nietzsche}}
Man, this got out of control.
 
So is this just Desperate-Dan, last-stand pattern-matching from an obsolete model, staring forlornly into the abyss? Being told to accept his obsolescence is an occupational hazard for the JC, so no change there.
 
But if [[This time it’s different|this really is the time that is different]], something about it feels underwhelming. If ''this'' is the hill we die on, we’ve let ourselves down.
 
''Don’t be suckered by parlour tricks.'' Don’t redraw our success criteria to suit the machines. To reconfigure how we judge each other, to make it easier for technology to do it at scale, is not to become obsolete. It is to surrender.


Humans can’t help doing their own sort of pattern-matching. There are common literary tropes where our creations overwhelm us — ''Frankenstein'', ''[[2001: A Space Odyssey]]'', ''Blade Runner'', ''Terminator'', ''Jurassic Park'', ''The Matrix''. They are cautionary tales. They are deep in the cultural weft, and we are inclined to see them everywhere. The actual quotidian progress of technology has a habit of confounding science fiction and being a bit more boring.  


LLMs will certainly change things, but we’re not fit for battery juice just yet.


Buck up, friends: there’s work to do.


===A real challenger bank===
Robotic people do not generally have a rogue streak. They are not loose cannons. They do not call “bullshit”. They do not question their orders. They do not answer back.


And so we see: financial services organisations do not value people who do. They value the ''fearful''. They elevate the rule-followers. They distrust “human magic”, which they characterise as human ''weakness''. They find it in the wreckage of [[Enron]], or [[Financial disasters roll of honour|Kerviel]], or [[Madoff]], or [[Archegos]]. Bad apples. [[Operator error]]. They emphasise this human stain over the failure of management that inevitably enabled it: people who did not play by the rules, rather than the systems of control that allowed, or even obliged, them to.


They run [[post mortem]]s: with the rear-facing forensic weaponry of [[internal audit]] and [[external counsel]], they reconstruct the [[fog of war]] and build a narrative around it. The solution: ''more systems. More control. More elaborate algorithms. More rigid playbooks. The object of the exercise: eliminate the chance of human error. Relocate everything to process.''
Yet the accidents keep coming. Our [[financial crashes roll of honour]] refers. They happen with the same frequency, and severity, notwithstanding the additional sedimentary layers of machinery we develop to stop them.  


Hypothesis: these disasters are not prevented by high-modernism. They are a symptom of it. They are its ''products''.


===Zero-day vulnerabilities===
They are over-populated, too, with low-quality staff. Citigroup claims to employ 70,000 technology experts worldwide, and, well, [[Citigroup v Brigade Capital Management|Revlon]].  


It ''is'' hard to imagine Amazon accidentally wiring half a billion dollars to customers because of crappy software and playbook-following school leavers in a call centre in Bucharest. (Can we imagine Amazon using call centres in Bucharest? Absolutely.) Banks have a first-mover ''dis''advantage here, as most didn’t start thinking of themselves as tech companies until the last twenty years, by which stage their tech infrastructure was intractably shot.


But the banks have had decades to recover, and if they didn’t then, they definitely do now think of themselves as tech companies. Bits of the banking system — high-frequency trading algorithms, blockchain and data analytics used on global strategies — are as sophisticated as anything Apple can design.<ref>That said, the signal processing capability of Apple’s consumer music software is pretty impressive.</ref>


=== Yes, bank staff are rubbish===
Now, to lionise the human spirit ''in the abstract'', as we do, is not to say we should sanctify bank employees as a class ''in the particular''. The JC has spent a quarter century among them. They — ''we'' — may be unusually well-paid, for all the difference we make to the median life on planet Earth, but we are not unusually gifted or intelligent. Come on, guys: [[backtesting]]. [[Debt value adjustments]]. [[David Viniar|“Six sigma” events several days in a row]]. [[Madoff]].
 
It is an ongoing marvel how commercial organisations can be so reliably profitable given the median calibre of those they employ to steer themselves. Sure, our levels of formal accreditation may be unprecedented, but levels of “[[metis]]” — which can’t be had from the academy — have stayed where they are. Market and organisational [[system]]s tend to be configured to ''[[mediocrity drift|ensure]]'' reversion to mediocrity over time.<ref>See {{br|The Peter Principle}} and {{br|Parkinson’s Law}} for classic studies.</ref>
 
There is some irony here. As western economies shifted from the production of ''things'' to the delivery of ''services'' over the last half-century, the proportion of their workforce in “white collar work” has exploded. There are more people in the UK today than there were in 1970, and almost none of them now work down the mines, on the production line or even in the menswear department at Grace Brothers, as we are given to believe they did a generation ago.  


All kinds of occupations that scarcely existed when our parents were young have emerged, evolved and declared themselves professions. The proportion of jobs requiring, ''de facto'', a university degree (the modern ''lite'' professional qualification) has grown. The number of universities has swelled as polytechnics rebadged themselves. This, too, feels like a [[system effect]] of the [[modernist]] orthodoxy: if we can only assess people by reference to formal criteria, then industries devoted to contriving and dishing out those criteria are sure to flourish. The self-interests of different constituencies in the system contrive to entrench each other: this is how [[feedback loop]]s work.


As technologies expand and encroach, taming unbroken landscapes and disrupting already-cultivated ones, let us take a moment to salute the human ingenuity it takes to defy the march of the machines. For every technology that “solves” a formal pursuit there arises a second order of required white-collar oversight. [[ESG]] specialists have evolved to opine on and steward our dashboards and marshal criteria for our environmental and social governance strategies — strategies we were, until now, happy enough to leave to the guidance of Adam Smith’s invisible hand. [[Blockchain as a service]] has emerged as a solution for those who believe in the power of disintermediated networks but want someone else to do the heavy lifting for them. AI compliance officers and prompt engineers steward our relationship with machines supposedly so clever they should need no hand-holding.


Other feedback loops emerge to counteract them. The high modernist programme can measure by any observable criteria, not just formal qualifications. Its favourite is [[cost]].<ref>The great conundrum posed by [[Taiichi Ohno|Ohno-sensei]]’s [[Toyota Production System]] is why prioritise [[cost]] over [[waste]]? Because cost is numerical and can easily be measured by the dullest book-keeper. Knowing what is [[waste]]ful in a process requires analysis and understanding of the system, and so cannot easily be measured. So we eliminate ''cost'', as a lazy proxy. It makes you wonder why executives are paid so well.</ref> The basic proposition is, “Okay, we need a human resources operation. I can’t gainsay that. But it need not be housed in London, nor staffed by Oxbridge grads, if Romanian school leavers, supervised by an alumnus of the Plovdiv Technical Institute with a diploma in personnel management, will do.”