Template:M intro technology rumours of our demise

And, here: have some more!   


''We don’t need more content''. What we need is ''dross management'' and ''needle-from-haystack'' extraction. Machines ought to be really good at this.
 
There are plenty of easy, dreary, mechanical applications to which machines might profitably be put: remembering where you put the car keys, weeding out fake news, managing browser cookies, or simply ''curating'' the great corpus of human creation, rather than ''ripping it off''.


===== Digression: Nietzsche, Blake and the Camden Cat =====
(I know of at least one: the [[Camden Cat]], who for thirty years has plied his trade with a beat-up acoustic guitar on the Northern Line, and once wrote and recorded one of the great rockabilly singles of all time. It remains bafflingly unacknowledged. Here it is, on {{Plainlink|1=https://soundcloud.com/thecamdencats/you-carry-on?si=24ececd75c0540faafd470d822971ab7|2=SoundCloud}}.)  


=== If AI is a cheapest-to-deliver strategy you’re doing it wrong ===
{{quote|
{{D|Cheapest-to-deliver|/ˈʧiːpɪst tuː dɪˈlɪvə/|adj}}
Of the range of possible ways of discharging your [[contract|contractual obligation]] to the letter, the one that will cost you the least and irritate your customer the most should you choose it.}}


Imagine having personal [[large language model]]s at our disposal that could pattern-match against our individual reading and listening histories, our engineered prompts, our instructions and the recommendations of like-minded readers.   


Our LLM would search through the billions of existing books, plays, films, recordings and artworks, known and unknown, that comprise the human ''oeuvre'' but, instead of making its own mashups, it would retrieve existing works that its patterns suggested would specifically appeal to us.
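
By way of a sketch only, and with an invented catalogue and reading history standing in for the real thing, this is roughly what “curation, not mashup” might look like in code, using scikit-learn’s TF-IDF similarity as a crude stand-in for whatever richer pattern-matching the model would actually do:

<syntaxhighlight lang="python">
# Toy sketch of retrieval-over-generation: given a reader's history, rank
# existing works by similarity rather than synthesising anything new. The
# catalogue and the reading history below are invented for the example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

catalogue = {
    "King Lear": "an ageing king divides his kingdom and reaps the whirlwind",
    "You Carry On": "rockabilly single recorded by a busker on the Northern Line",
    "Tristan und Isolde": "doomed lovers and an unresolved chord, tears of joy or boredom",
}
reading_history = "home-made rockabilly and blues records cut far from any major label"

vectoriser = TfidfVectorizer()
matrix = vectoriser.fit_transform(list(catalogue.values()) + [reading_history])
work_vectors, reader_vector = matrix[:len(catalogue)], matrix[len(catalogue):]
scores = cosine_similarity(reader_vector, work_vectors).ravel()

# Highest-scoring existing work wins: curation, not mashup.
for title, score in sorted(zip(catalogue, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {title}")
</syntaxhighlight>

The point is not the scoring method, which is crude, but the shape of the thing: rank and retrieve what already exists, rather than generate more.
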
The wisdom of the crowd thus shapes itself: community consensus has a directed intelligence all of its own. It is not [[utopia|magically benign]], of course, as [[Sam Bankman-Fried]] might tell us, having been on both ends of it.<ref>See also Lindy Chamberlain, Peter Ellis and the sub-postmasters wrongly convicted in the Horizon debâcle.</ref>
===Bayesian priors and the canon of ChatGPT===
The “[[Bayesian priors]]” argument which fails for Shakespeare also fails for a [[large language model]].  


Just as most of the intellectual energy needed to render a text into the three-dimensional [[Metaphor|metaphorical]] universe we know as ''King Lear'' comes from the surrounding cultural milieu, so it does with the output of an LLM. The source, after all, is entirely drawn from the human canon. A model trained only on randomly assembled ASCII characters would return only randomly assembled ASCII characters.


But what if the material is not random? What if the model augments its training data with its own output? Might that create an apocalyptic feedback loop, whereby LLMs bootstrap themselves into some kind of hyper-intelligent super-language, beyond mortal cognitive capacity, whence the machines might dominate human discourse?


Are we inadvertently seeding ''Skynet''?


Just look at what happened with [[Alpha Go]]. It didn’t require ''any'' human training data: it learned by playing millions of games against itself. Programmers just fed it the rules, switched it on and, with indecent brevity, it worked everything out and walloped the game’s reigning grandmaster.


Could LLMs do that? This fear is not new:{{Quote|{{rice pudding and income tax}}}}


But brute-forcing outcomes in fully bounded, [[Zero-sum game|zero-sum]] environments with simple, fixed rules — in the jargon of [[Complexity|complexity theory]], “tame” environments — is what machines are designed to do. We should not be surprised that they are good at this, nor that humans are bad at it. ''This is exactly where we would expect a Turing machine to excel''.
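
To labour the point with a deliberately trivial sketch (an illustration, not anyone’s production code): a few lines of Python exhaustively solve a toy bounded game, single-heap Nim, by nothing cleverer than enumerating every position.

<syntaxhighlight lang="python">
# Brute-forcing a fully bounded, fixed-rule, zero-sum game: single-heap Nim.
# Each player takes 1, 2 or 3 objects; whoever takes the last object wins.
# A position is a win for the player to move if some move leaves the opponent lost.
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(heap: int) -> bool:
    """True if the player to move can force a win from this heap size."""
    return any(not wins(heap - take) for take in (1, 2, 3) if take <= heap)

for heap in range(1, 13):
    print(f"heap of {heap:2d}: {'win' if wins(heap) else 'lose'} for the player to move")
</syntaxhighlight>

Every position can be visited, so the machine simply visits them all; the losing positions fall out as the multiples of four. No such enumeration is on offer where the rules themselves will not stay still.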


By contrast, LLMs must operate in complex, [[wicked]] environments. Here conditions are unbounded, ambiguous, inchoate and impermanent. ''This is where humans excel''. Here, the whole environment, and everything in it, continually changes. The components interact with each other in [[Non-linear interaction|non-linear]] ways. The landscape dances. Imagination here is an advantage: brute-force mathematical computation won’t do.{{Quote|Think how hard physics would be if particles could think.
:— Murray Gell-Mann}}


An LLM works by compositing a synthetic output from a massive database of pre-existing text. It must pattern-match against well-formed human language. Degrading its training data with its own output will progressively degrade its output. Such “model collapse” is an observed effect.<ref>https://www.techtarget.com/whatis/feature/Model-collapse-explained-How-synthetic-training-data-breaks-AI</ref> LLMs will only work for humans if they’re fed human-generated content. [[Alpha Go]] is different.
{{Quote|{{AlphaGo v LLM}}}}
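
For the curious, here is a toy numerical sketch of that “model collapse” feedback loop, borrowing the fit-a-Gaussian-to-your-own-samples set-up often used to illustrate the effect. The numbers are illustrative only:

<syntaxhighlight lang="python">
# Toy sketch of "model collapse": fit a distribution to a corpus, replace the
# corpus with the model's own samples, refit, and repeat. The stand-in "corpus"
# is just draws from a Gaussian; no real model or text is involved.
import numpy as np

rng = np.random.default_rng(0)
corpus = rng.normal(loc=0.0, scale=1.0, size=50)   # stand-in for human-made work

for generation in range(201):
    mu, sigma = corpus.mean(), corpus.std()        # "train" on the current corpus
    if generation % 20 == 0:
        print(f"generation {generation:3d}: sigma = {sigma:.4f}")
    corpus = rng.normal(mu, sigma, size=50)        # feed the model its own output
</syntaxhighlight>

Run it and the fitted spread typically dwindles away, generation by generation: the model ends up confidently reproducing a shrivelled caricature of the corpus it started with.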


There is another contributor to the cultural milieu surrounding any text: the ''reader''. It is the reader, and her “cultural baggage”, who must make head or tail of the text. She alone determines, for her own case, whether it stands or falls. This is true however rich the cultural milieu that supports the text. We know this because the overture from ''Tristan und Isolde'' can reduce different listeners to tears of joy or boredom. One contrarian can see, in the Camden Cat, a true inheritor of the great blues pioneers; others might see an unremarkable busker.


Construing natural language, much less visuals or sound, is no matter of mere [[Symbol processing|symbol-processing]]. Humans are ''not'' [[Turing machine|Turing machines]]. A text only sparks meaning, and becomes art, in the reader’s head. This is just as true of magic — the conjurer’s skill is to misdirect the audience into ''imagining something that isn’t there.'' The audience supplies the magic.
 
The same goes for an LLM — it is simply ''digital'' magic. We imbue what an LLM generates with meaning. ''We are doing the heavy lifting''.
 
===Coda===
{{Quote|{{abgrund}}
:—Nietzsche}}
Man, this got out of control.


So is this just Desperate-Dan, last-stand pattern-matching from an obsolete model, staring forlornly into the abyss? Being told to accept his obsolescence is an occupational hazard for the JC, so no change there.  


But if [[This time it’s different|this really is the time that is different]], something about it feels underwhelming. If ''this'' is the hill we die on, we’ve let ourselves down.


''Don’t be suckered by parlour tricks.'' Don’t redraw our success criteria to suit the machines. To reconfigure how we judge each other to make it easier for technology to do it at scale is not to be obsolete. It is to surrender.


Humans can’t help doing their own sort of pattern-matching. There are common literary tropes where our creations overwhelm us — ''Frankenstein'', ''[[2001: A Space Odyssey]]'', ''Blade Runner'', ''Terminator'', ''Jurassic Park'', ''The Matrix''. They are cautionary tales. They are deep in the cultural weft, and we are inclined to see them everywhere. The actual quotidian progress of technology has a habit of confounding science fiction and being a bit more boring.  


LLMs will certainly change things, but we’re not fit for battery juice just yet.


Buck up, friends: there’s work to do.


===A real challenger bank===


=== Yes, bank staff are rubbish===
Now, to lionise the human spirit ''in the abstract'', as we do, is not to say we should sanctify bank employees as a class ''in the particular''. The JC has spent a quarter century among them. They — ''we'' — may be unusually well-paid, for all the difference we make to the median life on planet Earth, but we are not unusually gifted or intelligent. Come on, guys: [[backtesting]]. [[Debt value adjustments]]. [[David Viniar|“Six sigma” events several days in a row]]. [[Madoff]].
 
It is an ongoing marvel how commercial organisations can be so reliably profitable given the median calibre of those they employ to steer them. Sure, our levels of formal accreditation may be unprecedented, but levels of “[[metis]]” — which can’t be had from the academy — have stayed where they are. Market and organisational [[system]]s tend to be configured to ''[[mediocrity drift|ensure]]'' reversion to mediocrity over time.<ref>See {{br|The Peter Principle}} and {{br|Parkinson’s Law}} for classic studies.</ref>
 
There is some irony here. As western economies shifted from the production of ''things'' to the delivery of ''services'' over the last half-century, the proportion of their workforce in “white collar work” has exploded. There are more people in the UK today than there were in 1970, and almost none of them now work down the mines, on the production line or even in the menswear department at Grace Brothers, as we are given to believe they did a generation ago.  


All kinds of occupations that scarcely existed when our parents were young have emerged, evolved, and declared themselves professions. The proportion of jobs requiring, ''de facto'', university degrees (the modern lite professional qualification) has grown. The number of universities has expanded as polytechnics rebadged themselves. This, too, feels like a [[system effect]] of the [[modernist]] orthodoxy: if we can only assess people by reference to formal criteria, then industries devoted to contriving and dishing out those criteria are sure to flourish. The self-interests of different constituencies in the system contrive to entrench each other: this is how [[feedback loop]]s work.


As technologies expand and encroach, taming unbroken landscapes and disrupting already-cultivated ones, let us take a moment to salute the human ingenuity it takes to defy the march of the machines. For every technology that “solves” a formal pursuit there arises a second order of required white collar oversight. [[ESG]] specialists have evolved to opine on and steward our dashboards and marshal criteria for our environmental and social governance strategies — strategies we were, until now, happy enough to leave to the guidance of Adam Smith’s invisible hand. [[Blockchain as a service]] has emerged as a solution for those who believe in the power of disintermediated networks but want someone else to do the heavy lifting for them. AI compliance officers and prompt engineers steward our relationship with machines supposedly so clever they should not need hand-holding.


Other feedback loops emerge to counteract them. The high modernist programme can measure by any observable criteria, not just formal qualifications. Its favourite is [[cost]].<ref>The great conundrum posed by [[Taiichi Ohno|Ohno-sensei]]’s [[Toyota Production System]] is why prioritise [[cost]] over [[waste]]? Because cost is numerical and can easily be measured by the dullest book-keeper. Knowing what is [[waste]]ful in a process requires analysis and understanding of the system, and so cannot be easily measured. So we eliminate ''cost'', as a lazy proxy. It makes you wonder why executives are paid so well.</ref> The basic proposition is, “Okay, we need a human resources operation. I can’t gainsay that. But it need not be housed in London, nor staffed by Oxbridge grads, if Romanian school leavers, supervised by an alumnus from the Plovdiv Technical Institute with a diploma in personnel management, will do.”