Template:M intro technology rumours of our demise

And, here: have some more!   

''We don’t need more content''. What we need is ''dross management'' and ''needle-from-haystack'' extraction. Machines ought to be really good at this.

There are plenty of easy, dreary, mechanical applications to which machines might profitably be put: remembering where you put the car keys, weeding out fake news, managing browser cookies, or simply ''curating'' the great corpus of human creation, rather than ''ripping it off''.

===== Digression: Nietzsche, Blake and the Camden Cat =====
(I know of at least one: the [[Camden Cat]], who for thirty years has plied his trade with a beat-up acoustic guitar on the Northern Line, and once wrote and recorded one of the great rockabilly singles of all time. It remains bafflingly unacknowledged. Here it is, on {{Plainlink|1=https://soundcloud.com/thecamdencats/you-carry-on?si=24ececd75c0540faafd470d822971ab7|2=SoundCloud}}.)  

=== If AI is a cheapest-to-deliver strategy you’re doing it wrong ===
{{quote|
{{D|Cheapest-to-deliver|/ˈʧiːpɪst tuː dɪˈlɪvə/|adj}}
Of the range of possible ways of discharging your [[contract|contractual obligation]] to the letter, the one that will cost you the least and irritate your customer the most should you choose it.}}


Imagine having personal [[large language model]]s at our disposal that could pattern-match against our individual reading and listening histories, our engineered prompts, our instructions and the recommendations of like-minded readers.   


Our LLM would search through the billions of existing books, plays, films, recordings and artworks, known and unknown, that comprise the human ''oeuvre'' but, instead of making its own mashups, it would retrieve existing works that its patterns said would specifically appeal to us.

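
The idea is concrete enough to sketch. The following toy Python is invented wholesale for illustration: a bag-of-words overlap stands in for a real model’s learned embeddings, and the catalogue is made up. The shape is the point: rank ''existing'' works against a taste profile rather than generating new ones.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words frequency vector. A real personal
    # LLM would use learned dense embeddings instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)          # Counter returns 0 for missing keys
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(reading_history: list[str], catalogue: dict[str, str], k: int = 1) -> list[str]:
    # Match the user's taste profile against *existing* works:
    # retrieval, not generation.
    profile = embed(" ".join(reading_history))
    ranked = sorted(catalogue,
                    key=lambda title: cosine(profile, embed(catalogue[title])),
                    reverse=True)
    return ranked[:k]

# Entirely fictitious catalogue and history, for demonstration only.
catalogue = {
    "Rockabilly Revival": "rockabilly guitar busker blues train",
    "Quantum Cookery": "recipes entanglement kitchen physics",
}
history = ["blues guitar on the underground", "rockabilly singles"]
print(recommend(history, catalogue))  # → ['Rockabilly Revival']
```

A real system would swap the toy ''embed'' for dense embeddings and nearest-neighbour search, but the contract (retrieve, don’t generate) is unchanged.
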
The wisdom of the crowd thus shapes itself: community consensus has a directed intelligence all of its own. It is not [[utopia|magically benign]], of course, as [[Sam Bankman-Fried]] might tell us, having been on both ends of it.<ref>See also Lindy Chamberlain, Peter Ellis and the sub-postmasters wrongly convicted in the Horizon debâcle.</ref>
===Bayesian priors and the canon of ChatGPT===
The “[[Bayesian priors]]” argument which fails for Shakespeare also fails for a [[large language model]].  


Just as most of the intellectual energy needed to render a text into the three-dimensional [[Metaphor|metaphorical]] universe we know as ''King Lear'' comes from the surrounding cultural milieu, so it does with the output of an LLM. The source, after all, is entirely drawn from the human canon. A model trained only on randomly assembled ASCII characters would return only randomly assembled ASCII characters.
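
The claim is easy to demonstrate in miniature with a character-level bigram model, vastly simpler than an LLM but subject to the same principle. Trained on pure noise, it can emit only noise:

```python
import random
import string
from collections import defaultdict

def train(corpus: str) -> dict:
    # Character-level bigram model: for each character, record the
    # characters observed to follow it in the training corpus.
    model = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        model[a].append(b)
    return model

def generate(model: dict, seed: str, n: int) -> str:
    # Sample a continuation, one character at a time, from the
    # learned successor distribution.
    out = seed
    for _ in range(n):
        choices = model.get(out[-1])
        if not choices:
            break
        out += random.choice(choices)
    return out

random.seed(0)
# Training data: 10,000 randomly assembled ASCII characters.
noise = "".join(random.choice(string.printable) for _ in range(10_000))
model = train(noise)
sample = generate(model, noise[0], 80)
# The sample is itself just noise: the model can only hand back
# the distribution it was trained on.
```

Train the same model on English prose and it returns English-shaped gibberish; it has nothing to give but what it was given.
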


But what if the material is not random? What if the model augments its training data with its own output? Might that create an apocalyptic feedback loop, whereby LLMs bootstrap themselves into some kind of hyper-intelligent super-language, beyond mortal cognitive capacity, whence the machines might dominate human discourse?


Are we inadvertently seeding ''Skynet''?

Just look at what happened with [[Alpha Go]]. It didn’t require ''any'' human training data: it learned by playing millions of games against itself. Programmers just fed it the rules, switched it on and, with indecent brevity, it worked everything out and walloped the game’s reigning grandmaster.
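
For flavour, here is self-play in miniature. This is ''not'' AlphaGo’s actual method (that combined deep networks with Monte Carlo tree search); it is a tabular stand-in on a toy take-away game: take one to three sticks, and whoever takes the last stick wins. The program is given only the rules and learns by playing itself:

```python
import random
from collections import defaultdict

ACTIONS = (1, 2, 3)
Q = defaultdict(float)  # Q[(sticks_left, action)] -> estimated value

def choose(n: int, eps: float) -> int:
    # Epsilon-greedy: mostly pick the best-known move, sometimes explore.
    legal = [a for a in ACTIONS if a <= n]
    if random.random() < eps:
        return random.choice(legal)
    return max(legal, key=lambda a: Q[(n, a)])

def train(episodes: int = 30_000, alpha: float = 0.5, eps: float = 0.2) -> None:
    for _ in range(episodes):
        n, moves = 21, []
        while n > 0:
            a = choose(n, eps)
            moves.append((n, a))
            n -= a
        # The player who took the last stick won. Credit alternates back
        # up the move list: +1 for the winner's moves, -1 for the loser's.
        reward = 1.0
        for state, action in reversed(moves):
            Q[(state, action)] += alpha * (reward - Q[(state, action)])
            reward = -reward

random.seed(1)
train()
policy = lambda n: max([a for a in ACTIONS if a <= n], key=lambda a: Q[(n, a)])
print(policy(6), policy(7), policy(9))  # the known-optimal moves are 2, 3, 1 (take n % 4)
```

After a few tens of thousands of self-played games, the greedy policy rediscovers the classical winning strategy: always leave your opponent a multiple of four sticks.
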


Could LLMs do that? This fear is not new:{{Quote|{{rice pudding and income tax}}}}


But brute-forcing outcomes in fully bounded, [[Zero-sum game|zero-sum]] environments with simple, fixed rules — in the jargon of [[Complexity|complexity theory]], a “tame” environment — is what machines are designed to do. We should not be surprised that they are good at this, nor that humans are bad at it. ''This is exactly where we would expect a Turing machine to excel''.  
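
To make the point concrete: a few lines suffice to solve a fully bounded, zero-sum game outright. Tic-tac-toe, sketched below, falls to plain minimax with no learning at all:

```python
from functools import lru_cache

# Exhaustive minimax over the complete tic-tac-toe game tree: fixed
# rules, a finite state space -- precisely the habitat of a Turing
# machine. Boards are 9-character strings of "X", "O" and " ".

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board: str):
    for i, j, k in WIN_LINES:
        if board[i] != " " and board[i] == board[j] == board[k]:
            return board[i]
    return None

@lru_cache(maxsize=None)
def minimax(board: str, player: str) -> int:
    # +1: X can force a win; -1: O can force a win; 0: draw.
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    if " " not in board:
        return 0
    nxt = "O" if player == "X" else "X"
    scores = [minimax(board[:i] + player + board[i + 1:], nxt)
              for i, c in enumerate(board) if c == " "]
    return max(scores) if player == "X" else min(scores)

print(minimax(" " * 9, "X"))  # perfect play from the empty board: 0 (a draw)
```

No imagination required: the machine simply enumerates every reachable position, which is precisely what no human would, or could, bother to do.
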


By contrast, LLMs must operate in complex, “[[wicked]]” environments. Here conditions are unbounded, ambiguous, inchoate and impermanent. ''This is where humans excel''. Here, the whole environment, and everything in it, continually changes. The components interact with each other in [[Non-linear interaction|non-linear]] ways. The landscape dances. Imagination here is an advantage: brute force mathematical computation won’t do.{{Quote|Think how hard physics would be if particles could think.
:— Murray Gell-Mann}}


{{Quote|{{AlphaGo v LLM}}}}


There is another contributor to the cultural milieu surrounding any text: the ''reader''. It is the reader, and her “cultural baggage”, who must make head or tail of the text. She alone determines, for her own case, whether it stands or falls. This is true however rich the cultural milieu that supports the text. We know this because the overture from ''Tristan und Isolde'' can reduce different listeners to tears of joy or boredom. One contrarian can see, in the Camden Cat, a true inheritor of the great blues pioneers; others might see an unremarkable busker.

Construing natural language, much less visuals or sound, is no matter of mere [[Symbol processing|symbol-processing]]. Humans are ''not'' [[Turing machine|Turing machines]]. A text only sparks meaning, and becomes art, in the reader’s head. This is just as true of magic — the conjurer’s skill is to misdirect the audience into ''imagining something that isn’t there.'' The audience supplies the magic.

The same goes for an LLM — it is simply ''digital'' magic. We imbue what an LLM generates with meaning. ''We are doing the heavy lifting''.


===Coda===
{{Quote|{{abgrund}}
:— Nietzsche}}
Man, this got out of control.  

So is this just Desperate-Dan, last-stand pattern-matching from an obsolete model, staring forlornly into the abyss? Being told to accept his obsolescence is an occupational hazard for the JC, so no change there.


But if [[This time it’s different|this really is the time that is different]], something about it feels underwhelming. If ''this'' is the hill we die on, we’ve let ourselves down.


''Don’t be suckered by parlour tricks.'' Don’t redraw our success criteria to suit the machines. To reconfigure how we judge each other to make it easier for technology to do it at scale is not to be obsolete. It is to surrender.


Humans can’t help doing their own sort of pattern-matching. There are common literary tropes where our creations overwhelm us — ''Frankenstein'', ''[[2001: A Space Odyssey]]'', ''Blade Runner'', ''Terminator'', ''Jurassic Park'', ''The Matrix''. They are cautionary tales. They are deep in the cultural weft, and we are inclined to see them everywhere. The actual quotidian progress of technology has a habit of confounding science fiction and being a bit more boring.  


LLMs will certainly change things, but we’re not fit for battery juice just yet.


Buck up, friends: there’s work to do.